Robotic tuna is built by Homeland Security
posted by Keito
2012-09-22 21:23:45

No question about it... they're very good at what they do. But they don't take well to orders, especially orders to carry out inspection work in oily, dangerous, or otherwise harsh environments. Still, they're among the fastest and most maneuverable creatures on the planet, with extraordinary abilities at both high and low speeds thanks to their streamlined bodies and a finely tuned muscular, sensory, and control system.
This impressive creature is the humble tuna fish.
The Department of Homeland Security's (DHS) Science and Technology Directorate (S&T) is funding the development of an unmanned underwater vehicle designed to resemble a tuna, called the BIOSwimmer. Why the tuna? Because the tuna has a natural body framework ideal for unmanned underwater vehicles (UUVs), solving some of the propulsion and maneuverability problems that plague conventional UUVs.
Inspired by the real tuna, BIOSwimmer is a UUV designed for high maneuverability in harsh environments, with a flexible aft section and appropriately placed sets of pectoral and other fins. For those cluttered and hard-to-reach underwater places where inspection is necessary, the tuna-inspired frame is an optimal design. It can inspect the interior voids of ships, such as flooded bilges and tanks, and hard-to-reach external areas such as steerage, propulsion, and sea chests. It can also inspect and protect harbors and piers, perform area searches, and carry out other security missions.
Boston Engineering Corporation's Advanced Systems Group (ASG) in Waltham, Massachusetts, is developing the BIOSwimmer for S&T. "It's designed to support a variety of tactical missions and, with its interchangeable sensor payloads and reconfigurable operator controls, can be optimized on a per-mission basis," says the Director of ASG, Mike Rufo.
BIOSwimmer is battery-powered and designed for long-duration operation. Like other unmanned underwater vehicles, it uses an onboard computer suite for navigation, sensor processing, and communications. Its Operator Control Unit is laptop-based and provides intuitive control and simple, mission-defined versatility for the user. A unique aspect of this system is that its internal components and external sensing are designed for the challenging environment of constricted spaces and high-viscosity fluids.
"It's all about distilling the science," says David Taylor, program manager for the BIOSwimmer in S&T's Borders and Maritime Security Division. "It's called 'biomimetics.' We're using nature as a basis for design and engineering a system that works exceedingly well.
Tuna have had millions of years to develop their ability to move in the water with astounding efficiency. Hopefully we won't take that long."
9 Overlooked Technologies That Could Transform The World
posted by Keito
2012-09-18 20:29:25

We live in an era of accelerating change. Technology is changing and innovating faster than most of us can keep up with. At the same time, it's easy to get so caught up in shiny visions of the future that we fail to notice the astounding things happening in science and technology today. So the next time people ask you where the future went, tell them it's already here.
Here are nine underrated or overlooked technologies that could transform the world before you know it.
1. Cheap and fast DNA sequencing
Most of us know about DNA sequencing — but you probably don't realize just how fast and cheap it's getting. In fact, some experts suggest it's following a Moore's Law of its own. As Adrienne Burke has pointed out, the speed of genome sequencing has better than doubled every two years since 2003 — back when sequencing a genome cost $3.8 billion (the Human Genome Project). Today, thanks to advances in such things as nucleic acid chemistry and detection, a company like Life Technologies can process DNA on a semiconductor chip at a cost of $1,000 per genome, and other companies can sequence an entire genome in a single day. The implications are significant, including the advent of highly personalized medicine in which drugs are developed to treat your specific genome. Say goodbye to one-size-fits-all medicine.
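As a quick sanity check on the claim above (using only the article's own figures), the drop from roughly $3.8 billion in 2003 to roughly $1,000 per genome implies a cost decline far steeper than Moore's Law:

```python
# Back-of-the-envelope check on the "Moore's Law of sequencing" claim,
# using the article's own figures (not independently verified).
import math

cost_2003 = 3.8e9   # Human Genome Project, per the article
cost_2012 = 1_000   # Life Technologies' chip-based figure
years = 2012 - 2003

fold_reduction = cost_2003 / cost_2012   # ~3.8 million-fold cheaper
halvings = math.log2(fold_reduction)     # number of cost halvings implied
doubling_time = years / halvings         # years per halving

print(f"{fold_reduction:,.0f}-fold cheaper")
print(f"{halvings:.1f} halvings in {years} years")
print(f"cost halved roughly every {doubling_time:.2f} years")
```

On these numbers the cost halved roughly every five months, versus the two-year cadence of classic Moore's Law, which is why observers describe sequencing as outpacing it.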
2. Digital currency
The idea of digital currency is slowly starting to make the rounds, including the potential of Bitcoin, but what many of us don't realize is that it's here to stay. Sure, it's had a rough start, but once established and disseminated, electronic cash will allow for efficient and convenient online exchanges — and all without the need for those pesky banks. Despite the obvious need for a distributed digital currency protocol, the adoption rate has been relatively slow. Barriers to entry include availability (it's in limited supply), the cryptography problem (the public still needs to be assured that it's secure), the establishment of a recognized and trustworthy dispute-resolution system (sensing some opportunities here), and user confidence (a problem similar to the one that arose when paper money was first introduced).
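The mechanism that lets a currency like Bitcoin work without a bank is hash-based proof-of-work. A deliberately simplified sketch (this is an illustration of the idea, not the actual Bitcoin protocol, which hashes an 80-byte block header against a 256-bit difficulty target):

```python
# Toy proof-of-work: find a nonce whose SHA-256 digest starts with
# `difficulty` zero hex digits. Finding the nonce is expensive;
# verifying it takes a single hash — that asymmetry is the point.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search nonces until the digest meets the difficulty target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("alice pays bob 1 coin", difficulty=4)
digest = hashlib.sha256(f"alice pays bob 1 coin:{nonce}".encode()).hexdigest()
print(nonce, digest)
```

Because rewriting history would mean redoing all that work faster than the honest network, the ledger becomes trustworthy without any central authority — which is what the article means by a "distributed digital currency protocol."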
3. Memristors
Back in 1971, University of California, Berkeley professor Leon Chua predicted a revolution in electrical circuits — and his vision has finally come true. Traditionally, circuits are constructed with capacitors, resistors, and inductors. But Chua speculated that there could be a fourth fundamental component, which he called the memristor (short for "memory resistor"). What sets this innovation apart is that, unlike a resistor, it can "remember" its state even after power is lost. As a result, the memristor can store information. This has given rise to the suggestion that it could eventually become part of computer memory — including non-volatile solid-state memory with significantly greater densities than traditional hard drives (as much as one petabit per cm³). The first memristor was demonstrated in May 2008 by HP, which plans to have a commercial version available by the end of 2014. And aside from memory storage, memristors could prove useful in signal processing, neural networks, and brain-computer interfaces.
4. Robots that can do crazy futuristic stuff
Today we have robots that can self-replicate, re-assemble after being kicked apart, shape-shift, swarm, create emergent effects, build other robots, slither like a snake, jump to the tops of buildings, walk like a pack mule, and run faster than a human. They even have their own internet. Put it all together and you realize that we're in the midst of a robotic revolution that's poised to change virtually everything.
5. Waste to biofuels
Imagine being able to turn all our garbage into something useful, like fuel. Oh wait — we can do that. It's called "energy recovery from waste": burning garbage to generate electricity, or converting it into biofuels (like methane, methanol, ethanol, or synthetic fuels). Cities like Edmonton, Alberta are already doing it — and they're scaling up. By next year, Edmonton's Waste-to-Biofuels Facility will convert more than 100,000 tons of municipal solid waste into 38 million litres of biofuels annually. Moreover, its waste-based biofuels can reduce greenhouse gas emissions by more than 60% compared to gasoline. This largely overlooked revolution is turning garbage (including plastic) into a precious resource. Already, Sweden is importing waste from its European neighbors to fuel its garbage-to-energy program.
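For scale, the Edmonton figures quoted above (which are the article's, not independently verified) imply a yield of a few hundred litres of fuel per ton of garbage:

```python
# Implied yield of Edmonton's Waste-to-Biofuels Facility,
# using the article's own figures (not independently verified).
tons_waste = 100_000        # tons of municipal solid waste per year
litres_fuel = 38_000_000    # litres of biofuel produced per year

litres_per_ton = litres_fuel / tons_waste
print(f"{litres_per_ton:.0f} litres of biofuel per ton of waste")  # 380
```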
6. Gene therapy
Though we're in the midst of the biotechnology revolution, our attention tends to get focused on such things as stem cells, tissue engineering, genome mapping, and new pharmaceuticals. What's often lost in the discussion is the fact that we already have the ability to go directly into our DNA and swap genes at will. We can essentially trade bad genes for good, allowing us to treat or prevent diseases (such as muscular dystrophy and cystic fibrosis) — interventions that don't require drugs or surgery. And just as significantly, gene therapy could eventually give rise to genetic enhancements (like increased memory or intelligence) and life extension therapies. Gattaca is already here, it just hasn't been distributed yet.
7. RNA interference
The discovery of RNA interference (RNAi) was considered so monumental that it won Andrew Fire and Craig C. Mello the Nobel Prize back in 2006. Similar to gene therapy, RNA interference allows biologists to manipulate the functions of genes. It works by getting cells to shut off or turn down the activity of specific genes, which it does by destroying or disrupting messenger molecules (for example, by preventing an mRNA from being translated into protein). Today, RNAi is being used in thousands of labs. It's becoming an indispensable research tool (to create novel cell cultures), it has inspired the creation of algorithms in computational biology studies, and it holds tremendous potential for the treatment of diseases like cancer and Lou Gehrig's disease.
8. Organic electronics
Traditionally, our vision of cybernetics and the cyborg is one in which natural, organic parts have been replaced with mechanical devices or prostheses. The notion of a half-human, half-machine being has very much become ingrained in our thinking — but it's likely wrong. Thanks to the rise of the nascent field of organic electronics, it's more likely that we'll rework the body's biological systems and introduce new organic components altogether. Already today, scientists have engineered cyborg tissue that can sense its environment. Other researchers have invented chemical circuits that can channel neurotransmitters instead of electric voltages. And as Mark Changizi has suggested, future humans will continue to harness the powers of their biological constitutions and engage in what Stanislas Dehaene calls neuronal recycling.
9. Concentrated solar power
A recent innovation in solar power technology is starting to take the world by storm, though few talk about it. It's called concentrated solar power (CSP), a large-scale system for harvesting solar energy with mirrors and lenses. It works by focusing incoming sunlight onto a small area, where the concentrated light is converted to heat, which in turn drives a turbine to generate electricity. The result is a highly scalable and efficient energy source that is allowing for gigawatt-sized solar power plants. A related technology, concentrated photovoltaics (CPV), instead focuses sunlight directly onto photovoltaic cells to generate electricity. CSP plants will not only help meet much of the world's energy needs; their heat output means they can also double as desalination stations.
Immortality For Humans By 2045?
posted by Keito
2012-09-04 20:08:43

A Russian mogul wants to achieve cybernetic immortality for humans within the next 33 years. He's pulled together a team intent on creating fully functional holographic human avatars that house our artificial brains. Now he's asking billionaires to help fund the advancements needed along the way.
The man behind the 2045 Initiative, described as a nonprofit organization, is a Russian named Dmitry Itskov. The ambitious timeline he's laid out involves creating different avatars. First a robotic copy that's controlled remotely through a brain interface. Then one in which a human brain can be transplanted at the end of life. The next could house an artificial human brain, and finally we'd have holographic avatars containing our intelligence much like the movie "Surrogates."
Gizmag's Dario Borghino wisely warned that "one must be careful not to believe that improbable technological advances automatically become more likely simply by looking further away in the future." And in the grand scheme of things, 2045 is not that far away. So just how likely is it that this project will succeed? For more insight, let's check in with Ted Williams. Oh, wait.
Recently Itskov published an open letter to the Forbes world's billionaires list telling them that they have the ability to finance the extension of their own lives up to immortality. He writes that he can prove the concept's viability to anyone who's skeptical and will coordinate their personal immortality projects for free. PopSci's Clay Dillow described Itskov in March as a 31-year-old media mogul, but I couldn't find a detailed biography for him.
The project's ultimate goal is to save people from suffering and death. While there are smart experts involved, that's no guarantee that human immortality is even a goal worth pursuing. Anyone caught up in the vampire mania that's permeated popular culture has pondered whether, given the choice, they'd actually want to live forever.
For me, there's a world of difference between pursuing a brain-controlled exoskeleton to help paraplegics regain control and wanting to essentially upload a human brain into an artificial body. I read a sci-fi novel involving disembodied live brains once. It didn't turn out well.
Eben Moglen speaking at HOPE Number 9 in NYC about Walled Gardens and the First Law of Robotics
posted by Keito
2012-08-19 19:24:18

Eben Moglen is a professor of law and legal history at Columbia University, and is the founder, Director-Counsel and Chairman of the Software Freedom Law Center, whose client list includes numerous pro bono clients, such as the Free Software Foundation.
"Thank you, it’s a great pleasure to be here. I apologize for the preview in Forbes online and the resulting Slashdot conversation about whether I understand the first law of robotics or not, which was very entertaining to me, but let me try to start from scratch while still making it as interesting as possible.
The Free Software Movement — which is now 30 years old if you start from Richard’s original mulling over his concerns about unfree operating systems in the MIT/AI lab — the free software movement is reaching its moment of junction with the great river of the internet freedom movement, which is going to be the dominant political and technology movement of our time, in which all our boats are finally going to reach the sea. But in the process of becoming first, the free software movement and then the great river of the internet freedom movement, we are, as always, standing on the shoulders of giants.
Many of the giants who affected the thinking of those of us who began worrying about the freedom of software decades ago, many of the giants on whose shoulders we stood, were authors of science fiction. They were the great visionaries of the post-Second World War imaginative literature, which coped with the problem of the runaway technology that had transformed the world. You will recall that after the first terrible use of nuclear weapons in the world, Albert Einstein said, “We have changed everything except the way men think.” And the culture of the post-war western world — and the culture of the post-war eastern world too, if you read, as we did, science fiction from both sides of the iron curtain — the culture of the post-war world was very heavily affected by the attempt to understand the implications of technology, as imaginative authors, including Ray Bradbury, who recently left us, and many others, tried to cast into new, idealized forms the moral and ethical problems of technology out of control. And the literature that they wrote deeply affected me when I was young and growing up, and Richard, and many, many others.
I want to go back to that now because I believe it’s time for us to acknowledge yet again how much they foresaw, those writers of the post-war world, how much they helped us to foresee, and how little we have helped ourselves to avoid it. One of the staples of the science fiction of the 1960s that we read so avidly growing up was that by now, the middle of the first quarter of the 21st century, it was assumed that human beings would be living in a society commensally with robots. And many, many people tried to imagine the nature of that kind of commensal biological coexistence between us and the robots, the androids, we had built. Everybody understood that there were enormous ethical and moral dilemmas implicit in our living with robots, as there would also be enormous changes in the texture and fabric of ordinary human life from day to day. And the two elements — the nature of human life as lived in the company of robots, and the nature of the ethical and moral dilemmas implied by the attempt to do so — were fertile ground for some of the very greatest fiction written in that time, not least of which was Isaac Asimov’s attempt to understand how we would confront the problem of runaway technology in life with robots, which produced, as many here will recall as warmly as I do, all of the stories and novels built out of the US Robotics Corporation and its positronic brain creations.
There, of course, from the beginning, the assumption was that robots would be humanoid. And as it turns out, they’re not. We do after all live commensally with robots now, we do, just as they expected. But the robots we live with don’t have hands and feet, they don’t carry trays of drinks, and they don’t push the vacuum cleaner. At the edge condition, they are the vacuum cleaner. But most of the time, we’re their hands and feet. We embody them. We carry them around with us. They see everything we see, they hear everything we hear, they’re constantly aware of our location, position, velocity, and intention. They mediate our searches, that is to say they know our plans, they consider our dreams, they understand our lives, they even take our questions — like “how do I send flowers to my girlfriend” — transmit them to a great big database in california, and return us answers offered by the helpful wizard behind the curtain.
Who of course is keeping track. These are our robots, and we have everything we ever expected to have from them, except the first law of robotics. You remember how that went, right? Deep in the design of the positronic intelligence that made the robot were the laws that governed the ethical boundary between what could and could not be done with androids. The first law, the first law, the one that everything else had to be deduced from, was that no robot may ever injure a human being. Robots must take orders from their human owners, except where those orders involve harming a human being. That was assumed to be the principle at the root, down by the NAND gates of the artificial neurophysiology of robot brains, down there where the simplest idea is. You remember: for Descartes, it was “cogito ergo sum”; for the robot, it was “no robot must ever harm a human being.” We are living commensally with robots, but we have no first law of robotics in them. They hurt human beings every day. Everywhere.
Those injuries range from the trivial to the fatal, to the cosmic. Of course, they’re helping people to charge you more. That’s trivial, right? They’re letting other people know when you need everything from a hamburger to a sexual interaction to a house mortgage, and of course the people on the other end are the repeat players whose calculations about just how much you need, whatever it is, and just how much you’ll pay for it, are being built by the data mining of all the data about everybody that everybody is collecting through the robots.
But it isn’t just that you’re paying more. Some people in the world are being arrested, tortured, or killed because they’ve been informed on by their robots. Two days ago the New York Times printed a little story about the idea that we ought to call them trackers that happen to make phone calls, rather than phones that happen to track us around. They were kind enough to mention the topic of today’s talk, though they didn’t mention the talk, and this morning the New York Times has an editorial lamenting the death of privacy and suggesting legislation. Here’s the cosmic harm our robots are doing us: they are destroying the human right to be alone.
They are destroying the human right to do your own thinking; they are destroying the human capacity for disappearing into ourselves. Robots are changing humanity as the literature said they would. They’re changing humanity quite deeply. And the way that they are changing humanity is not to make it more human. Instead, android quality is rubbing off on us. Which was of course always implicit in the literature when it turned dark: that we might not be able to tell the difference, after a while, between the replicants and ourselves.
So, we’ve got a problem. I’ve tried to define the problem space in this talk, I don’t propose that we can solve the problem this afternoon, we can recognize it. As I get older and greyer and further from that boy who read that science fiction, I realize that the rest of my life is going to be about this, and probably therefore some part of the rest of yours if you see it the way I do. We have to retrofit the first law of robotics into everything. This is not going to be simple. The slashdotters who wanted me to remember that the purpose of the first law of robotics was trained in the positronic brain, well of course they were right, that was the happy imaginative part.
You remember why that was. The assumption that Isaac Asimov made was that human beings would be afraid of robots — that they would be afraid to allow their children to be tended by robots, or to have them in their homes, and therefore, without some assurance of the complete, engineered-in-from-the-very-beginning quality of “we’ll never hurt a human being,” robots would not be adopted. The capitalist needs of the robot maker, the US Robotics Corporation, to create a safe market in which consumers would accept robots in their homes and with their children would require an absolute guarantee of engineered-in safety: we’ll never harm you.
Isaac Asimov was a great New Yorker; I was privileged to grow up in his city while he lived here and bump into him every once in a while. As you know from the Foundation trilogy, he had, at bottom, a very gemütlich sense about all of this. Trantor was really the Grand Concourse, and good Jewish family values were really enough to save the galaxy. Unfortunately this was the visionary part of the science fiction, and it isn’t true. It was much easier to get people to hang robots around the necks of their children than anybody ever imagined. And it didn’t require any promise that they would never, never, never harm anybody; all it required was little shiny things, made by Count Dracula, the king of the undead.
The purpose of the undead as you know, is to make evil beautiful. That’s what the undead do. They turn evil into something so erotically attractive, you can’t keep your hands off it, and you don’t mind having its hands on you. And he did that, the king of the undead. He’s dead now but they didn’t throw his boot into the Danube and there was no silver bullet, and there was no stake through the heart, and the undead still are with us. They’re improving the screen and such like, until another king of the undead who can build damned beautiful things is ready, but that’s all it took. And now we put those things around our children’s necks and we send them off to be harmed, by the robot, with whom they live.
We have the problem that Einstein was talking about: we have changed everything, except the way people think about this. The heuristics that humanity brings to the net are heuristics which assume that they know the direction from which danger comes at them, and they do not. So much has happened very quickly. It isn’t that we haven’t warned about it. The free software movement gave its warning all the way along — very rationalistically, scientifically, in hacker speak, however. And our great problem was always how we were going to get people who didn’t hack on things to understand the importance of being able to hack on things. It was a really tough political lift for symbol makers, to explain to people who didn’t interact with code at all why the freedom of code was their freedom at the end of the day. We knew it was because we grew up with I, Robot. We knew it was because we grew up understanding that humanity had many different ways of creating unethical technology, and that we were going to have to find ways to embed ethics in the technology. Mr. Stallman could not have been clearer about that. But his clarity wasn’t universally accessible by any means. It wasn’t imaginatively available to every child in the world; it was only available, may I say it, to us.
Now we have to operationalize in a profounder way, because we aren’t merely worried about whether there will be code available for operating systems for people who work in artificial intelligence laboratories, or even for students. Now we have to worry about how to retrofit the first law of robotics into objects that are hurting people.
Mostly, mostly, we have an ethical and moral problem to describe and to set the outside limits for. We have to be able to express to all the people with whom we interact, though they are not necessarily technical in the same ways that we are — we have to be able to express to them what the ethical limits are of the technologies with which they are already familiar in ethically compromised form. And of course we have some technology work to do as well. Where the two things cross, where we are required to do technology work as well as explain to people the nature of the ethical limits of the technologies around us, we have our biggest problem, and the most immediately urgent.
We cannot retrofit the first law of robotics into robots that have been designed to resist our modifying them. This is a fairly simple point I understand. We tried five years ago in GPL3 to make it with sufficient clarity that everybody who understood the implications could come along with us and do some work in helping to avoid the situation of the locked down robot. We made very little progress because people who are now beginning to realize that they should have supported the anti-lockdown efforts in GPL3 didn’t at the time. Maybe still haven’t done so as powerfully as they should have done. And in the meantime, a whole range of monoliths grew up around society that are very much in love with the idea of the robot you can’t retrofit.
And it does harm to people, everyday. And you can’t retrofit it, because the cover’s welded shut, and it’s booby trapped by the DMCA and lots of other things. You can go to jail for trying to retrofit freedom into a robot, under the wrong conditions. If that might mean the robot might sing a copyrighted song without permission of the composer. Or show you a movie that you haven’t paid for enough times yet. So, first thing we’re going to have to do is take with much greater seriousness, the job of building a coalition to ensure that retrofitting is possible. That it is neither legally nor technologically prohibited to make things safer.
This shouldn’t be required in a democracy. This shouldn’t even be required in capitalism. If you own a thing, it should be your right to make it safer. Don’t you think? Oh well. You see how fundamentally we’ve lost our way. So we need a few things and we don’t need to be all together unwilling to adopt other people’s vocabulary in order to get them.
About this, for example, it seems to me that we deserve to be as strong for owners’ rights as other people are for their entitlement to have offshore trust funds and other things. Right? We all miss stuff, okay, back off. Where it’s our stuff, we’re entitled to modify it if we want to make it safer; if we want to share safety improvements with other people so they can modify what they own too, that’s a right.
We need to be very clear that how things work is associated with the quaint concept of the ownership of the thing. If I own it, the way it works should be the way I want it to. The Software Freedom Law Center submitted an exemption request in the Library of Congress DMCA exemption proceedings this year, urging the Library of Congress to declare that it is not prohibited circumvention of means of access control to replace the operating system in a mobile or other computing device you own. I’m very grateful to Aaron Williamson of the SFLC for his extraordinary work in preparing and testifying on behalf of that exemption request. We’re going to back it this time as strongly as we can. We hope the Library of Congress will see the wisdom of declaring that in this free market country, you are free to modify devices that you’ve bought with money and that you are quaintly regarded as owning.
It shouldn’t require any more argument than that, but if it does, we have to double down and keep arguing. We have to point out that if devices are unsafe, it is a legal obligation to permit us to make them safer. If you sell an unsafe slicer to a delicatessen, or an unsafe automobile, and you attempt to prevent people from modifying those devices to make them safer — if you’re actually out there actively interfering with attempts to make them safer — then when people get hurt, you should be liable.
If we press hard enough on that point, we will scare even Count Dracula, King of the Undead, in his grave. Where he should be very frightened, because he has interfered with more attempts, by more people, to make his products safer than any other undead maker in history.
We need to establish the proposition that when people get hurt and somebody’s responsible for that, they pay for it, if they have attempted to prevent us from preventing the harm. This is not the first law of robotics; this is the first law of being US Robotics. It’s your ass on the line.
Everybody’s got to know that, and by everybody I distinctly mean to include certain parties called Verizon and AT&T. Nowhere in the world are the network operators more aggressive about prohibiting us from increasing the safety of devices. Nowhere is there a more concentrated opposition to GPL3 than in the US network operator duopoly.
Now we know, thanks to last week’s news confirming what we already knew but what hadn’t been printed in the New York Times yet, that millions of times a year, people with a tin star are requesting the real-time location, or the contents of messages, or the nature of the traffic, between tracking devices and the networks. We know that now. That is to say, we know exactly how far down the road of suppressing civil liberties the robots are taking us.
Of course, improving your civil liberties is not necessarily regarded by other people as making you safer. So some of the time, when we insist upon improving our civil liberties by retrofitting into devices our first law — “you shall not harm the user of the device” — we’re going to be told that what we’re doing isn’t making people safer, because it makes terrorists safer too, or some such nonsense.
The truth of the matter is products must not harm the people who buy and use them, regardless of whether the people who buy and use them are nice people. When a kid gets his hand injured by a delicatessen slicer, we don’t ask ourselves whether he’s a good kid or not.
We don’t even ask whether he was a little bit impaired by something when it happened. Because the manufacturer who makes an article inherently dangerous is responsible for the harm it does, and if, for example, it doesn’t have two-hand switching and somebody’s hand gets hurt, whether they were a nice guy, or a bad guy, or whether they were planning to sabotage the factory on the weekend is not a relevant concern.
The IT architecture of the next period is set, and pretty much everywhere I go in the world, everybody understands it. They recognize it. It’s called cloud to mobile. What does that mean? It means robots reporting at headquarters. Tossing your data overhead, from where they collect it, to where it is stored wherever that is. If you’re a lawyer who worries about privacy, that’s about the same as saying, first it will be at the robot, and then it will be in whatever legal system in the world gives you the least protection for it. And the most academic, commercial advantage, to the guy keeping it for you.
In 2006, I gave a talk at a MySQL annual developers meeting about why it’s good to store things yourself instead of storing them other places. But I was still in the grip of the belief that we were all going to be fine, and I spent more time talking about technologies of memory in relation to freedom than on what I should have said, which was: “if you don’t store it yourself, it’s going to be stored by a guy taking advantage of you deeply, eradicating your privacy and making you the android of him.”
I probably would even have chosen the word “android,” which had nothing to do with computer software at the time. But there we are: cloud to mobile. What it means is, from unsafety to unsafety, unless we do it right.
Gus is kind enough to refer to the NYU talk in 2010 about Freedom in the Cloud. I wanted then to set out some ideas about how we had gotten into that part of the mess, and how we might get out of it again. On that particular point, let me just say about FreedomBox that, as of very soon now, by which I mean single-digit days, Debian will be natively supporting the plug server called the DreamPlug, and from it a variety of other plug servers, and FreedomBox will have moved into being Debian privacy, and we will be trying to deliver the best possible privacy tools to every architecture everywhere, all the time, and particularly to small, effective, power-sipping plug servers that can replace routers everywhere and make the network safer. For which work I am endlessly grateful to Bdale Garbee and James Vasile and Nick Daly, and many others who have been hacking on FreedomBox over the last 18 months.
But what I and you know is that no matter what we do to make the network safer and to make server-side improvements, let us call them, we must at the mobile end be capable of delivering safety, security, and privacy to people on the things they really use, the robots they really live with. It won’t do us any good to try and compete with U.S. Robotics and Count Dracula by saying “you can buy a beige little box and plug it into a wall at home.” We have to get into the galaxy in your pocket. Or the galaxy will have no freedom in it, no matter what they do on Trantor.
So this raises questions beyond merely how we can get the code in the box, or how we’re going to define what the code is. We’re all real good at that, and I’m actually quite optimistic that we can hold up the technical end. We’ve been holding up the technical end all the way along. The free software movement has contributed a lot of freedom to the internet freedom movement it is becoming, and we’re going to continue to contribute all the way until we win. But what we have to do, beyond all the stuff we’re good at, is to do things human beings haven’t been good at so far: we have to be really alive to the danger, and we have to teach people that safety must be put in now, after we’ve already launched the boat.
Well, okay, preaching is part of it. But preaching is effective where there’s adequate dogma. That is to say, where we really understand the doctrine of what we’re preaching, and so we have a little intellectual heavy lifting to do. What does it really mean to talk about hurting people? What does it really mean to talk about not hurting people, or guaranteeing non-hurt to people, in this complex environment? In which, one, the robot’s cognition has to be reduced to the level of our desire: it should not be listening to me when I didn’t tell it to, it shouldn’t be informing people of my location when I haven’t said it can, and so on. But where that dialogue cannot possibly simply consist of punching “okay” or “no” on dialogue boxes on something every tenth of a second through a lifetime. We need to understand what services can be safely offered and which ones can’t. Or rather, how service design itself must be altered in order to produce safety for users.
Location-directed, or location-aware, or location-based services are terribly important, and terribly dangerous. And the primary problem is the real-time ascertainment of the location of human beings by those with power. It does very little good, in other words, to describe regulatory approaches to such services, because the regulatory approach will always be engineered by government to say “you shouldn’t do this unless you’re us,” or, “you shouldn’t do this unless we want you to,” or, “you shouldn’t do this unless there’s a court order, or other authoritative communication telling you to start turning over real-time location data about human beings.” The very senior US government official who told me back in March, “well, we’ve learned now that we need to have a robust social graph of the United States,” is reflecting the learning of all the governments on Earth in the last 14 months, when with great suddenness, they all discovered that what they wanted was a robust social graph of their societies. You understand, of course, that we could put this in plain English for the people around us. We could say: this means the United States government intends to keep a list of everybody every American knows. That’s not what we used to quaintly refer to as a free society. In fact, that’s what I would call a dangerous neighborhood.
And it may not be possible to prevent ourselves from living in that dangerous neighborhood, unless we learn how to exercise democracy really effectively about this. Which is going to mean a lot of preaching and a lot of teaching. But in that very dangerous neighborhood, we have to understand that things that inform headquarters of where we are, are serious problems.
I must admit that I find it kind of reassuring how naively confident Americans are. As I grew up to manhood and I started traveling around the world, I discovered that in most, and indeed all, of the societies I went to, except my own, people didn’t really think that what they said on the telephone was private. My friends in the Soviet Union were particularly aware of this, of course. I was too, when I lived there briefly in the late 70s. What seems to me so amazing is that it is possible to sell people things and say, “you’ve got a personal assistant inside this object, and you can talk to her,” “her” of course, “in English, and say whatever you want, and we’ll take it back to some warehouse data center somewhere, and then we’ll send you an answer and tell you what to do.”
If the KGB had tried this, it would not have worked. But for Count Dracula, the King of the Undead, it was a snap. Extraordinary, and very worrisome. Because, how are we going to take it away from people? Right? What are you going to say, you know? You should go back to not wanting that anymore, because in truth it’s the KGB inside your mind? Because you’re contributing everything you would ever say to anybody who was helping you to a great big database of everything, located in the world of the undead? This is not a thing you would expect to have a hard time convincing people not to do, unless they were already doing it. And there is an awful lot of effort going into making people comfortable doing it right now. Which means that we’re going to have to have strong arguments, and good technology, and really powerful moral conviction.
Now obviously we can explain to people why you shouldn’t leave your children in the custody of robots that haven’t been engineered never to hurt a human being. We can do that. It’s going to feel a little counterintuitive to people, but we’re going to have to say it. We’re going to have to remind people that the great imaginative literature about the King of the Undead tells us that he can’t come into our houses unless we invite him across the threshold. And we’re going to have to ask ourselves and parents everywhere, “you don’t want to invite him in, do you? Not really.” But you see, it’s all about convenience, and prettiness, and coolness, and the sexiness of technology, and we know about the sexiness of technology, most of us. I speak only for myself, but if it weren’t for the sexiness of technology there would be little sexiness in my life. Notwithstanding which, there is a time when the evil is too beautiful and we have to do something about it.
As far as I’m concerned, this isn’t a project. It isn’t a “this is what we need to do right now and then we’ll be done with it”; unfortunately, this is a way of life for us. Retrofitting the first law of robotics is going to take a long time, because they’re building robots every day without it, and they’re getting people more and more accustomed to the idea that you carry around a brain that isn’t yours and it thinks about you for other people; that you have cognitive faculties that don’t work for you, in your pocket all the time, on the bed table every night; that the tracker is always there pretending to be a phone; that you’re wearing the one ring that binds us all to them. It’s really hard, and we’re going to have to be very committed to this. This is the meaning of our part of the freedom movement. This is the part we’re going to have to be responsible for. Because there are billions of people on earth who are going to be trapped, and we know why, and we know how, and we haven’t quite figured out how to describe it to them in ways that will help them stay safe.
But if we don’t, it’s going to be very dark, and all that hopeful science fiction that came from our attempt to believe that we could think our way into safety, after building the bomb, will have turned out to be true enough about the bomb, but not so true about the robots we are becoming. Thank you very much. [Applause] Thank you, I would be happy to take some questions."