Blog

  • CleanIT – Leak shows plans for large-scale, undemocratic surveillance of all communications

    posted by Keito
    2012-09-26 20:48:25
    'A leaked document from the CleanIT project shows just how far internal discussions in that initiative have drifted away from its publicly stated aims, as well as the most fundamental legal rules that underpin European democracy and the rule of law.

    The European Commission-funded CleanIT project claims that it wants to fight terrorism through voluntary self-regulatory measures that defend the rule of law.

    The initial meetings of the initiative, with their directionless and ill-informed discussions about doing “something” to solve unidentified online “terrorist” problems were mainly attended by filtering companies, who saw an interesting business opportunity. Their work has paid off, with numerous proposals for filtering by companies and governments, proposals for liability in case sufficiently intrusive filtering is not used, and calls for increased funding by governments of new filtering technologies.

    The leaked document contradicts a letter sent from CleanIT Coordinator But Klaasen to Dutch NGO Bits of Freedom in April of this year, which explained that the project would first identify problems before making policy proposals. The promise to defend the rule of law has been abandoned. There appears never to have been a plan to identify a specific problem to be solved – instead the initiative has become little more than a protection racket (use filtering or be held liable for terrorist offences) for the online security industry.

    The proposals urge Internet companies to ban unwelcome activity through their terms of service, but advise that these “should not be very detailed”. This already widespread approach results, for example, in Microsoft (as a wholly typical example of current industry practice) having terms of service that would ban pictures of the always trouserless Donald Duck as potential pornography (“depicts nudity of any sort ... in non-human forms such as cartoons”). The leaked paper also contradicts the assertion in the letter that the project “does not aim to restrict behaviour that is not forbidden by law” - the whole point of prohibiting content in terms of service that is theoretically prohibited by law is to permit extra-judicial vigilantism by private companies; otherwise the democratically justified law would be enough. Worse, the only way for a company to be sure of banning everything that is banned by law is to use terms that are broader, less well defined and less predictable than real law.

    Moving still further into the realm of the absurd, the leaked document proposes the use of terms of service to remove content “which is fully legal”... although it is left to the “ethical or business” priorities of the company in question to decide what to remove. In other words, if Donald Duck is displeasing to the police, they would welcome, but not explicitly demand, ISPs banning his behaviour in their terms of service. Cooperative ISPs would then be rewarded by being prioritised in state-funded calls for tender.

    CleanIT (terrorism), financed by DG Home Affairs of the European Commission is duplicating much of the work of the CEO Coalition (child protection), which is financed by DG Communications Networks of the European Commission. Both are, independently and without coordination, developing policies on issues such as reporting buttons and flagging of possibly illegal material. Both CleanIT and the CEO Coalition are duplicating each other's work on creating “voluntary” rules for notification and removal of possibly illegal content and are jointly duplicating the evidence-based policy work being done by DG Internal Market of the European Commission, which recently completed a consultation on this subject. Both have also been discussing upload filtering, to monitor all content being put online by European citizens.

    CleanIT wants binding engagements from internet companies to carry out surveillance, to block and to filter (albeit only at “end user” - meaning local network - level). It wants a network of trusted online informants and, contrary to everything that they have ever said, they also want new, stricter legislation from Member States.

    Unsurprisingly, in EDRi's discussions with both law enforcement agencies and industry about CleanIT, the word that appears with most frequency is “incompetence”.

    The document linked below is distributed to participants on a “need to know” basis – we are sharing the document because citizens need to know what is being proposed.

    Key measures being proposed:

    -Removal of any legislation preventing filtering/surveillance of employees' Internet connections
    -Law enforcement authorities should be able to have content removed “without following the more labour-intensive and formal procedures for 'notice and action'”
    -“Knowingly” providing links to “terrorist content” (the draft does not refer to content which has been ruled to be illegal by a court, but undefined “terrorist content” in general) will be an offence “just like” the terrorist
    -Legal underpinning of “real name” rules to prevent anonymous use of online services
    -ISPs to be held liable for not making “reasonable” efforts to use technological surveillance to identify (undefined) “terrorist” use of the Internet
    -Companies providing end-user filtering systems and their customers should be liable for failing to report “illegal” activity identified by the filter
    -Customers should also be held liable for “knowingly” sending a report of content which is not illegal
    -Governments should use the helpfulness of ISPs as a criterion for awarding public contracts
    -The proposals on blocking lists contradict each other, on the one hand providing comprehensive details for each piece of illegal content and judicial references, but then saying that the owner can appeal (although if there was already a judicial ruling, the legal process would already have been at an end) and that filtering should be based on the “output” of the proposed content regulation body, the “European Advisory Foundation”
    -Blocking or “warning” systems should be implemented by social media platforms – somehow it will be both illegal to provide (undefined) “Internet services” to “terrorist persons” and legal to knowingly provide access to illegal content, while “warning” the end-user that they are accessing illegal content
    -The anonymity of individuals reporting (possibly) illegal content must be preserved... yet their IP address must be logged to permit them to be prosecuted if it is suspected that they are reporting legal content deliberately and to permit reliable informants' reports to be processed more quickly
    -Companies should implement upload filters to monitor uploaded content to make sure that content that is removed – or content that is similar to what is removed – is not re-uploaded
    -It proposes that content should not be removed in all cases but “blocked” (i.e. made inaccessible by the hosting provider – not “blocked” in the access provider sense) and, in other cases, left available online but with the domain name removed.'

    Leaked document: http://www.edri.org/files/cleanIT_sept2012.pdf

    CleanIT Project website: http://www.cleanitproject.eu/
  • 8 Things We Would Not Know Without WikiLeaks

    posted by Keito
    2012-09-04 20:47:45
  • Cory Doctorow: The Coming Civil War over General Purpose Computing

    posted by Keito
    2012-08-28 21:18:46
    'Even if we win the right to own and control our computers, a dilemma remains: what rights do owners owe users?



    This talk was delivered at Google in August, and for The Long Now Foundation in July 2012. A transcript of the notes follows.

    I gave a talk in late 2011 at 28C3 in Berlin called "The Coming War on General Purpose Computing"

    In a nutshell, its hypothesis was this:

    • Computers and the Internet are everywhere and the world is increasingly made of them.

    • We used to have separate categories of device: washing machines, VCRs, phones, cars, but now we just have computers in different cases. For example, modern cars are computers we put our bodies in, Boeing 747s are flying Solaris boxes, and hearing aids and pacemakers are computers we put in our bodies.

    • This means that all of our sociopolitical problems in the future will have a computer inside them, too—and a would-be regulator saying stuff like this:

    "Make it so that self-driving cars can't be programmed to drag race"

    "Make it so that bioscale 3D printers can't make harmful organisms or restricted compounds"

    Which is to say: "Make me a general-purpose computer that runs all programs except for one program that freaks me out."

    But there's a problem. We don't know how to make a computer that can run all the programs we can compile except for whichever one pisses off a regulator, or disrupts a business model, or abets a criminal.

    The closest approximation we have for such a device is a computer with spyware on it— a computer that, if you do the wrong thing, can intercede and say, "I can't let you do that, Dave."

    Such a computer runs programs designed to be hidden from the owner of the device, and which the owner can't override or kill. In other words: DRM. Digital Rights Management.

    These computers are a bad idea for two significant reasons. First, they won't solve problems. Breaking DRM isn't hard for bad guys. The copyright wars' lesson is that DRM is always broken with near-immediacy.

    DRM only works if the "I can't let you do that, Dave" program stays a secret. Once the most sophisticated attackers in the world liberate that secret, it will be available to everyone else, too.

    Second, DRM has inherently weak security, which in turn makes overall security weaker.

    Certainty about what software is on your computer is fundamental to good computer security, and you can't know if your computer's software is secure unless you know what software it is running.

    Designing "I can't let you do that, Dave" into computers creates an enormous security vulnerability: anyone who hijacks that facility can do things to your computer that you can't find out about.

    Moreover, once a government thinks it has "solved" a problem with DRM—with all its inherent weaknesses—that creates a perverse incentive to make it illegal to tell people things that might undermine the DRM.

    You know, things like how the DRM works. Or "here's a flaw in the DRM which lets an attacker secretly watch through your webcam or listen through your mic."

    I've had a lot of feedback from various distinguished computer scientists, technologists, civil libertarians and security researchers after 28C3. Within those fields, there is a widespread consensus that, all other things being equal, computers are more secure and society is better served when owners of computers can control what software runs on them.

    Let's examine for a moment what that would mean.

    Most computers today are fitted with a Trusted Platform Module (TPM). This is a secure co-processor mounted on the motherboard. The TPM specifications are published, and an industry body certifies compliance with them. To the extent that the spec is good (and the industry body is diligent), it's possible to be reasonably certain that you've got a real, functional TPM in your computer that faithfully implements the spec.

    How is the TPM secure? It contains secrets: cryptographic keys. But it's also secure in that it's designed to be tamper-evident. If you try to extract the keys from a TPM, or remove the TPM from a computer and replace it with a gimmicked one, it will be very obvious to the computer's owner.

    One threat to TPM is that a crook (or a government, police force or other adversary) might try to compromise your computer — tamper-evidence is what lets you know when your TPM has been fiddled with.

    Another TPM threat-model is that a piece of malicious software will infect your computer

    Now, once your computer is compromised this way, you could be in great trouble. All of the sensors attached to the computer—mic, camera, accelerometer, fingerprint reader, GPS—might be switched on without your knowledge. Off goes the data to the bad guys.

    All the data on your computer (sensitive files, stored passwords and web history)? Off it goes to the bad guys—or erased.

    All the keystrokes into your computer—your passwords!—might be logged. All the peripherals attached to your computer—printers, scanners, SCADA controllers, MRI machines, 3D printers— might be covertly operated or subtly altered.

    Imagine if those "other peripherals" included cars or avionics. Or your optic nerve, your cochlea, the stumps of your legs.

    When your computer boots up, the TPM can ask the bootloader for a signed hash of itself and verify that the signature on the hash comes from a trusted party. Once you trust the bootloader to faithfully perform its duties, you can ask it to check the signatures on the operating system, which, once verified, can check the signatures on the programs that run on it.

    This ensures that you know which programs are running on your computer—and that any programs running in secret have managed the trick by leveraging a defect in the bootloader, operating system or other components, and not because a new defect has been inserted into your system to create a facility for hiding things from you.
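
    The chain of checks described above can be sketched in a few lines. This is a minimal illustration only, not a real TPM or UEFI interface: it assumes the Python `cryptography` package, and the key names and stage images are invented.

    ```python
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for a distro's signing key; only the public half would be enrolled.
    distro_key = Ed25519PrivateKey.generate()
    trusted_keys = [distro_key.public_key()]          # owner-chosen trust list

    def verify_stage(name: str, image: bytes, signature: bytes) -> str:
        """Return the stage's hash if a trusted key signed it, else raise."""
        digest = hashlib.sha256(image).hexdigest()
        for key in trusted_keys:
            try:
                key.verify(signature, image)          # raises on a bad signature
                return digest
            except InvalidSignature:
                continue
        raise RuntimeError(f"{name}: untrusted image, sha256={digest}")

    # Simulated chain: the TPM-anchored check covers the bootloader, which would
    # then run the same check on the kernel, and so on up the stack.
    for name, image in [("bootloader", b"bootloader image"), ("kernel", b"kernel image")]:
        sig = distro_key.sign(image)                  # produced at build time
        print(name, "verified, sha256 =", verify_stage(name, image, sig))
    ```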

    This always reminds me of Descartes: he starts off by saying that he can't tell what's true and what's not true, because he's not sure if he really exists.

    He finds a way of proving that he exists, and that he can trust his senses and his faculty for reason.

    Having found a tiny nub of stable certainty on which to stand, he builds a scaffold of logic that he affixes to it, until he builds up an entire edifice.

    Likewise, a TPM is a nub of stable certainty: if it's there, it can reliably inform you about the code on your computer.

    Now, you may find it weird to hear someone like me talking warmly about TPMs. After all, these are the technologies that make it possible to lock down phones, tablets, consoles and even some PCs so that they can't run software of the owner's choosing.

    "Jailbreaking" usually means finding some way to defeat a TPM or TPM-like technology. So why on earth would I want a TPM in my computer?

    As with everything important, the devil is in the details.

    Imagine for a moment two different ways of implementing a TPM:

    1. Lockdown

    Your TPM comes with a set of signing keys it trusts, and unless your bootloader is signed by a TPM-trusted party, you can't run it. Moreover, since the bootloader determines which OS launches, you don't get to control the software in your machine.

    2. Certainty

    You tell your TPM which signing keys you trust—say, Ubuntu, EFF, ACLU and Wikileaks—and it tells you whether the bootloaders it can find on your disk have been signed by any of those parties. It can faithfully report the signature on any other bootloaders it finds, and it lets you make up your own damn mind about whether you want to trust any or all of the above.

    Approximately speaking, these two scenarios correspond to the way that iOS and Android work: iOS only lets you run Apple-approved code; Android lets you tick a box to run any code you want. Critically, however, Android lacks the facility to do some crypto work on the software before boot-time and tell you whether the code you think you're about to run is actually what you're about to run.

    It's freedom, but not certainty.
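
    To make the difference concrete, here is a toy contrast of the two modes described above (all key names invented; this is not any real firmware API): "lockdown" refuses anything the vendor hasn't signed, while "certainty" faithfully reports who signed each bootloader and leaves the decision to the owner.

    ```python
    VENDOR_KEYS = {"vendor-key"}

    def lockdown_boot(signer: str) -> bool:
        # The owner has no say: only vendor-signed bootloaders ever run.
        return signer in VENDOR_KEYS

    def certainty_boot(signer: str, owner_trusts: set) -> tuple:
        # Report the signer faithfully; whether to boot is the owner's call.
        return signer, signer in owner_trusts

    owner_trusts = {"ubuntu-key", "eff-key"}
    print(lockdown_boot("ubuntu-key"))                 # False: refused outright
    print(certainty_boot("ubuntu-key", owner_trusts))  # ('ubuntu-key', True)
    ```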

    In a world where the computers we're discussing can see and hear you, where we insert our bodies into them, where they are surgically implanted into us, and where they fly our planes and drive our cars, certainty is a big deal.

    This is why I like the idea of a TPM, assuming it is implemented in the "certainty" mode and not the "lockdown" mode.

    If that's not clear, think of it this way: a "war on general-purpose computing" is what happens when the control freaks in government and industry demand the ability to remotely control your computers

    The defenders against that attack are also control freaks—like me—but they happen to believe that device-owners should have control over their computers

    Both sides want control, but differ on which side should have control.

    Control requires knowledge. If you want to be sure that songs can only be moved onto an iPod, but not off of it, the iPod needs to know that the instructions being given to it by the PC (to which it is tethered) are emanating from an Apple-approved iTunes. It needs to know they're not from something that impersonates iTunes in order to get the iPod to give it access to those files.

    If you want to be sure that my PVR won't record the watch-once video-on-demand movie that I've just paid for, you need to be able to ensure that the tuner receiving the video will only talk to approved devices whose manufacturers have promised to honor "do-not-record" flags in the programmes.

    If I want to be sure that you aren't watching me through my webcam, I need to know what the drivers are and whether they honor the convention that the little green activity light is always switched on when my camera is running.

    If I want to be sure that you aren't capturing my passwords through my keyboard, I need to know that the OS isn't lying when it says there aren't any keyloggers on my system.

    Whether you want to be free—or want to enslave—you need control. And for that, you need this knowledge.

    That's the coming war on general purpose computing. But now I want to investigate what happens if we win it.

    We could face an interesting prospect. This is what I call the coming civil war over general purpose computing.

    Let's stipulate that a victory for the "freedom side" in the war on general purpose computing would result in computers that let their owners know what was running on them. Computers would faithfully report the hash and associated signatures for any bootloaders they found, let their owners control what was running on them, and allow their owners to specify who was allowed to sign their bootloaders, operating systems, and so on.

    There are two arguments that we can make for this:

    1. Human rights

    If your world is made of computers, then designing computers to override their owners' decisions has significant human rights implications. Today we worry that the Iranian government might demand import controls on computers, so that only those capable of undetectable surveillance are operable within its borders. Tomorrow we might worry about whether the British government would demand that NHS-funded cochlear implants be designed to block reception of "extremist" language, to log and report it, or both.

    2. Property rights

    The doctrine of first sale is an important piece of consumer law. It says that once you buy something, it belongs to you, and you should have the freedom to do anything you want with it, even if that hurts the vendor's income. Opponents of DRM like the slogan, "You bought it, you own it."

    Property rights are an incredibly powerful argument. This goes double in America, where strong property rights enforcement is seen as the foundation of all social remedies.

    This goes triple for Silicon Valley, where you can't swing a cat without hitting a libertarian who believes that the major — or only — legitimate function of a state is to enforce property rights and contracts around them.

    Which is to say that if you want to win a nerd fight, property rights are a powerful weapon to have in your arsenal. And not just nerd fights!

    That's why copyfighters are so touchy about the term "Intellectual Property". This synthetic, ideologically-loaded term was popularized in the 1970s as a replacement for "regulatory monopolies" or "creators' monopolies" — because it's a lot easier to get Congress to help you police your property than it is to get them to help enforce your monopoly.

    Here is where the civil war part comes in.

    Human rights and property rights both demand that computers not be designed for remote control by governments, corporations, or other outside institutions. Both demand that owners be allowed to specify what software they're going to run, and to freely choose the nub of certainty from which they will suspend the scaffold of their computer's security.

    Remember that security is relative: you are secured against attacks on your ability to freely use your music if you can control your computing environment. This, however, erodes the music industry's security: its ability to charge you some kind of rent, on a use-by-use basis, for your purchased music.

    If you get to choose the nub from which the scaffold will dangle, you get control and the power to secure yourself against attackers. If the government, the RIAA or Monsanto chooses the nub, they get control and the power to secure themselves against you.

    In this dilemma, we know what side we fall on. We agree that at the very least, owners should be allowed to know and control their computers.

    But what about users?

    Users of computers don't always have the same interests as the owners of computers— and, increasingly, we will be users of computers that we don't own.

    Where you come down on conflicts between owners and users is going to be one of the most meaningful ideological questions in technology's history. There's no easy answer that I know about for guiding these decisions.

    Let's start with a total pro-owner position: "property maximalism".

    • If it's my computer, I should have the absolute right to dictate the terms of use to anyone who wants to use it. If you don't like it, find someone else's computer to use.

    How would that work in practice? Through some combination of an initialization routine, tamper evidence, law, and physical control. For example, when you turn on your computer for the first time, you initialize a good secret password, possibly signed by your private key.

    Without that key, no one is allowed to change the list of trusted parties from which your computer's TPM will accept bootloaders. We could make it illegal to subvert this system for the purpose of booting an operating system that the device's owner has not approved. Such a law would make spyware really illegal, even more so than it is now, and would also ban the secret installation of DRM.

    We could design the TPM so that if you remove it, or tamper with it, it's really obvious — give it a fragile housing, for example, which is hard to replace after the time of manufacture, so it's really obvious to a computer's owner that someone has modified the device, possibly putting it in an unknown and untrustworthy state. We could even put a lock on the case.
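
    As a rough sketch of that initialization step (purely illustrative, not any real firmware interface): the first person to power the machine on sets an owner secret, and only someone who can present that secret may later change the list of keys the machine trusts.

    ```python
    import hashlib
    import hmac
    import os

    class OwnerTrustStore:
        """Toy model: only whoever knows the owner secret set at first boot
        may change the machine's list of trusted bootloader-signing keys."""

        def __init__(self, owner_password: str):
            self._salt = os.urandom(16)
            self._verifier = self._derive(owner_password)
            self.trusted_keys = set()              # keys the TPM will accept

        def _derive(self, password: str) -> bytes:
            return hashlib.pbkdf2_hmac("sha256", password.encode(), self._salt, 200_000)

        def enrol_key(self, password: str, key: bytes) -> None:
            if not hmac.compare_digest(self._derive(password), self._verifier):
                raise PermissionError("not the owner: trust list unchanged")
            self.trusted_keys.add(key)

    # First boot: the owner initializes the store, then enrols the keys she trusts.
    store = OwnerTrustStore("correct horse battery staple")
    store.enrol_key("correct horse battery staple", b"ubuntu-signing-key")
    ```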

    I can see a lot of benefits to this, but there are downsides, too.

    Consider self-driving cars. There are a lot of these around already, of course, designed by Google and others. It's easy to understand how, on the one hand, self-driving cars are an incredibly great development. We are terrible drivers, and cars kill the shit out of us. Car crashes are the number one cause of death in America for people aged 5-34.

    I've been hit by a car. I've cracked up a car. I'm willing to stipulate that humans have no business driving at all.

    It's also easy to understand how we might be nervous about people being able to homebrew their own car firmware. On one hand, we'd want the source to cars to be open because we'd want to subject it to wide scrutiny. On the other hand, it will be plausible to say, "Cars are safer if they use a locked bootloader that only trusts government-certified firmware".

    And now we're back to whether you get to decide what your computer is doing.

    But there are two problems with this solution:

    First, it won't work. As the copyright wars have shown us, firmware locks aren't very effective against dedicated attackers. People who want to spread mayhem with custom firmware will be able to do just that.

    What's more, it's not a good security approach: if vehicular security models depend on all the other vehicles being well-behaved and the unexpected never arising, we are dead meat.

    Self-driving cars must be conservative in their approach to their own conduct, and liberal in their expectations of others' conduct.

    This is the same advice you get in your first day of driver's ed, and it remains good advice even if the car is driving itself.

    Second, it invites some pretty sticky parallels. Remember the "information superhighway"?

    Say we try to secure our physical roads by demanding that the state (or a state-like entity) gets to certify the firmware of the devices that cruise its lanes. How would we articulate a policy addressing the devices on our (equally vital) metaphorical roads—with comparable firmware locks for PCs, phones, tablets, and other devices?

    After all, the general-purpose network means that MRIs, space-ships, and air-traffic control systems share the "information superhighway" with game consoles, Arduino-linked fart machines, and dodgy voyeur cams sold by spammers from the Pearl River Delta.

    And consider avionics and power-station automation.

    This is a much trickier one. If the FAA mandates a certain firmware for 747s, it's probably going to want those 747s designed so that it and it alone controls the signing keys for their bootloaders. Likewise, the Nuclear Regulatory Commission will want the final say on the firmware for the reactor piles.

    This may be a problem for the same reason that a ban on modifying car firmware is: it establishes the idea that a good way to solve problems is to let "the authorities" control your software.

    But it may be that airplanes and nukes are already so regulated that an additional layer of regulation wouldn't leak out into other areas of daily life — nukes and planes are subject to an extraordinary amount of no-notice inspection and reporting requirements that are unique to their industries.

    The second, bigger problem with "owner controls" is this: what about people who use computers, but don't own them?

    This is not a group of people that the IT industry has a lot of sympathy for, on the whole.

    An enormous amount of energy has been devoted to stopping non-owning users from inadvertently breaking the computers they are using, downloading menu-bars, typing random crap they find on the Internet into the terminal, inserting malware-infected USB sticks, installing plugins or untrustworthy certificates, or punching holes in the network perimeter.

    Energy is also spent stopping users from doing deliberately bad things. They install keyloggers and spyware to ensnare future users, misappropriate secrets, snoop on network traffic, break their machines and disable the firewalls.

    There's a symmetry here. DRM and its cousins are deployed by people who believe you can't and shouldn't be trusted to set policy on the computer you own. Likewise, IT systems are deployed by computer owners who believe that computer users can't be trusted to set policy on the computers they use.

    As a former sysadmin and CIO, I'm not going to pretend that users aren't a challenge. But there are good reasons to treat users as having rights to set policy on computers they don't own.

    Let's start with the business case.

    When we demand freedom for owners, we do so for lots of reasons, but an important one is that computer programmers can't anticipate all the contingencies that their code might run up against — that when the computer says yes, you might need to still say no.

    This is the idea that owners possess local situational awareness that can't be perfectly captured by a series of nested if/then statements.

    It's also where communist and libertarian principles converge:

    • Friedrich Hayek thought that expertise was a diffuse thing, and that you were more likely to find the situational awareness necessary for good decisionmaking very close to the decision itself — devolution gives better results than centralization.

    • Karl Marx believed in the legitimacy of workers' claims over their working environment, saying that the contribution of labor was just as important as the contribution of capital, and demanded that workers be treated as the rightful "owners" of their workplace, with the power to set policy.

    For totally opposite reasons, they both believed that the people at the coalface should be given as much power as possible.

    The death of mainframes was attended by an awful lot of concern over users and what they might do to the enterprise. In those days, users were even more constrained than they are today. They could only see the screens the mainframe let them see, and only undertake the operations the mainframe let them undertake.

    When the PC and Visicalc and Lotus 1-2-3 appeared, employees risked termination by bringing those machines into the office— or by taking home office data to use with those machines.

    Workers developed computing needs that couldn't be met within the constraints set by the firm and its IT department, and didn't think that the legitimacy of their needs would be recognized.

    The standard responses would involve some combination of the following:

    • Our regulatory compliance prohibits the thing that will help you do your job better.

    • If you do your job that way, we won't know if your results are correct.

    • You only think you want to do that.

    • It is impossible to make a computer do what you want it to do.

    • Corporate policy prohibits this.

    These may be true. But often they aren't, and even when they are, they're the kind of "truths" that we give bright young geeks millions of dollars in venture capital to falsify—even as middle-aged admin assistants get written up by HR for trying to do the same thing.

    The personal computer arrived in the enterprise by the back door, over the objections of IT, without the knowledge of management, at the risk of censure and termination. Then it made the companies that fought it billions. Trillions.

    Giving workers powerful, flexible tools was good for firms because people are generally smart and want to do their jobs well. They know stuff their bosses don't know.

    So, as an owner, you don't want the devices you buy to be locked, because you might want to do something the designer didn't anticipate.

    And employees don't want the devices they use all day locked, because they might want to do something useful that the IT dept didn't anticipate.

    This is the soul of Hayekism — we're smarter at the edge than we are in the middle.

    The business world pays a lot of lip service to Hayek's 1940s ideas about free markets. But when it comes to freedom within the companies they run, bosses are stuck a good 50 years earlier, mired in the ideology of Frederick Winslow Taylor and his "scientific management". In this way of seeing things, workers are just an unreliable type of machine whose movements and actions should be scripted by an all-knowing management consultant, who would work with the equally wise company bosses to determine the one true way to do your job. It's about as "scientific" as trepanation or Myers-Briggs personality tests; clinging to it is what let Toyota cream Detroit's big three.

    So, letting enterprise users do the stuff they think will allow them to make more money for their companies will sometimes make their companies more money.

    That's the business case for user rights. It's a good one, but really I just wanted to get it out of the way so that I could get down to the real meat: Human rights.

    This may seem a little weird on its face, but bear with me.

    Earlier this year, I saw a talk by Hugh Herr, Director of the Biomechatronics group at The MIT Media Lab. Herr's talks are electrifying. He starts out with a bunch of slides of cool prostheses: Legs and feet, hands and arms, and even a device that uses focused magnetism to suppress activity in the brains of people with severe, untreatable depression, to amazing effect.

    Then he shows this slide of him climbing a mountain. He's buff, he's clinging to the rock like a gecko. And he doesn't have any legs: just these cool mountain climbing prostheses. Herr looks at the audience from where he's standing, and he says, "Oh yeah, didn't I mention it? I don't have any legs, I lost them to frostbite."

    He rolls up his trouser legs to show off these amazing robotic gams, and proceeds to run up and down the stage like a mountain goat.

    The first question anyone asked was, "How much did they cost?"

    He named a sum that would buy you a nice brownstone in central Manhattan or a terraced Victorian in zone one in London.

    The second question asked was, "Well, who will be able to afford these?"

    To which Herr answered, "Everyone. If you have to choose between a 40-year mortgage on a house and a 40-year mortgage on legs, you're going to choose legs."

    So it's easy to consider the possibility that there are going to be people — potentially a lot of people — who are "users" of computers that they don't own, and where those computers are part of their bodies.

    Most of the tech world understands why you, as the owner of your cochlear implants, should be legally allowed to choose the firmware for them. After all, when you own a device that is surgically implanted in your skull, it makes a lot of sense that you have the freedom to change software vendors.

    Maybe the company that made your implant has the very best signal processing algorithm right now, but if a competitor patents a superior algorithm next year, should you be doomed to inferior hearing for the rest of your life?

    And what if the company that made your ears went bankrupt? What if sloppy or sneaky code let bad guys do bad things to your hearing?

    These problems can only be overcome by the unambiguous right to change the software, even if the company that made your implants is still a going concern.

    That will help owners. But what about users?

    Consider some of the following scenarios:

    • You are a minor child and your deeply religious parents pay for your cochlear implants, and ask for the software that makes it impossible for you to hear blasphemy.

    • You are broke, and a commercial company wants to sell you ad-supported implants that listen in on your conversations and insert "discussions about the brands you love".

    • Your government is willing to install cochlear implants, but they will archive everything you hear and review it without your knowledge or consent.

    Far-fetched? The Canadian border agency was just forced to abandon a plan to fill the nation's airports with hidden high-sensitivity mics that were intended to record everyone's conversations.

    Will the Iranian government, or Chinese government, take advantage of this if they get the chance?

    Speaking of Iran and China, there are plenty of human rights activists who believe that boot-locking is the start of a human rights disaster. It's no secret that high-tech companies have been happy to build "lawful intercept" back-doors into their equipment to allow for warrantless, secret access to communications. As these backdoors are now standard, the capability is still there even if your country doesn't want it.

    In Greece, there is no legal requirement for lawful intercept on telecoms equipment.

    During and after the 2004 Athens Olympics, an unknown person or agency switched on the dormant capability, harvested an unknown quantity of private communications from the highest levels of government, and switched it off again.

    Surveillance in the middle of the network is nowhere near as interesting as surveillance at the edge. As the ghosts of Messrs Hayek and Marx will tell you, there's a lot of interesting stuff happening at the coal-face that never makes it back to the central office.

    Even "democratic" governments know this. That's why the Bavarian government was illegally installing the "Bundestrojaner" — literally, federal trojan — on people's computers, gaining access to their files and keystrokes and much else besides. So it's a safe bet that the totalitarian governments will happily take advantage of boot-locking and move the surveillance right into the box.

    You may not import a computer into Iran unless you limit its trust-model so that it only boots up operating systems with lawful intercept backdoors built into it.

    Now, with an owner-controls model, the first person to use a machine gets to initialize the list of trusted keys and then lock it with a secret or other authorization token. What this means is that the state customs authority must initialize each machine before it passes into the country.

    Maybe you'll be able to do something to override the trust model. But by design, such a system will be heavily tamper-evident, meaning that a secret policeman or informant can tell at a glance whether you've locked the state out of your computer. And it's not just repressive states, of course, who will be interested in this.

    Remember that there are four major customers for the existing censorware/spyware/lockware industry: repressive governments, large corporations, schools, and paranoid parents.

    The technical needs of helicopter mums, school systems and enterprises are convergent with those of the governments of Syria and China. They may not share ideological ends, but they have awfully similar technical means to those ends.

    We are very forgiving of these institutions as they pursue their ends; you can do almost anything if you're protecting shareholders or children.

    For example, remember the widespread indignation, from all sides, when it was revealed that some companies were requiring prospective employees to hand over their Facebook login credentials as a condition of employment?

    These employers argued that they needed to review your lists of friends, and what you said to them in private, before determining whether you were suitable for employment.

    Facebook checks are the workplace urine test of the 21st century. They're a means of ensuring that your private life doesn't have any unsavoury secrets lurking in it, secrets that might compromise your work.

    The nation didn't buy this. From senate hearings to newspaper editorials, the country rose up against the practice.

    But no one seems to mind that many employers routinely insert their own intermediate keys into their employees' devices — phones, tablets and computers. This allows them to spy on your Internet traffic, even when it is "secure", with a lock showing in the browser.

    It gives your employer access to any sensitive site you access on the job, from your union's message board to your bank to Gmail to your HMO or doctor's private patient repository. And, of course, to everything on your Facebook page.
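
    One rough way to see this mechanism is to look at who actually issued the certificate your machine receives for a site. This is only an illustration, under the assumption that the employer's root certificate sits in the device's trust store; on an intercepted connection the issuer shown is the middlebox CA rather than the public CA the site really uses.

    ```python
    import socket
    import ssl

    def issuer_for(host: str, port: int = 443) -> dict:
        """Return the issuer fields of the certificate this machine receives."""
        ctx = ssl.create_default_context()         # uses the device's trust store
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return dict(field[0] for field in cert["issuer"])

    print(issuer_for("www.example.com"))
    ```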

    There's wide consensus that this is OK, because the laptop, phone and tablet your employer issues to you are not your property. They are company property.

    And yet, the reason employers give us these mobile devices is because there is no longer any meaningful distinction between work and home.

    Corporate sociologists who study the way that we use our devices find time and again that employees are not capable of maintaining strict divisions between "work" and "personal" accounts and devices.

    America is the land of the 55-hour work-week, a country where few professionals take any meaningful vacation time, and when they do get away for a day or two, take their work-issued devices with them.

    Even in traditional workplaces, we recognize human rights. We don't put cameras in the toilets to curtail employee theft. If your spouse came by the office on your lunch break and the two of you went into the parking lot so that she or he could tell you that the doctor says the cancer is terminal, you'd be aghast and furious to discover that your employer had been spying on you with a hidden mic.

    But if you used your company laptop to access Facebook on your lunchbreak, wherein your spouse conveys to you that the cancer is terminal, you're supposed to be OK with the fact that your employer has been running a man-in-the-middle attack on your machine and now knows the most intimate details of your life.

    There are plenty of instances in which rich and powerful people — not just workers and children and prisoners — will be users instead of owners.

    Every car-rental agency would love to be able to lo-jack the cars they rent to you; remember, an automobile is just a computer you put your body into. They'd love to log all the places you drive to for "marketing" purposes and analytics.

    There's money to be made in finagling the firmware on the rental-car's GPS to ensure that your routes always take you past certain billboards or fast-food restaurants.

    But in general, the poorer and younger you are, the more likely you are to be a tenant farmer in some feudal lord's computational lands. The poorer and younger you are, the more likely it'll be that your legs will cease to walk if you get behind on payments.

    What this means is that any thug who buys your debts from a payday lender could literally — and legally — threaten to take your legs (or eyes, or ears, or arms, or insulin, or pacemaker) away if you failed to come up with the next installment.

    Earlier, I discussed how an owner override would work. It would involve some combination of physical access-control and tamper-evidence, designed to give owners of computers the power to know and control what bootloader and OS was running on their machine.

    How would a user-override work? An effective user-override would have to leave the underlying computer intact, so that when the owner took it back, she could be sure that it was in the state she believed it to be in. In other words, we need to protect users from owners and owners from users.

    Here's one model for that:

    Imagine that there is a bootloader that can reliably and accurately report on the kernels and OSes it finds on the drive. This is the prerequisite for state/corporate-controlled systems, owner-controlled systems, and user-controlled systems.

    Now, give the bootloader the power to suspend any running OS to disk, encrypting all its threads and parking them, and the power to select another OS from the network or an external drive.

    Say I walk into an Internet cafe, and there's an OS running that I can verify. It has a lawful interception back-door for the police, storing all my keystrokes, files and screens in an encrypted blob which the state can decrypt.

    I'm an attorney, doctor, corporate executive, or merely a human who doesn't like the idea of his private stuff being available to anyone who is friends with a dirty cop.

    So, at this point, I give the three-finger salute with the F-keys. This drops the computer into a minimal bootloader shell, one that invites me to give the net-address of an alternative OS, or to insert my own thumb-drive and boot into an operating system there instead.

    The cafe owner's OS is parked and I can't see inside it. But the bootloader can assure me that it is dormant and not spying on me as my OS fires up. When I'm done, all my working files are trashed, and the minimal bootloader confirms it.

    This keeps the computer's owner from spying on me, and keeps me from leaving malware on the computer to attack its owner.
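
    A thought-experiment sketch of that flow (every name here is invented; no real bootloader works this way): the owner's OS is parked as an encrypted blob that only the bootloader can reopen, a user-supplied OS runs for one session, and the session is wiped before the owner's OS is restored.

    ```python
    from cryptography.fernet import Fernet

    class MiniBootloader:
        """Toy model: park the owner's OS encrypted, run a user-supplied OS for
        one session, wipe the session, then restore the owner's OS untouched."""

        def __init__(self):
            self._park_key = Fernet(Fernet.generate_key())  # never leaves the bootloader
            self._parked = None

        def park_owner_os(self, owner_state: bytes) -> None:
            # Suspend the owner's OS to disk as an encrypted, opaque blob.
            self._parked = self._park_key.encrypt(owner_state)

        def run_user_session(self, user_os_image: bytes) -> bytearray:
            # "Boot" the user's OS; its scratch space exists only for this session.
            return bytearray(b"session scratch for " + user_os_image)

        def end_user_session(self, scratch: bytearray) -> bytes:
            for i in range(len(scratch)):                   # best-effort wipe
                scratch[i] = 0
            return self._park_key.decrypt(self._parked)     # owner's OS, untouched

    bl = MiniBootloader()
    bl.park_owner_os(b"cafe owner's OS state")
    scratch = bl.run_user_session(b"my thumb-drive OS")
    owner_state = bl.end_user_session(scratch)              # comes back intact
    ```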

    There will be technological means of subverting this, but there is a world of difference between starting from a design spec that aims to protect users from owners (and vice-versa) and one that says that users must always be vulnerable to owners' dictates.

    Fundamentally, this is the difference between freedom and openness — between free software and open source.

    Now, human rights and property rights often come into conflict with one another. For example, landlords aren't allowed to enter your home without adequate notice. In many places, hotels can't throw you out if you overstay your reservation, provided that you pay the rack-rate for the room — that's why you often see these rates posted on the back of the room door.

    Repossession of leased goods — cars, for example — is limited by procedures that require notice and the opportunity to rebut claims of delinquent payments.

    When these laws are "streamlined" to make them easier for property holders, we often see human rights abuses. Consider robo-signing eviction mills, which used fraudulent declarations to evict homeowners who were up to date on their mortgages—and even some who didn't have mortgages.

    The potential for abuse in a world made of computers is much greater: your car drives itself to the repo yard. Your high-rise apartment building switches off its elevators and climate systems, stranding thousands of people until a disputed license payment is settled.

    Sounds fanciful? This has already happened with multi-level parking garages.

    Back in 2006, a 314-car Robotic Parking model RPS1000 garage in Hoboken, New Jersey, took all the cars in its guts hostage, locking down the software until the garage's owners paid a licensing bill that they disputed.

    They had to pay it, even as they maintained that they didn't owe anything. What the hell else were they going to do?

    And what will you do when your dispute with a vendor means that you go blind, or deaf, or lose the ability to walk, or become suicidally depressed?

    The negotiating leverage that accrues to owners over users is total and terrifying.

    Users will be strongly incentivized to settle quickly, rather than face the dreadful penalties that could be visited on them in the event of dispute. And when the owner of the device is the state or a state-sized corporate actor, the potential for human rights abuses skyrockets.

    This is not to say that owner override is an unmitigated evil. Think of smart meters that can override your thermostat at peak loads.

    Such meters allow us to switch off coal and other dirty power sources that would otherwise be ramped up to meet peak demand.

    But they work best if users — homeowners who have allowed the power-company to install a smart-meter — can't override the meters. What happens when griefers, crooks, or governments trying to quell popular rebellion use this to turn heat off during a hundred year storm? Or to crank heat to maximum during a heat-wave?

    The HVAC in your house can hold the power of life and death over you — do we really want it designed to allow remote parties to do stuff with it even if you disagree?

    The question is simple. Once we create a design norm of devices that users can't override, how far will that creep?

    Especially risky would be the use of owner override to offer payday loan-style services to vulnerable people: Can't afford artificial eyes for your kids? We'll subsidize them if you let us redirect their focus to sponsored toys and sugar-snacks at the store.

    Foreclosing on owner override, however, has its own downside. It probably means that there will be poor people who will not be offered some technology at all.

    If I can lo-jack your legs, I can lease them to you with the confidence of my power to repo them if you default on payments. If I can't, I may not lease you legs unless you've got a lot of money to begin with.

    But if your legs can decide to walk to the repo-depot without your consent, you will be totally screwed the day that muggers, rapists, griefers or the secret police figure out how to hijack that facility.

    It gets even more complicated, too, because you are the "user" of many systems in the most transitory ways: subway turnstiles, elevators, the blood-pressure cuff at the doctor's office, public buses or airplanes. It's going to be hard to figure out how to create "user overrides" that aren't nonsensical. We can start, though, by saying a "user" is someone who is the sole user of a device for a certain amount of time.

    This isn't a problem I know how to solve. Unlike the War on General Purpose Computers, the Civil War over them presents a series of conundra without (to me) any obvious solutions.

    These problems are a way off, and they only arise if we win the war over general purpose computing first

    But come victory day, when we start planning the constitutional congress for a world where regulating computers is acknowledged as the wrong way to solve problems, let's not paper over the division between property rights and human rights.

    This is the sort of division that, while it festers, puts the most vulnerable people in our society in harm's way. Agreeing to disagree on this one isn't good enough. We need to start thinking now about the principles we'll apply when the day comes.

    If we don't start now, it'll be too late.'

    http://boingboing.net/2012/08/23/civilwar.html
  • Chess master Garry Kasparov: "When Putin's Thugs Came for Me"

    posted by Keito
    2012-08-18 13:24:18
    'I was dragged away Friday by a group of police—in fact carried away with one on each arm and leg.

    The only surprise to come out of Friday's guilty verdict in the trial here of the Russian punk band Pussy Riot was how many people acted surprised. Three young women were sentenced to two years in prison for the prank of singing an anti-Putin "prayer" in the Cathedral of Christ the Savior. Their jailing was the next logical step for Vladimir Putin's steady crackdown on "acts against the social order," the Kremlin's expansive term for any public display of resistance.

    In the 100 days since Mr. Putin's re-election as president, severe new laws against public protest have been passed and the homes of opposition leaders have been raided. These are not the actions of a regime prepared to grant leniency to anyone who offends Mr. Putin's latest ally, the Orthodox Church and its patriarch.

    Unfortunately, I was not there to hear the judge's decision, which she took several hours to read. The crowds outside the court building made entry nearly impossible, so I stood in a doorway and took questions from journalists. Suddenly, I was dragged away by a group of police—in fact carried away with one policeman on each arm and leg.

    The men refused to tell me why I was being arrested and shoved me into a police van. When I got up to again ask why I had been detained, things turned violent. I was restrained, choked and struck several times by a group of officers before being driven to the police station with dozens of other protesters. After several hours I was released, but not before they told me I was being criminally investigated for assaulting a police officer who claimed I had bitten him.

    It would be easy to laugh at such a bizarre charge when there are already so many videos and photos of the police assaulting me. But in a country where you can be imprisoned for two years for singing a song, laughter does not come easily. My bruises will heal long before the members of Pussy Riot are free to see their young children again. In the past, Mr. Putin's critics and enemies have been jailed on a wide variety of spurious criminal charges, from fraud to terrorism.

    But now the masks are off. Unlikely as it may be, the three members of Pussy Riot have become our first true political prisoners.

    Such a brazen step should raise alarms, but the leaders of the Free World are clearly capable of sleeping through any wake-up call. If this was all business as usual for the Putin justice system, the same was true for the international reaction. A spokesman for the Obama administration called the sentence "disproportionate," as if the length of the prison term were the only problem with open repression of political speech. The Russian Constitution is freely available online, but this was a medieval show trial with no connection to the criminal code.

    Mr. Putin is not worried about what the Western press says, or about celebrities tweeting their support for Pussy Riot. These are not the constituencies that concern him. Friday, the Russian paper Vedomosti reported that former Deutsche Bank CEO Josef Ackermann could be put in charge of managing the hundreds of billions of dollars in the Russian sovereign wealth fund. As long as bankers and other Western elites eagerly line up to do Mr. Putin's bidding, the situation in Russia will only get worse.

    If officials at the U.S. State Department are as "seriously concerned" about free speech in Russia as they say, I suggest they drop their opposition to the Magnitsky Act pending in the Senate. That legislation would bring financial and travel sanctions against the functionaries who enact the Kremlin's agenda of repression. Mouthing concern only reinforces the fact that no action will be taken.

    Mr. Putin could not care less about winning public-relations battles in the Western press, or about fighting them at all. He and his cronies care only about money and power. Today's events make it clear that they will fight for those things until Russia's jails are full.

    Mr. Kasparov, a contributing editor of The Wall Street Journal, is the leader of the Russian pro-democracy group United Civil Front and chairman of the U.S.-based Human Rights Foundation. He resides in Moscow.'
  • The UK Government shames us all...

    posted by Keito
    2012-08-17 19:46:27
    I'm figuratively blown away by the amount of money being wasted to try to get hold of Julian Assange. What new depths will the UK stoop to? According to some reports it could cost the taxpayer £50,000/day. Whoever thinks this is a good way for our government to spend money needs their head examining.

    The government is so determined to secure his arrest that they plan to deploy heat-detection technology to ensure he isn't smuggled out of the embassy in a diplomatic bag (which should be immune from interference anyway).

    I think we're about to see just how little the UK cares for abiding by international treaties and laws.

    Our arsehole government pandered to Pinochet, whose crimes were innumerable. Yet when it comes to extraditing a man for not wearing a condom, the corrupt UK politicians seem willing to lube up and bend over for their White House overlords at the earliest possible convenience.

    Anyone who still thinks this whole Assange debacle has anything to do with a bullshit sex-crime is living in a dreamworld.

    Our politicians' actions are defining a generation, our future... our children's future.

    They are paving the way for secrets, lies, war crimes, injustice, corruption, corporate greed and control over our political stage by unscrupulous men and women. They are trying to send a message to any and all future human rights activists and freedom advocates: you can't touch the evil governments of this world; they'll go to great lengths to shut you up. They'll break international law, they'll drop any and all diplomatic principles in their pursuit.

    Apparently the UK deems upholding some extradition law (over a bullshit, made-up charge) more important than internationally agreed treaties, laws and relations. We'd be willing to harm relations with an entire nation - at great risk - in order to see that one man is sent to a country where he faces, at worst, a fine.

    But we all know Assange won't get a fine, should he get extradited... he'll end up in some American gulag, or facing the death penalty... all for showing the world that our beloved 'freedom-loving' western governments have much blood on their hands.

    Our politicians deserve locking up. Their actions are an outrage.