Blog

  • Children Have The Right To Music

    posted by Keito
    2012-09-04 21:14:44
    Bruce Willis 'considering iTunes legal action' against Apple...

    Bruce Willis, the Hollywood actor, is said to be considering legal action against Apple so he can leave his iTunes music collection to his three daughters.

    The 57-year-old action star has reportedly spent thousands of dollars on digital music, which he wants to leave to daughters Rumer, 24, Scout, 20, and Tallulah, 18.

    Existing iTunes rules mean he cannot do so, however, as purchased music is only “borrowed” under a license.

    If Willis is able to successfully challenge the small print, it could benefit millions of frustrated iTunes users who haven’t had the resources to fight the technology giant.

    He is said to be considering two approaches to the digital battle. His first option would be asking his lawyers to establish a family trust to hold the downloads.

    A second approach would be supporting ongoing legal tussles in other US states, where complainants are already seeking to gain more rights to their music.

    With more and more people buying digital media products, the issue of ownership is becoming an increasing problem with many not realising they do not hold the rights to their books, music, films or games.

    Solicitor Chris Walton told The Daily Mail: “Lots of people will be surprised on learning all those tracks and books they have bought over the years don’t actually belong to them. It’s only natural you would want to pass them on to a loved one.

    “The law will catch up, but ideally Apple and the like will update their policies and work out the best solution for their customers.”

    http://apple.slashdot.org/story/12/09/03/153220/bruce-willis-considering-legal-action-against-apple-over-itunes-collection

    UPDATE: According to a tweet made by Bruce's wife, this story isn't actually true... shame!
  • Cory Doctorow: The Coming Civil War over General Purpose Computing

    posted by Keito
    2012-08-28 21:18:46
    Even if we win the right to own and control our computers, a dilemma remains: what rights do owners owe users?



    This talk was delivered at Google in August, and for The Long Now Foundation in July 2012. A transcript of the notes follows.

    I gave a talk in late 2011 at 28C3 in Berlin called "The Coming War on General Purpose Computing"

    In a nutshell, its hypothesis was this:

    • Computers and the Internet are everywhere and the world is increasingly made of them.

    • We used to have separate categories of device: washing machines, VCRs, phones, cars, but now we just have computers in different cases. For example, modern cars are computers we put our bodies in, Boeing 747s are flying Solaris boxes, and hearing aids and pacemakers are computers we put in our bodies.

    • This means that all of our sociopolitical problems in the future will have a computer inside them, too—and a would-be regulator saying stuff like this:

    "Make it so that self-driving cars can't be programmed to drag race"

    "Make it so that bioscale 3D printers can't make harmful organisms or restricted compounds"

    Which is to say: "Make me a general-purpose computer that runs all programs except for one program that freaks me out."

    But there's a problem. We don't know how to make a computer that can run all the programs we can compile except for whichever one pisses off a regulator, or disrupts a business model, or abets a criminal.

    The closest approximation we have for such a device is a computer with spyware on it— a computer that, if you do the wrong thing, can intercede and say, "I can't let you do that, Dave."

    Such a computer runs programs designed to be hidden from the owner of the device, and which the owner can't override or kill. In other words: DRM. Digital Rights Management.

    These computers are a bad idea for two significant reasons. First, they won't solve problems. Breaking DRM isn't hard for bad guys. The copyright wars' lesson is that DRM is always broken with near-immediacy.

    DRM only works if the "I can't let you do that, Dave" program stays a secret. Once the most sophisticated attackers in the world liberate that secret, it will be available to everyone else, too.

    Second, DRM has inherently weak security, which thereby makes overall security weaker.

    Certainty about what software is on your computer is fundamental to good computer security, and you can't know if your computer's software is secure unless you know what software it is running.

    Designing "I can't let you do that, Dave" into computers creates an enormous security vulnerability: anyone who hijacks that facility can do things to your computer that you can't find out about.

    Moreover, once a government thinks it has "solved" a problem with DRM—with all its inherent weaknesses—that creates a perverse incentive to make it illegal to tell people things that might undermine the DRM.

    You know, things like how the DRM works. Or "here's a flaw in the DRM which lets an attacker secretly watch through your webcam or listen through your mic."

    I've had a lot of feedback from various distinguished computer scientists, technologists, civil libertarians and security researchers after 28C3. Within those fields, there is a widespread consensus that, all other things being equal, computers are more secure and society is better served when owners of computers can control what software runs on them.

    Let's examine for a moment what that would mean.

    Most computers today are fitted with a Trusted Platform Module (TPM). This is a secure co-processor mounted on the motherboard. The specifications for TPMs are published, and an industry body certifies compliance with them. To the extent that the spec is good (and the industry body is diligent), it's possible to be reasonably certain that you've got a real, functional TPM in your computer that faithfully implements the spec.

    How is the TPM secure? It contains secrets: cryptographic keys. But it's also secure in that it's designed to be tamper-evident. If you try to extract the keys from a TPM, or remove the TPM from a computer and replace it with a gimmicked one, it will be very obvious to the computer's owner.

    One threat to the TPM is that a crook (or a government, police force or other adversary) might try to physically compromise your computer — tamper-evidence is what lets you know when your TPM has been fiddled with.

    Another TPM threat model is that a piece of malicious software will infect your computer.

    Now, once your computer is compromised this way, you could be in great trouble. All of the sensors attached to the computer—mic, camera, accelerometer, fingerprint reader, GPS—might be switched on without your knowledge. Off goes the data to the bad guys.

    All the data on your computer (sensitive files, stored passwords and web history)? Off it goes to the bad guys—or erased.

    All the keystrokes into your computer—your passwords!—might be logged. All the peripherals attached to your computer—printers, scanners, SCADA controllers, MRI machines, 3D printers— might be covertly operated or subtly altered.

    Imagine if those "other peripherals" included cars or avionics. Or your optic nerve, your cochlea, the stumps of your legs.

    When your computer boots up, the TPM can ask the bootloader for a signed hash of itself and verify that the signature on the hash comes from a trusted party. Once you trust the bootloader to faithfully perform its duties, you can ask it to check the signatures on the operating system, which, once verified, can check the signatures on the programs that run on it.

    This ensures that you know which programs are running on your computer—and that any programs running in secret have managed the trick by leveraging a defect in the bootloader, operating system or other components, and not because a new defect has been inserted into your system to create a facility for hiding things from you.
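
    A toy sketch of that chain of trust, in Python, may help. This is only an illustration of the reasoning: real TPMs record measurements in platform configuration registers and verify vendor signatures, and every name and value below is invented.

    ```python
    import hashlib

    def measure(blob: bytes) -> str:
        """Hash of a boot component, standing in for a TPM measurement."""
        return hashlib.sha256(blob).hexdigest()

    # Pretend firmware images; in reality these are binaries on disk.
    bootloader_image = b"toy bootloader v1"
    kernel_image = b"toy kernel v1"

    # The owner's list of trusted bootloader measurements (stands in for
    # state held by the TPM that only the owner may change).
    trusted_bootloaders = {measure(bootloader_image): "owner-approved bootloader"}

    def verified_boot(bootloader: bytes, kernel: bytes, reported_kernel_digest: str) -> bool:
        """Trust the bootloader first; then let it vouch for the kernel it loads."""
        digest = measure(bootloader)
        if digest not in trusted_bootloaders:
            print("unknown bootloader: refusing to extend trust")
            return False
        if measure(kernel) != reported_kernel_digest:
            print("kernel differs from what the bootloader vouched for")
            return False
        print("chain verified via", trusted_bootloaders[digest])
        return True

    # The trusted bootloader reports the digest of the kernel it is about to run.
    verified_boot(bootloader_image, kernel_image, measure(kernel_image))
    ```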

    This always reminds me of Descartes: he starts off by saying that he can't tell what's true and what's not true, because he's not sure if he really exists.

    He finds a way of proving that he exists, and that he can trust his senses and his faculty for reason.

    Having found a tiny nub of stable certainty on which to stand, he builds a scaffold of logic that he affixes to it, until he builds up an entire edifice.

    Likewise, a TPM is a nub of stable certainty: if it's there, it can reliably inform you about the code on your computer.

    Now, you may find it weird to hear someone like me talking warmly about TPMs. After all, these are the technologies that make it possible to lock down phones, tablets, consoles and even some PCs so that they can't run software of the owner's choosing.

    Jailbreaking" usually means finding some way to defeat a TPM or TPM-like technology. So why on earth would I want a TPM in my computer?

    As with everything important, the devil is in the details.

    Imagine for a moment two different ways of implementing a TPM:

    1. Lockdown

    Your TPM comes with a set of signing keys it trusts, and unless your bootloader is signed by a TPM-trusted party, you can't run it. Moreover, since the bootloader determines which OS launches, you don't get to control the software in your machine.

    2. Certainty

    You tell your TPM which signing keys you trust—say, Ubuntu, EFF, ACLU and Wikileaks—and it tells you whether the bootloaders it can find on your disk have been signed by any of those parties. It can faithfully report the signature on any other bootloaders it finds, and it lets you make up your own damn mind about whether you want to trust any or all of the above.

    Approximately speaking, these two scenarios correspond to the way that iOS and Android work: iOS only lets you run Apple-approved code; Android lets you tick a box to run any code you want. Critically, however, Android lacks the facility to do some crypto work on the software before boot-time and tell you whether the code you think you're about to run is actually what you're about to run.

    It's freedom, but not certainty.
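
    To make the contrast concrete, here is a purely hypothetical sketch of the two policies in Python. The signer names are examples, not real keys; the point is only who holds the trust list and what happens when a signer isn't on it.

    ```python
    def lockdown_boot(bootloader_signer: str, vendor_keys: set) -> bool:
        """Lockdown: only vendor-approved signers may boot; the owner has no say."""
        return bootloader_signer in vendor_keys

    def certainty_boot(bootloader_signer: str, owner_keys: set) -> tuple:
        """Certainty: the firmware reports what it found; the owner decides."""
        if bootloader_signer in owner_keys:
            return True, "signed by a party you chose to trust: " + bootloader_signer
        return False, "signed by " + bootloader_signer + "; boot it anyway? your call"

    # Same unfamiliar signer, two very different outcomes.
    print(lockdown_boot("SomeLinuxDistro", vendor_keys={"VendorOS"}))       # False: refused outright
    print(certainty_boot("SomeLinuxDistro", owner_keys={"Ubuntu", "EFF"}))  # informed choice left to the owner
    ```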

    In a world where the computers we're discussing can see and hear you, where we insert our bodies into them, where they are surgically implanted into us, and where they fly our planes and drive our cars, certainty is a big deal.

    This is why I like the idea of a TPM, assuming it is implemented in the "certainty" mode and not the "lockdown" mode.

    If that's not clear, think of it this way: a "war on general-purpose computing" is what happens when the control freaks in government and industry demand the ability to remotely control your computers.

    The defenders against that attack are also control freaks—like me—but they happen to believe that device-owners should have control over their computers.

    Both sides want control, but differ on which side should have control.

    Control requires knowledge. If you want to be sure that songs can only be moved onto an iPod, but not off of an iPod, the iPod needs to know that the instructions being given to it by the PC (to which it is tethered) are emanating from an Apple-approved iTunes. It needs to know they're not from something that impersonates iTunes in order to get the iPod to give it access to those files.

    If you want to be sure that my PVR won't record the watch-once video-on-demand movie that I've just paid for, you need to be able to ensure that the tuner receiving the video will only talk to approved devices whose manufacturers have promised to honor "do-not-record" flags in the programmes.

    If I want to be sure that you aren't watching me through my webcam, I need to know what the drivers are and whether they honor the convention that the little green activity light is always switched on when my camera is running.

    If I want to be sure that you aren't capturing my passwords through my keyboard, I need to know that the OS isn't lying when it says there aren't any keyloggers on my system.

    Whether you want to be free—or want to enslave—you need control. And for that, you need this knowledge.

    That's the coming war on general purpose computing. But now I want to investigate what happens if we win it.

    We could face an interesting prospect. This I call the coming civil war over general purpose computing.

    Let's stipulate that a victory for the "freedom side" in the war on general purpose computing would result in computers that let their owners know what was running on them. Computers would faithfully report the hash and associated signatures for any bootloaders they found, let their owners control what was running on them, and allow their owners to specify who was allowed to sign their bootloaders, operating systems, and so on.

    There are two arguments that we can make for this:

    1. Human rights

    If your world is made of computers, then designing computers to override their owners' decisions has significant human rights implications. Today we worry that the Iranian government might demand import controls on computers, so that only those capable of undetectable surveillance are operable within its borders. Tomorrow we might worry about whether the British government would demand that NHS-funded cochlear implants be designed to block reception of "extremist" language, to log and report it, or both.

    2. Property rights

    The doctrine of first sale is an important piece of consumer law. It says that once you buy something, it belongs to you, and you should have the freedom to do anything you want with it, even if that hurts the vendor's income. Opponents of DRM like the slogan, "You bought it, you own it."

    Property rights are an incredibly powerful argument. This goes double in America, where strong property rights enforcement is seen as the foundation of all social remedies.

    This goes triple for Silicon Valley, where you can't swing a cat without hitting a libertarian who believes that the major — or only — legitimate function of a state is to enforce property rights and contracts around them.

    Which is to say that if you want to win a nerd fight, property rights are a powerful weapon to have in your arsenal. And not just nerd fights!

    That's why copyfighters are so touchy about the term "Intellectual Property". This synthetic, ideologically-loaded term was popularized in the 1970s as a replacement for "regulatory monopolies" or "creators' monopolies" — because it's a lot easier to get Congress to help you police your property than it is to get them to help enforce your monopoly.

    Here is where the civil war part comes in.

    Human rights and property rights both demand that computers not be designed for remote control by governments, corporations, or other outside institutions. Both require that owners be allowed to specify what software they're going to run, and to freely choose the nub of certainty from which they will suspend the scaffold of their computer's security.

    Remember that security is relative: you are secured from attacks on your ability to freely use your music if you can control your computing environment. This, however, erodes the music industry's own security to charge you some kind of rent, on a use-by-use basis, for your purchased music.

    If you get to choose the nub from which the scaffold will dangle, you get control and the power to secure yourself against attackers. If the government, the RIAA or Monsanto chooses the nub, they get control and the power to secure themselves against you.

    In this dilemma, we know what side we fall on. We agree that at the very least, owners should be allowed to know and control their computers.

    But what about users?

    Users of computers don't always have the same interests as the owners of computers— and, increasingly, we will be users of computers that we don't own.

    Where you come down on conflicts between owners and users is going to be one of the most meaningful ideological questions in technology's history. There's no easy answer that I know about for guiding these decisions.

    Let's start with a total pro-owner position: "property maximalism".

    • If it's my computer, I should have the absolute right to dictate the terms of use to anyone who wants to use it. If you don't like it, find someone else's computer to use.

    How would that work in practice? Through some combination of an initialization routine, tamper evidence, law, and physical control. For example, when you turn on your computer for the first time, you initialize a good secret password, possibly signed by your private key.

    Without that key, no-one is allowed to change the list of trusted parties from which your computer's TPM will accept bootloaders. We could make it illegal to subvert this system for the purpose of booting an operating system that the device's owner has not approved. Such a law would make spyware really illegal, even more so than now, and would also ban the secret installation of DRM.

    We could design the TPM so that if you remove it, or tamper with it, it's really obvious — give it a fragile housing, for example, which is hard to replace after the time of manufacture, so it's really obvious to a computer's owner that someone has modified the device, possibly putting it in an unknown and untrustworthy state. We could even put a lock on the case.
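
    A rough sketch of that first-boot initialization, with invented names: the first person to set up the machine fixes a secret, and afterwards the list of trusted signers can only be changed by someone who knows it. A real implementation would live in the TPM's own authorization machinery, not in application code.

    ```python
    import hashlib, hmac, os

    class OwnerControlledTrustStore:
        def __init__(self, owner_passphrase: str):
            # First boot: the owner initializes the authorization secret.
            self._salt = os.urandom(16)
            self._auth = hashlib.pbkdf2_hmac("sha256", owner_passphrase.encode(), self._salt, 200_000)
            self.trusted_signers = set()

        def _authorized(self, passphrase: str) -> bool:
            candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), self._salt, 200_000)
            return hmac.compare_digest(candidate, self._auth)

        def add_trusted_signer(self, signer: str, passphrase: str) -> bool:
            """Only the holder of the owner secret may extend the trust list."""
            if not self._authorized(passphrase):
                return False
            self.trusted_signers.add(signer)
            return True

    # Usage: a stranger can't quietly add a signing key; the owner can.
    store = OwnerControlledTrustStore("correct horse battery staple")
    print(store.add_trusted_signer("SpywareVendor", "wrong guess"))             # False
    print(store.add_trusted_signer("Ubuntu", "correct horse battery staple"))   # True
    ```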

    I can see a lot of benefits to this, but there are downsides, too.

    Consider self-driving cars. There are a lot of these around already, of course, designed by Google and others. It's easy to understand how, on the one hand, self-driving cars are an incredibly great development. We are terrible drivers, and cars kill the shit out of us. Car crashes are the number 1 cause of death in America for people aged 5-34.

    I've been hit by a car. I've cracked up a car. I'm willing to stipulate that humans have no business driving at all.

    It's also easy to understand how we might be nervous about people being able to homebrew their own car firmware. On one hand, we'd want the source code for cars to be open because we'd want to subject it to wide scrutiny. On the other hand, it will be plausible to say, "Cars are safer if they use a locked bootloader that only trusts government-certified firmware".

    And now we're back to whether you get to decide what your computer is doing.

    But there are two problems with this solution:

    First, it won't work. As the copyright wars have shown us, firmware locks aren't very effective against dedicated attackers. People who want to spread mayhem with custom firmware will be able to do just that.

    What's more, it's not a good security approach: if vehicular security models depend on all the other vehicles being well-behaved and the unexpected never arising, we are dead meat.

    Self-driving cars must be conservative in their approach to their own conduct, and liberal in their expectations of others' conduct.

    This is the same advice you get in your first day of driver's ed, and it remains good advice even if the car is driving itself.

    Second, it invites some pretty sticky parallels. Remember the "information superhighway"?

    Say we try to secure our physical roads by demanding that the state (or a state-like entity) gets to certify the firmware of the devices that cruise its lanes. How would we articulate a policy addressing the devices on our (equally vital) metaphorical roads—with comparable firmware locks for PCs, phones, tablets, and other devices?

    After all, the general-purpose network means that MRIs, space-ships, and air-traffic control systems share the "information superhighway" with game consoles, Arduino-linked fart machines, and dodgy voyeur cams sold by spammers from the Pearl River Delta.

    And consider avionics and power-station automation.

    This is a much trickier one. If the FAA mandates a certain firmware for 747s, it's probably going to want those 747s designed so that it and it alone controls the signing keys for their bootloaders. Likewise, the Nuclear Regulatory Commission will want the final say on the firmware for the reactor piles.

    This may be a problem for the same reason that a ban on modifying car firmware is: it establishes the idea that a good way to solve problems is to let "the authorities" control your software.

    But it may be that airplanes and nukes are already so regulated that an additional layer of regulation wouldn't leak out into other areas of daily life — nukes and planes are subject to an extraordinary amount of no-notice inspection and reporting requirements that are unique to their industries.

    The second, bigger problem with "owner controls" is this: what about people who use computers, but don't own them?

    This is not a group of people that the IT industry has a lot of sympathy for, on the whole.

    An enormous amount of energy has been devoted to stopping non-owning users from inadvertently breaking the computers they are using, downloading menu-bars, typing random crap they find on the Internet into the terminal, inserting malware-infected USB sticks, installing plugins or untrustworthy certificates, or punching holes in the network perimeter.

    Energy is also spent stopping users from doing deliberately bad things. They install keyloggers and spyware to ensnare future users, misappropriate secrets, snoop on network traffic, break their machines and disable the firewalls.

    There's a symmetry here. DRM and its cousins are deployed by people who believe you can't and shouldn't be trusted to set policy on the computer you own. Likewise, IT systems are deployed by computer owners who believe that computer users can't be trusted to set policy on the computers they use.

    As a former sysadmin and CIO, I'm not going to pretend that users aren't a challenge. But there are good reasons to treat users as having rights to set policy on computers they don't own.

    Let's start with the business case.

    When we demand freedom for owners, we do so for lots of reasons, but an important one is that computer programmers can't anticipate all the contingencies that their code might run up against — that when the computer says yes, you might need to still say no.

    This is the idea that owners possess local situational awareness that can't be perfectly captured by a series of nested if/then statements.

    It's also where communist and libertarian principles converge:

    • Friedrich Hayek thought that expertise was a diffuse thing, and that you were more likely to find the situational awareness necessary for good decision-making very close to the decision itself — devolution gives better results than centralization.

    • Karl Marx believed in the legitimacy of workers' claims over their working environment, saying that the contribution of labor was just as important as the contribution of capital, and demanded that workers be treated as the rightful "owners" of their workplace, with the power to set policy.

    For totally opposite reasons, they both believed that the people at the coalface should be given as much power as possible.

    The death of mainframes was attended by an awful lot of concern over users and what they might do to the enterprise. In those days, users were even more constrained than they are today. They could only see the screens the mainframe let them see, and only undertake the operations the mainframe let them undertake.

    When the PC and Visicalc and Lotus 1-2-3 appeared, employees risked termination by bringing those machines into the office— or by taking home office data to use with those machines.

    Workers developed computing needs that couldn't be met within the constraints set by the firm and its IT department, and didn't think that the legitimacy of their needs would be recognized.

    The standard responses would involve some combination of the following:

    • Our regulatory compliance prohibits the thing that will help you do your job better.

    • If you do your job that way, we won't know if your results are correct.

    • You only think you want to do that.

    • It is impossible to make a computer do what you want it to do.

    • Corporate policy prohibits this.

    These may be true. But often they aren't, and even when they are, they're the kind of "truths" that we give bright young geeks millions of dollars in venture capital to falsify—even as middle-aged admin assistants get written up by HR for trying to do the same thing.

    The personal computer arrived in the enterprise by the back door, over the objections of IT, without the knowledge of management, at the risk of censure and termination. Then it made the companies that fought it billions. Trillions.

    Giving workers powerful, flexible tools was good for firms because people are generally smart and want to do their jobs well. They know stuff their bosses don't know.

    So, as an owner, you don't want the devices you buy to be locked, because you might want to do something the designer didn't anticipate.

    And employees don't want the devices they use all day locked, because they might want to do something useful that the IT dept didn't anticipate.

    This is the soul of Hayekism — we're smarter at the edge than we are in the middle.

    The business world pays a lot of lip service to Hayek's 1940s ideas about free markets. But when it comes to freedom within the companies they run, they're stuck a good 50 years earlier, mired in the ideology of Frederick Winslow Taylor and his "scientific management". In this way of seeing things, workers are just an unreliable type of machine whose movements and actions should be scripted by an all-knowing management consultant, who would work with the equally-wise company bosses to determine the one true way to do your job. It's about as "scientific" as trepanation or Myers-Briggs personality tests; it's the ideology that let Toyota cream Detroit's big three.

    So, letting enterprise users do the stuff they think will allow them to make more money for their companies will sometimes make their companies more money.

    That's the business case for user rights. It's a good one, but really I just wanted to get it out of the way so that I could get down to the real meat: Human rights.

    This may seem a little weird on its face, but bear with me.

    Earlier this year, I saw a talk by Hugh Herr, Director of the Biomechatronics group at The MIT Media Lab. Herr's talks are electrifying. He starts out with a bunch of slides of cool prostheses: Legs and feet, hands and arms, and even a device that uses focused magnetism to suppress activity in the brains of people with severe, untreatable depression, to amazing effect.

    Then he shows this slide of him climbing a mountain. He's buff, he's clinging to the rock like a gecko. And he doesn't have any legs: just these cool mountain climbing prostheses. Herr looks at the audience from where he's standing, and he says, "Oh yeah, didn't I mention it? I don't have any legs, I lost them to frostbite."

    He rolls up his trouser legs to show off these amazing robotic gams, and proceeds to run up and down the stage like a mountain goat.

    The first question anyone asked was, "How much did they cost?"

    He named a sum that would buy you a nice brownstone in central Manhattan or a terraced Victorian in zone one in London.

    The second question asked was, "Well, who will be able to afford these?"

    To which Herr answered, "Everyone. If you have to choose between a 40-year mortgage on a house and a 40-year mortgage on legs, you're going to choose legs."

    So it's easy to consider the possibility that there are going to be people — potentially a lot of people — who are "users" of computers that they don't own, and where those computers are part of their bodies.

    Most of the tech world understands why you, as the owner of your cochlear implants, should be legally allowed to choose the firmware for them. After all, when you own a device that is surgically implanted in your skull, it makes a lot of sense that you have the freedom to change software vendors.

    Maybe the company that made your implant has the very best signal processing algorithm right now, but if a competitor patents a superior algorithm next year, should you be doomed to inferior hearing for the rest of your life?

    And what if the company that made your ears went bankrupt? What if sloppy or sneaky code let bad guys do bad things to your hearing?

    These problems can only be overcome by the unambiguous right to change the software, even if the company that made your implants is still a going concern.

    That will help owners. But what about users?

    Consider some of the following scenarios:

    • You are a minor child and your deeply religious parents pay for your cochlear implants, and ask for the software that makes it impossible for you to hear blasphemy.

    • You are broke, and a commercial company wants to sell you ad-supported implants that listen in on your conversations and insert "discussions about the brands you love".

    • Your government is willing to install cochlear implants, but they will archive everything you hear and review it without your knowledge or consent.

    Far-fetched? The Canadian border agency was just forced to abandon a plan to fill the nation's airports with hidden high-sensitivity mics that were intended to record everyone's conversations.

    Will the Iranian government, or Chinese government, take advantage of this if they get the chance?

    Speaking of Iran and China, there are plenty of human rights activists who believe that boot-locking is the start of a human rights disaster. It's no secret that high-tech companies have been happy to build "lawful intercept" back-doors into their equipment to allow for warrantless, secret access to communications. As these backdoors are now standard, the capability is still there even if your country doesn't want it.

    In Greece, there is no legal requirement for lawful intercept on telecoms equipment.

    During the 2004/5 Olympic bidding process, an unknown person or agency switched on the dormant capability, harvested an unknown quantity of private communications from the highest level, and switched it off again.

    Surveillance in the middle of the network is nowhere near as interesting as surveillance at the edge. As the ghosts of Messrs Hayek and Marx will tell you, there's a lot of interesting stuff happening at the coal-face that never makes it back to the central office.

    Even "democratic" governments know this. That's why the Bavarian government was illegally installing the "bundestrojan" — literally, state-trojan — on peoples' computers, gaining access to their files and keystrokes and much else besides. So it's a safe bet that the totalitarian governments will happily take advantage of boot-locking and move the surveillance right into the box.

    You may not import a computer into Iran unless you limit its trust-model so that it only boots up operating systems with lawful intercept backdoors built into it.

    Now, with an owner-controls model, the first person to use a machine gets to initialize the list of trusted keys and then lock it with a secret or other authorization token. What this means is that the state customs authority must initialize each machine before it passes into the country.

    Maybe you'll be able to do something to override the trust model. But by design, such a system will be heavily tamper-evident, meaning that a secret policeman or informant can tell at a glance whether you've locked the state out of your computer. And it's not just repressive states, of course, who will be interested in this.

    Remember that there are four major customers for the existing censorware/spyware/lockware industry: repressive governments, large corporations, schools, and paranoid parents.

    The technical needs of helicopter mums, school systems and enterprises are convergent with those of the governments of Syria and China. They may not share ideological ends, but they have awfully similar technical means to those ends.

    We are very forgiving of these institutions as they pursue their ends; you can do almost anything if you're protecting shareholders or children.

    For example, remember the widespread indignation, from all sides, when it was revealed that some companies were requiring prospective employees to hand over their Facebook login credentials as a condition of employment?

    These employers argued that they needed to review your lists of friends, and what you said to them in private, before determining whether you were suitable for employment.

    Facebook checks are the workplace urine test of the 21st century. They're a means of ensuring that your private life doesn't have any unsavoury secrets lurking in it, secrets that might compromise your work.

    The nation didn't buy this. From senate hearings to newspaper editorials, the country rose up against the practice.

    But no one seems to mind that many employers routinely insert their own intermediate keys into their employees' devices — phones, tablets and computers. This allows them to spy on your Internet traffic, even when it is "secure", with a lock showing in the browser.

    It gives your employer access to any sensitive site you access on the job, from your union's message board to your bank to Gmail to your HMO or doctor's private patient repository. And, of course, to everything on your Facebook page.
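
    One way to see that mechanism from the user's side: ask your machine who actually issued the certificate it received for a given site. The sketch below uses Python's standard ssl module and the system trust store (which is where an employer-added key would live); if the issuer turns out to be an internal corporate authority rather than a public one, the connection is being intercepted and re-signed. The hostname is just an example.

    ```python
    import socket, ssl

    def certificate_issuer(host: str, port: int = 443) -> str:
        """Return the organisation that issued the certificate presented for host."""
        ctx = ssl.create_default_context()  # uses the machine's trust store,
                                            # including any employer-added keys
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        issuer = dict(field[0] for field in cert["issuer"])
        return issuer.get("organizationName", issuer.get("commonName", "unknown"))

    # A well-known public CA is expected here; a corporate name would suggest interception.
    print(certificate_issuer("www.example.org"))
    ```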

    There's wide consensus that this is OK, because the laptop, phone and tablet your employer issues to you are not your property. They are company property.

    And yet, the reason employers give us these mobile devices is because there is no longer any meaningful distinction between work and home.

    Corporate sociologists who study the way that we use our devices find time and again that employees are not capable of maintaining strict divisions between "work" and "personal" accounts and devices.

    America is the land of the 55-hour work-week, a country where few professionals take any meaningful vacation time, and when they do get away for a day or two, take their work-issued devices with them.

    Even in traditional workplaces, we recognize human rights. We don't put cameras in the toilets to curtail employee theft. If your spouse came by the office on your lunch break and the two of you went into the parking lot so that she or he could tell you that the doctor says the cancer is terminal, you'd be aghast and furious to discover that your employer had been spying on you with a hidden mic.

    But if you used your company laptop to access Facebook on your lunchbreak, wherein your spouse conveys to you that the cancer is terminal, you're supposed to be OK with the fact that your employer has been running a man-in-the-middle attack on your machine and now knows the most intimate details of your life.

    There are plenty of instances in which rich and powerful people — not just workers and children and prisoners — will be users instead of owners.

    Every car-rental agency would love to be able to lo-jack the cars they rent to you; remember, an automobile is just a computer you put your body into. They'd love to log all the places you drive to for "marketing" purposes and analytics.

    There's money to be made in finagling the firmware on the rental-car's GPS to ensure that your routes always take you past certain billboards or fast-food restaurants.

    But in general, the poorer and younger you are, the more likely you are to be a tenant farmer in some feudal lord's computational lands. The poorer and younger you are, the more likely it'll be that your legs will cease to walk if you get behind on payments.

    What this means is that any thug who buys your debts from a payday lender could literally — and legally — threaten to take your legs (or eyes, or ears, or arms, or insulin, or pacemaker) away if you failed to come up with the next installment.

    Earlier, I discussed how an owner override would work. It would involve some combination of physical access-control and tamper-evidence, designed to give owners of computers the power to know and control what bootloader and OS was running on their machine.

    How would a user-override work? An effective user-override would have to leave the underlying computer intact, so that when the owner took it back, she could be sure that it was in the state she believed it to be in. In other words, we need to protect users from owners and owners from users.

    Here's one model for that:

    Imagine that there is a bootloader that can reliably and accurately report on the kernels and OSes it finds on the drive. This is the prerequisite for state/corporate-controlled systems, owner-controlled systems, and user-controlled systems.

    Now, give the bootloader the power to suspend any running OS to disk, encrypting all its threads and parking them, and the power to select another OS from the network or an external drive.

    Say I walk into an Internet cafe, and there's an OS running that I can verify. It has a lawful interception back-door for the police, storing all my keystrokes, files and screens in an encrypted blob which the state can decrypt.

    I'm an attorney, doctor, corporate executive, or merely a human who doesn't like the idea of his private stuff being available to anyone who is friends with a dirty cop.

    So, at this point, I give the three-finger salute with the F-keys. This drops the computer into a minimal bootloader shell, one that invites me to give the net-address of an alternative OS, or to insert my own thumb-drive and boot into an operating system there instead.

    The cafe owner's OS is parked and I can't see inside it. But the bootloader can assure me that it is dormant and not spying on me as my OS fires up. When I'm done, all my working files are trashed, and the minimal bootloader confirms it.

    This keeps the computer's owner from spying on me, and keeps me from leaving malware on the computer to attack its owner.
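
    Sketched in Python with invented names, the flow described above is just a small state machine: park and seal the owner's OS, boot the user's choice, and destroy the user's working state on the way out. This is an illustration of the design, not real firmware.

    ```python
    class MinimalBootloaderShell:
        def __init__(self):
            self.active_os = "owner-os"
            self.parked_os = None          # the owner's OS, suspended and sealed

        def user_override(self, user_os_source: str) -> None:
            """Triggered by the user's three-finger salute."""
            # 1. Suspend the running OS and seal its memory so the user can't read it.
            self.parked_os = ("sealed", self.active_os)
            # 2. Boot the user's OS from a network address or a thumb drive.
            self.active_os = user_os_source
            print("owner OS parked and sealed; now running", user_os_source)

        def user_logout(self) -> None:
            """Destroy the user's working state, then restore the owner's OS intact."""
            print("wiping all working files from", self.active_os)
            self.active_os, self.parked_os = self.parked_os[1], None
            print("owner OS restored, untouched:", self.active_os)

    # Usage
    shell = MinimalBootloaderShell()
    shell.user_override("usb://my-live-linux")
    shell.user_logout()
    ```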

    There will be technological means of subverting this, but there is a world of difference between starting from a design spec that aims to protect users from owners (and vice-versa) and one that says that users must always be vulnerable to owners' dictates.

    Fundamentally, this is the difference between freedom and openness — between free software and open source.

    Now, human rights and property rights often come into conflict with one another. For example, landlords aren't allowed to enter your home without adequate notice. In many places, hotels can't throw you out if you overstay your reservation, provided that you pay the rack-rate for the room — that's why you often see rack rates posted on the back of the room door.

    Repossession of leased goods — cars, for example — is limited by procedures that require notice and the opportunity to rebut claims of delinquent payments.

    When these laws are "streamlined" to make them easier for property holders, we often see human rights abuses. Consider robo-signing eviction mills, which used fraudulent declarations to evict homeowners who were up to date on their mortgages—and even some who didn't have mortgages.

    The potential for abuse in a world made of computers is much greater: your car drives itself to the repo yard. Your high-rise apartment building switches off its elevators and climate systems, stranding thousands of people until a disputed license payment is settled.

    Sounds fanciful? This has already happened with multi-level parking garages.

    Back in 2006, a 314-car Robotic Parking model RPS1000 garage in Hoboken, New Jersey, took all the cars in its guts hostage, locking down the software until the garage's owners paid a licensing bill that they disputed.

    They had to pay it, even as they maintained that they didn't owe anything. What the hell else were they going to do?

    And what will you do when your dispute with a vendor means that you go blind, or deaf, or lose the ability to walk, or become suicidally depressed?

    The negotiating leverage that accrues to owners over users is total and terrifying.

    Users will be strongly incentivized to settle quickly, rather than face the dreadful penalties that could be visited on them in the event of dispute. And when the owner of the device is the state or a state-sized corporate actor, the potential for human rights abuses skyrockets.

    This is not to say that owner override is an unmitigated evil. Think of smart meters that can override your thermostat at peak loads.

    Such meters allow us to switch off coal and other dirty power sources that would otherwise be ramped up at peak times.

    But they work best if users — homeowners who have allowed the power-company to install a smart-meter — can't override the meters. What happens when griefers, crooks, or governments trying to quell popular rebellion use this to turn heat off during a hundred year storm? Or to crank heat to maximum during a heat-wave?

    The HVAC in your house can hold the power of life and death over you — do we really want it designed to allow remote parties to do stuff with it even if you disagree?

    The question is simple. Once we create a design norm of devices that users can't override, how far will that creep?

    Especially risky would be the use of owner override to offer payday loan-style services to vulnerable people: Can't afford artificial eyes for your kids? We'll subsidize them if you let us redirect their focus to sponsored toys and sugar-snacks at the store.

    Foreclosing on owner override, however, has its own downside. It probably means that there will be poor people who will not be offered some technology at all.

    If I can lo-jack your legs, I can lease them to you with the confidence of my power to repo them if you default on payments. If I can't, I may not lease you legs unless you've got a lot of money to begin with.

    But if your legs can decide to walk to the repo-depot without your consent, you will be totally screwed the day that muggers, rapists, griefers or the secret police figure out how to hijack that facility.

    It gets even more complicated, too, because you are the "user" of many systems in the most transitory ways: subway turnstiles, elevators, the blood-pressure cuff at the doctor's office, public buses or airplanes. It's going to be hard to figure out how to create "user overrides" that aren't nonsensical. We can start, though, by saying a "user" is someone who is the sole user of a device for a certain amount of time.

    This isn't a problem I know how to solve. Unlike the War on General Purpose Computers, the Civil War over them presents a series of conundra without (to me) any obvious solutions.

    These problems are a way off, and they only arise if we win the war over general purpose computing first.

    But come victory day, when we start planning the constitutional congress for a world where regulating computers is acknowledged as the wrong way to solve problems, let's not paper over the division between property rights and human rights.

    This is the sort of division that, while it festers, puts the most vulnerable people in our society in harm's way. Agreeing to disagree on this one isn't good enough. We need to start thinking now about the principles we'll apply when the day comes.

    If we don't start now, it'll be too late.

    http://boingboing.net/2012/08/23/civilwar.html
  • Life is not read-only

    posted by Keito
    2012-08-21 20:48:50
    They say it is piracy. Downright stealing from other people, that's what downloading is. You're taking something for sale and not paying for it. Do you shoplift, or break into houses? Why should you download for free?

    Making media is hard work: it cost three million dollars just to remaster, package, and advertise that latest compilation. How will artists make a living? How will real culture keep going?


    Well. Maybe you didn't exactly take something from someone. Maybe you didn't really discover that stuff on a shelf. Maybe you weren't going to spend all that money on that "copy-protected" thing anyway.

    And these things are sticky. Music you can't copy, films you can't tape, files with restrictions, and collections that vanish when you swap the music player... Some companies even build phones and computers on which they are the ones who decide which programs you may run.

    Things worsen when the law is changed to suit these practices: in several countries, it is illegal to circumvent such restrictions.

    What are you here for? What is really important in life? At the end of the year, what makes it good to you? Good time with friends and relatives? Discovering a great album? Expressing your love, or discontent? Learning new things? Having a great idea? An email from someone special?



    Less important things include: a larger number of pixels - a sleek but already outdated iPod - a "premium" subscription - a quickly absorbed pay rise - lots of high-res TV watching... That's good stuff we all enjoy, but in the end it doesn't count for much.

    The chances are you are not going to be an exceptional astronaut. You are not going to swim across the Atlantic. You are not going to be a world leader. Life is right now. It is about sharing and expressing thoughts, ideas and feelings.

    Life is not read-only. It is made of bits that cannot be sold with locks on them. If you cannot choose, try, taste, witness, think, discover, help others discover, express, share, debate, it's not worth a lot. Life should be read-and-write.

    Perhaps the copyright system isn't as legitimate as some would like you to believe. In fact Martin Luther King's speech I Have a Dream © is still copyrighted. You are not allowed to sing Happy Birthday To You © in a movie without paying rights. The expression Freedom of Expression™ is trademarked.

    File sharing is turned into a crime. People are tried and jailed for developing technologies that enable others to share files (they call it: "Conspiracy to commit copyright infringement").

    But where are the people who invented and sold video recorders, photocopying machines, and cassette players with a record button? In fact, where are the people who invented digital music players?

    It is a disproportionate joke. While thousands work hard to make our children want to smoke, or to export even more landmines, sharing files (the very same files that are streamed on YouTube) makes you liable for $150,000 compensation per downloaded song.

    And it is said the music industry is endangered.

    Who wants an industry for music? Such an industry anyway? Let industries be for canned food and cars - not for creativity.

    Culture is not damaged when you copy something. Creativity is not diminished when you discover something. Whole societies are improved when people learn and express things.

    Participate. Enjoy. Discover. Express. Share.

    So. A reasoned society where artists can make a living and you're not a criminal because you share music is possible. A few suggestions:

    ***Music***

    Listen to artists live (not if they charge $200)

    Check out what your favorite artists think and do (you might learn things)

    Buy music the intelligent way

    If the artist is dead, save the expense of a CD and spend the money on something else

    Sing along to the lyrics

    Don't re-buy your music if you have it as CDs or LPs.



    ***Participate***

    Add your bit to Wikipedia

    Start a blog and express opinions

    Rip, mix, burn, sample, shuffle, remix your music.



    ***Movies***

    Go to small theatres (more likely to promote artists not industries)

    Don't buy into restrictive media players and DRM technologies

    Share good movies with your friends, those you believe everyone should see in their lives.



    ***Software***

    Learn what free software is

    then install Firefox, then get Linux

    Write your work files in a standard open format



    ***Everyday***

    Think, read about copyright, and how copyable things differ from material objects

    Learn about Creative Commons. Browse all the good stuff and open your ears

    Keep a copy of your favourite texts

    Go out and do things

    Don't let your culture get eaten by big brands

    Keep performing the song Happy Birthday without legal authorisation.



    http://www.lifesnotreadonly.net/
  • Guide to DRM-free Living

    posted by Keito
    2012-07-27 19:39:51
    The good folks over at Defective By Design have released a major update to their Guide to DRM-free Living, with dozens of new places to get ebooks, movies, and music without DRM.

    Check it out!

    http://www.defectivebydesign.org/node/2241
  • Japan: Police arrest "anti DRM" journalists

    posted by Keito
    2012-07-22 20:40:49
    Four journalists from SANSAI BOOKS have been arrested for selling, through the company website, copies of a magazine published last year (with a free cover-mounted disc) focused on how to back up and rip DVDs.

    "They violated Japan's Unfair Competition Prevention Law that recently has been revised to make illegal the sale of any DRM circumvention device or software.

    It's interesting to note that the Japanese cyber police could arrest the Amazon Japan CEO too, as the online giant sells a lot of magazines, books and software packages for DVD copying and ripping: exactly what got the Sansai Books staff in trouble. But I bet Amazon Japan's offices will not get any visit from the local police...

    The Japanese entertainment industry is getting full support from politicians for laws that make SOPA look like liberal legislation (from this October, downloading a single illegal MP3 could land a Japanese p2p user in jail for 2 years).

    Among other things, this law makes illegal all the Linux distributions which come pre-installed with libdvdcss, like BackTrack, CrunchBang Linux, LinuxMCE, Linux Mint, PCLinuxOS, Puppy Linux 4.2.1, Recovery Is Possible, Slax, Super OS, Pardus, and XBMC Live.

    Looks like the entertainment industry wants to attack Sansai Books and make an example of it, because it is a publishing company focused on digital backup freedom.

    There is virtually no discussion among journalists and technology experts about the 4 arrested colleagues. This makes me wonder how a country as advanced as Japan can progress without developing a cultural awareness of these issues."

    http://blog.wired.it/otakunews/2012/07/20/japan-police-arrest-anti-drm-journalists.html