Blog

  • Unity is Strength

    posted by Keito
    2012-09-04 21:22:19
    Telecomix Crypto Munitions Bureau works for the benefit of cipherspace. Cipherspace is the state of crypto anarchy. This means that your identity is anonymous as long as you stay protected. There are no identities or authorities in cipherspace, and it is not possible to enforce laws where there is no identity, or where there are no authorities.

    Today there are several threats to the inhabitants of the internet. The politicians of oppressive regimes in the east and the west, in the north and the south, are imposing surveillance. Surveillance of entire networks. Of what people say to each other, of what information is transmitted between bots and humans alike.

    This aggression must be met with the strongest encryption algorithms available to modern computers. With onion and garlic routing it is possible to erect the fractal cipherspace. With distributed hash tables it is possible to create networks that have no central node. No one controls the fractal cipherspace. The internet as we know it turns into darknet.
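
    As an illustration of the layering idea, here is a minimal Python sketch of onion-style encryption, assuming the third-party cryptography package; the relay names, keys, and helper functions are hypothetical, and real Tor or i2p circuits negotiate per-hop keys with public-key handshakes rather than sharing symmetric keys in one process.

        # Minimal onion-layering sketch (illustrative only, not the real Tor/i2p protocol).
        # The sender wraps a message in one encryption layer per relay; each relay
        # can remove only the layer addressed to it.
        from cryptography.fernet import Fernet

        # Hypothetical relays, each holding its own symmetric key.
        relay_keys = {name: Fernet.generate_key() for name in ("entry", "middle", "exit")}

        def wrap(message, path):
            # The innermost layer belongs to the last hop, the outermost to the first.
            for name in reversed(path):
                message = Fernet(relay_keys[name]).encrypt(message)
            return message

        def peel(onion, name):
            # A single relay removes only its own layer.
            return Fernet(relay_keys[name]).decrypt(onion)

        path = ["entry", "middle", "exit"]
        onion = wrap(b"hello cipherspace", path)
        for hop in path:
            onion = peel(onion, hop)   # intermediate hops still see only ciphertext
        print(onion)                   # b'hello cipherspace' appears only after the last hop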

    Telecomix Crypto Munitions Bureau recommends that you use the following software: i2p, for anonymous and secure communications; GNU Privacy Guard, for direct and verified communication; and The Onion Router, Tor, to access the internets.

    Telecomix Munitions is a defense bureau.

    You can change the future of the internets by joining us in defending the networks and creating cipherspace.

    You can help defend yourself and your friends, yes, all inhabitants of the networks.

    By learning a few skills you can take control over technology.

    Telecomix Munitions is currently developing and promoting advanced security devices that can endure even the harshest forms of government or corporate surveillance.

    Your personal computer is an encryption device. Modern hardware can transform plain text to ciphertext with ease, so rapidly that you barely notice the difference between unencrypted and encrypted data.
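
    A minimal sketch of that claim, assuming Python and the third-party cryptography package: time how long AES-256-GCM takes to turn roughly sixteen megabytes of plain text into ciphertext on ordinary hardware (the message and sizes are made up for illustration).

        # Time a bulk encryption of ~16 MB of plain text with AES-256-GCM.
        import os, time
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)
        aesgcm = AESGCM(key)
        nonce = os.urandom(12)            # 96-bit nonce; never reuse one with the same key
        plaintext = b"information is nothing but numbers " * 450_000   # roughly 16 MB

        start = time.perf_counter()
        ciphertext = aesgcm.encrypt(nonce, plaintext, None)            # plain text -> ciphertext
        elapsed = time.perf_counter() - start

        print(f"encrypted {len(plaintext) / 1e6:.0f} MB in {elapsed:.3f} s")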

    The laws of mathematics are infinitely stronger than the laws of nations and corporations, as human laws are really only ink on paper. The laws of mathematics, on the other hand, are the laws that define our very universe. With modern crypto algorithms we can use this fact to defend free speech and the integrity of both bots and humans. Information is nothing but numbers, numbers governed not by human laws, but by the laws of mathematics.

    Networks that utilize the power of cryptography already exist. It will not be possible to stop the spread of the fractal cipherspace.

    To find out more, come to cryptoanarchy.org
  • What developers can learn from Anonymous

    posted by Keito
    2012-08-29 20:59:29
    'The reason Anonymous has a permanent place in our collective imagination: For a time, its organizational model worked very well.


    I've been credited with coining the term "do-ocracy." When I've had the opportunity to lead an open source project, I've preferred to "run" it as a do-ocracy, which in essence means I might give my opinion, but you're free to ignore it. In other words, actual developers should be empowered to make all the low-level decisions themselves.

    When you think about it, the hacker group Anonymous is probably one of the world's most do-ocratic organizations. Regardless of where you stand on Anonymous' tactics, politics, or whatever, I think the group has something to teach developers and development organizations.

    As leader of an open source project, I can revoke committer access for anyone who misbehaves, but membership in Anonymous is a free-for-all. Sure, doing something in Anonymous' name that even a minority of "members" dislike would probably be a tactical mistake, but Anonymous has no trademark protection under the law; the organization simply has an overall vision and flavor. Its members carry out acts based on that mission. And it has enjoyed a great deal of success -- in part due to the lack of central control.

    Compare this to the level of control in many corporate development organizations. Some of that control is necessary, but often it's taken to gratuitous lengths. If you hire great developers, set general goals for the various parts of the project, and collect metrics, you probably don't need to exercise a lot of control to meet your requirements.

    Is it possible to apply do-ocracy outside of open source and hacktivism? Not to the same degree Anonymous does, but in moderate amounts, it could improve the overall quality of our software and our jobs.

    Vision and culture rule

    Anonymous members pick targets and carry out actions based on the general vision and culture of the group. Whether in a do-ocracy or not, vision goes a long way.

    Some years back I worked for a network equipment company. It was probably one of the worst jobs I've ever had, complete with rows of beige cubicles highlighted with sickly green trim. Not only was I told to write my Java classes mostly in caps, with few files and minimal whitespace, but each day we had hours of conference calls with a team in New Jersey. Our computers were vintage and our shell connection was slow. The "vision" was to try and catch up with whatever Cisco was doing.

    Internally, the project was considered a success, but to me it was clearly a failure. I'd be shocked if the company kept a single customer from leaving, and I'm virtually positive it didn't land new ones. The website was horribly confusing and unattractive. It was intended to be a B2B site. The dilapidated culture of the company and its hollow objective coupled with a bizarre need for control yielded predictable outcomes.

    Consider how Anonymous works. It started with a general vision of anarchistic attacks against centers of power. Over time, this has become specific to punishing "bad behavior" and grabbing attention. There is no five-year plan (that we know of). Something happens, folks come together -- in an IRC chat or other medium -- and collaborate on their work. Despite the lack of an overall plan, tactical successes occur.

    On the other hand, lack of a plan causes Anonymous to be a slave to the news cycle. While I'm not saying its activities at the height of the Arab Spring didn't contribute, key strategic objectives were not accomplished -- for instance, the repeated calls by freedom fighters to bring down Gadhafi's satellite TV channel. This is where a plan would be helpful. I've seen a lot of organizations function with neither shared vision nor a plan. I've yet to see a successful software project without both.

    Control has its limits

    Many managers believe that if they aren't getting the results they want, they can just put pressure on the team. But as a developer who's transitioned to a management role, I can tell you that the more I push that button, the less effective it is.

    Consider the misadventures of our hacker anti-heroes. Where Anonymous has had a nerve center, it has been attacked, which has led to arrests. The effects have trickled down and negatively affected the group.

    We can also see this in server architecture. There are still clustering platforms managed through a central server -- the weak point in everything from Hadoop to WebSphere. Yet we're watching the evolution of these architectures away from central control. This results in less predictability in some circumstances, but makes them more robust in the long term.

    That metaphor is transferable to the management of software projects. Yes, setting expectations, establishing norms, and spurring motivation can have a great positive effect and avert crises. I am not advocating for anarchy. But the loose affiliation model of Anonymous, an organization notorious for wreaking chaos, has more to teach than many of us would like to admit.'

    https://www.infoworld.com/d/application-development/what-developers-can-learn-anonymous-200786
  • A Sullied Apple

    posted by Keito
    2012-08-17 19:03:59
  • Norvig vs. Chomsky and the Fight for the Future of AI

    posted by Keito
    2012-07-27 21:40:23
    "When the Director of Research for Google compares one of the most highly regarded linguists of all time to Bill O’Reilly, you know it is on. Recently, Peter Norvig, Google’s Director of Research and co-author of the most popular artificial intelligence textbook in the world, wrote a webpage extensively criticizing Noam Chomsky, arguably the most influential linguist in the world. Their disagreement points to a revolution in artificial intelligence that, like many revolutions, threatens to destroy as much as it improves. Chomsky, one of the old guard, wishes for an elegant theory of intelligence and language that looks past human fallibility to try to see simple structure underneath. Norvig, meanwhile, represents the new philosophy: truth by statistics, and simplicity be damned. Disillusioned with simple models, or even Chomsky’s relatively complex models, Norvig has of late been arguing that with enough data, attempting to fit any simple model at all is pointless. The disagreement between the two men points to how the rise of the Internet poses the same challenge to artificial intelligence that it has to human intelligence: why learn anything when you can look it up?

    Chomsky started the current argument with some remarks made at a symposium commemorating MIT’s 150th birthday. According to MIT’s Technology Review,

    Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. “That’s a notion of [scientific] success that’s very novel. I don’t know of anything like it in the history of science,” said Chomsky.

    To frame Chomsky’s position as scientific elegance versus complexity is not quite fair, because Chomsky’s theories have themselves become more and more complex over the years to account for all the variations in human language. Chomsky hypothesized that humans biologically know how to use language, besides just a few parameters that need to be set. But the number of parameters in his theory continued to multiply, never quite catching up to the number of exceptions, until it was no longer clear that Chomsky’s theories were elegant anymore. In fact, one could argue that the state of Chomskyan linguistics is like the state of astronomy circa Copernicus: it wasn’t that the geocentric model didn’t work, but the theory required so many additional orbits-within-orbits that people were finally willing to accept a different way of doing things. AI endeavored for a long time to work with elegant logical representations of language, and it just proved impossible to enumerate all the rules, or pretend that humans consistently followed them. Norvig points out that basically all successful language-related AI programs now use statistical reasoning (including IBM’s Watson, which I wrote about here previously).

    But Norvig is now arguing for an extreme pendulum swing in the other direction, one which is in some ways simpler, and in others, ridiculously more complex. Current speech recognition, machine translation, and other modern AI technologies typically use a model of language that would make Chomskyan linguists cry: for any sequence of words, there is some probability that it will occur in the English language, which we can measure by counting how often its parts appear on the internet. Forget nouns and verbs, rules of conjugation, and so on: deep parsing and logic are the failed techs of yesteryear. In their place is the assumption that, with enough data from the internet, you can reason statistically about what the next word in a sentence will be, right down to its conjugation, without necessarily knowing any grammatical rules or word meanings at all. The limited understanding employed in this approach is why machine translation occasionally delivers amusingly bad results. But the Google approach to this problem is not to develop a more sophisticated understanding of language; it is to try to get more data, and build bigger lookup tables. Perhaps somewhere on the internet, somebody has said exactly what you are saying right now, and all we need to do is go find it. AIs attempting to use language in this way are like elementary school children googling the answers to their math homework: they might find the answer, but one can’t help but feel it doesn’t serve them well in the long term.
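
    To make the counting idea concrete, here is a minimal Python sketch of a bigram model; the tiny corpus and the predict_next helper are invented for illustration, standing in for the web-scale counts described above.

        # Count word bigrams and predict the next word by frequency alone:
        # no grammar rules and no word meanings anywhere in the code.
        from collections import Counter, defaultdict

        corpus = "the tide goes in and the tide goes out and the tide goes in".split()

        bigrams = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            bigrams[prev][nxt] += 1            # how often does nxt follow prev?

        def predict_next(word):
            # Return the most frequently observed follower of `word`.
            return bigrams[word].most_common(1)[0][0]

        print(predict_next("tide"))            # 'goes' -- pure counting, no understanding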

    In his essay, Norvig argues that there are ways of doing statistical reasoning that are more sophisticated than looking at just the previous one or two words, even if they aren’t applied as often in practice. But his fundamental stance, which he calls the “algorithmic modeling culture,” is to believe that “nature’s black box cannot necessarily be described by a simple model.” He likens Chomsky’s quest for a more beautiful model to Platonic mysticism, and he compares Chomsky to Bill O’Reilly in his lack of satisfaction with answers that work. “Tide goes in, tide goes out. Never a miscommunication. You can’t explain that,” O’Reilly once said, apparently unsatisfied with physics as an explanation for anything. But is Chomsky’s dismissal of statistical approaches really as bad as O’Reilly’s dismissal of physics in general?

    I’ve been a Peter Norvig fan ever since I saw the talk he gave to the Singularity Institute patiently explaining why the Singularity is bunk, a position that most AI researchers believe but somehow haven’t effectively communicated to the popular media. So I found similar joy in Norvig’s dissection of Chomsky’s famous “colorless green ideas sleep furiously” sentence, providing citations to counter Chomsky’s claim that its parts had never been spoken before. But I can’t help but feel that an indifference to elegance and understanding is a shift in the scientific enterprise, as Chomsky claims.

    “Everything should be as simple as possible, but no simpler,” Einstein once said, echoing William of Ockham’s centuries-old advice to scientists that entities should not be multiplied beyond necessity. The history of science is full of oversimplifications that turn out to be wrong: Kepler was right on the money with his laws of planetary motion, but completely off-base in positing that the planets were nested in Platonic solids. Both models were motivated by Kepler’s desire to find harmony and simplicity hidden in complexity and chaos; in that sense, even his false steps were progress. In an age where petabytes of information can be stored cheaply, is an emphasis on brevity and simplicity an anachronism? If the solar system’s structure were open for debate today, AI algorithms could successfully predict the planets’ motion without ever discovering Kepler’s laws, and Google could just store all the recorded positions of the stars and planets in a giant database. But science seems to be about more than the accumulation of facts and the production of predictions.

    What seems to be a debate about linguistics and AI is actually a debate about the future of knowledge and science. Is human understanding necessary for making successful predictions? If the answer is “no,” and the best way to make predictions is by churning mountains of data through powerful algorithms, the role of the scientist may fundamentally change forever. But I suspect that the faith of Kepler and Einstein in the elegance of the universe will be vindicated in language and intelligence as well; and if not, we at least have to try."

    http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-fight-for-the-future-of-ai
  • DISOBEY [Ⓐ]

    posted by Keito
    2012-07-27 20:24:59