Joy, to the World
A techno-celebrity's childish manifesto.
Since its earliest days, Wired magazine has had a genius for publicity. It lost its edge somewhat when Condé Nast took over two years ago and the new crew replaced Wired's Bay Area techno-exuberance with the New York publishing formulas that eschew ideas in favor of celebrities.
But the hype machine came roaring back with the April cover story–a long, long, long think piece by the hip software genius Bill Joy, chief scientist at Sun Microsystems. (The article can be read at www.wired.com/wired/archive/8.04/joy.html.) Joy was the key designer of Berkeley Unix and the man behind Sun's Java and Jini. He is a techno-celebrity of the first order.
But he's not a geeky geek. With his stylish all-black outfits and narrow eyeglass frames, Joy looks more Hollywood than Silicon Valley. (He actually lives in Aspen.) And he's as earnest and elitist as a Harvard professor. In short, he's the perfect authority to book on TV. And he had the perfect message: Technological research must be stopped.
Specifically, Joy is worried, really worried–20,000 words and five months of writing worried–that 21st-century technologies threaten to make human beings extinct. The threats are intelligent robots, nanotechnology (the ability to build things on the atomic level), and genetic engineering. All of them, he acknowledges, offer wonderful advantages, but they are, in his view, simply too dangerous to develop. We should stop investigating these ideas, he argues, before they become uncontrollable realities.
"The new Pandora's boxes of genetics, nanotechnology, and robotics are almost open. Ideas can't be put back in a box; unlike uranium or plutonium, they don't need to be mined or refined, and they can be freely copied," he writes. "Once they are out, they are out." So we'd better keep them safely unknown.
Joy's article starts with the idea that highly intelligent robots might supersede human beings. This could happen because a) the robots are so great we become dependent on them and essentially bore ourselves to death (a scenario Joy snagged from the Unabomber manifesto) or b) the robots outcompete us for economic resources, so that we can't afford enough food, water, land, energy, etc., to survive, just as placental mammals wiped out competing marsupials in the Americas (an idea from robotics researcher Hans Moravec).
There's also the possibility, outlined by Ray Kurzweil in The Age of Spiritual Machines, that we might gradually merge with our machines, enhancing our intelligence or uploading our consciousness, and thus change the nature of humanity. Kurzweil and his book inspired Joy's nightmares, but Joy doesn't really explore Kurzweil's scenario.
Instead, he moves quickly to a nearer-term threat–the possibility that someone might use genetic engineering to create a devastating plague. This scenario illustrates the fundamental problems of these new technologies. Unlike 20th-century weapons of mass destruction, notably nuclear bombs, these are hard to limit to a small cadre of government-approved scientists and engineers.
"For the first time," writes Joy, "these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them." We will face "a surprising and terrible empowerment of extreme individuals." In the future, the Unabomber will be a molecular biologist. And, once created, these technologies will have the ability to reproduce and spread, to self-replicate beyond the control of their creators and threaten widespread destruction.
To avoid that prospect, Joy advocates "relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge." He would enforce that limit through "a verification regime similar to that for biological weapons, but on an unprecedented scale," combined with a scientific code of conduct that forbids such research. That's the prescription that made TV bookers and newspaper editorialists sit up and take notice. A scientist was saying we should stop scientific research.
The technologies Joy fears do indeed have their dark side, and it does lie in their easy, decentralized production and subsequent self-replication. These technologies can be genuinely scary. But Joy's policy prescriptions are breathtakingly naive, blinkered, and totalitarian. They are contradicted by his own arguments, and by the uncomfortable historical realities Joy skirts even as he invokes them. Though it strains to seem wise, Joy's manifesto is at once childlike and childish.
Introducing Joy on Good Morning America, Charles Gibson explained the purpose of the article: "The contents of an article coming out this morning in Wired magazine are being compared to the 1939 letter that Albert Einstein wrote to President Roosevelt warning of the consequences of the atomic bomb. Today's warning that new technology could mean the extermination of mankind. It comes from a respected scientist who was one of the key architects of the information age, and that's why his warning is causing such a stir in The Washington Post and other newspapers."
The Einstein comparison actually came, rather grandiosely, from Joy himself, in an interview with the Post's Joel Garreau. Except, of course, that Einstein did not advise Roosevelt to renounce development of the bomb, or to end research in atomic physics. In 1939, even pacifist-leaning physicists couldn't pretend that you could shut down science and count on evildoers to leave it, and you, alone. Facing the Nazis, Joy blinks. Facing Stalin, he closes his eyes altogether.
"We should have learned a lesson from the making of the first atomic bomb and the resulting arms race," writes Joy. "We didn't do well then, and the parallels to our current situation are troubling."
Joy does not spell out exactly what lesson we should have learned, but by analogy it would be that the atomic bomb and the science underlying it should not have been pursued, because they were simply too dangerous. Physicists should have banded together in the 1930s to shut down this research. It sounds completely nuts, of course, to say that the 20th century's scientific giants should have abandoned their study of the fundamentals of matter and energy. So, Joy doesn't say that.
Instead, he retreats into the fantasy world of Cold War naifs. He cheers the idea, proposed after World War II, that the United Nations should have been given a monopoly on nuclear weapons research. Although he admits that Stalin would never have agreed to such a monopoly, he does not face the implications of that fact. He just wistfully refers to "distrust by the Soviets." Poor Stalin. He really needed to work on those trust issues. So, apparently, did Americans who feared Soviet intentions. Perhaps an Aspen Institute seminar would have helped.
Joy needs to spend some serious time in Baghdad. You cannot simply will away technologies of mass destruction. Even with treaties and verification systems in place, 20th-century weapons aren't that hard to hide, especially if their owners control the territory in which they're hiding. And by Joy's own admission, 21st-century biotech and nanotech (let's leave aside the killer robots) will be accessible to just about anyone. You won't have to run a country to get them. And even current research wouldn't be that hard to conduct secretly.
Their value as weapons alone makes these technologies uncontrollable. But it isn't the only strong incentive to ignore a total ban. Preserving life and eliminating poverty are also pretty damned powerful incentives. We're not all Aspen-dwelling millionaires. You can bet the Chinese and Indians (a billion people each, with elites growing more prosperous and technologically savvy by the day) would pour resources into nanotechnology simply for its promise of material plenty. Add in the weapons potential, and you've got guaranteed development. And there will always be enough rich eccentrics who want to live forever, or to cure the diseases of their beloveds, to fund cheap biotech.
In other words, that "unprecedented" verification regime would require frequent inspection and constant surveillance of every garage, every basement, every bathroom, every mountain hideaway on the entire planet–a planet inhabited by six billion potential technology developers, not a small clique of a few dozen world-class physicists. Does Joy have any idea how big the world is? The authorities can't find an alleged abortion-clinic bomber hiding out in the hills of North Carolina, much less destroy every cocaine facility in Central America. And there's always Baghdad.
Bill Joy is a lot smarter than I'll ever be. But he is also incredibly foolish, in the parochial, reality-dodging way that geniuses sometimes are. And he is willing to sacrifice an awful lot of other people's lives and liberty to his fantasies of power and control.
If-then statements are a staple of computer programming. Here's one to consider: "If we could agree, as a species, what we wanted, where we were headed, and why, then we would make our future much less dangerous–then we might understand what we can and should relinquish." (Emphasis added.) How, exactly, does a species of more than six billion intelligent individuals, with their own plans and purposes, agree on anything? Joy imagines the world as a small, technological elite and assumes away the problems of politics. He and his friends will just get together and agree on what to do.
Even as he invokes the unintended consequences (invariably bad) of technology, he also imagines a world in which all consequences are known in advance and life proceeds according to a master plan. That isn't the world we live in, nor is it a world that anyone who values their own purposes would find congenial, even if it were possible.
Basing sweeping policy prescriptions on an if-then statement whose "if" doesn't hold is bizarre. Since we can't agree as a species on what we want, where we're headed, and why, we can't understand what we can and should relinquish, no matter how much Joy wishes things were otherwise.
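Put in the programmer's idiom Joy knows best, the problem looks something like the following minimal sketch (the species_agrees predicate and the population figure are my illustrative stand-ins, not Joy's): a conditional whose test never comes out true never executes its body, and so licenses no conclusions about what that body would accomplish.

    # A toy illustration of Joy's if-then; the names and numbers
    # here are hypothetical stand-ins, not anything Joy proposes.
    def species_agrees(population: int) -> bool:
        # Unanimity among independent agents, each with their own
        # plans and purposes, is trivial only when there is one agent.
        return population <= 1

    def relinquish_dangerous_research() -> None:
        print("Relinquish genetics, nanotech, and robotics research.")

    POPULATION = 6_000_000_000  # Joy's six billion potential developers

    if species_agrees(POPULATION):
        relinquish_dangerous_research()  # Joy's "then": never reached
    else:
        print("No species-wide agreement; the 'then' clause never fires.")

Run it with any population above one and the relinquishment branch is dead code. That is the status of Joy's prescription: a consequent hanging from an antecedent the real world never satisfies.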
Indeed, that very lack of agreement, and all the threats (and promises) it entails, argues in favor of developing as much knowledge as possible. When we cannot anticipate or eliminate all hazards, we need to have as many ways to bounce back from destruction as possible–a portfolio of resilient responses. The United States relinquished biological weapons, as Joy says, but it did not eliminate the Centers for Disease Control or give up inoculating military troops.
Ultimately, Joy's article is not a serious exploration of how to deal with the potential threats of powerful, dispersed, self-replicating technologies. That would require far greater technical depth–Could nanotechnology be developed without the ability to self-replicate indefinitely, perhaps by making it dependent on external energy sources? What would it take to maintain a constantly adapting range of immunizations?–and much, much greater social sophistication. It would mean thinking a few steps ahead, and accepting the knowledge of good and evil, not reimagining a world made up of benevolent, unambitious, incurious Buddhas.
Serious consideration of these questions also means drawing on all sorts of social scientists, immunologists, arms control experts, cognitive scientists, historians, and science fiction novelists, not limiting your research to inventors, physicists, and the Dalai Lama. Joy is right that there is nothing wrong with being a generalist. But he is far too specialized to claim that title.
If Joy's article isn't really about coping with dangerous technologies, what is it? Read carefully, it is exactly what the man-bites-dog press accounts promised: a screed against unpredictable change, a call for a static world, and an assault on commerce and the individual desires it serves. It is the same old attack on the open society, just wrapped in cool clothes.
"We are being propelled into this new century with no plan, no control, no brakes," writes Joy, employing the classic rhetoric of those who distrust the "chaos" created by individual creativity and freedom. And later, "We must find alternative outlets for our creative forces, beyond the culture of perpetual economic growth; this growth has largely been a blessing for several hundred years, but it has not brought us unalloyed happiness, and we must now choose between the pursuit of unrestricted and undirected growth through science and technology and the clear accompanying dangers."
No culture can provide "unalloyed happiness." No one who uses that standard should be taken seriously. If Bill Joy needs a new outlet for his creativity, something that doesn't involve economic growth or technological innovation, he can quit his day job. But I wouldn't recommend a career in public policy.
Virginia Postrel (vpostrel@reason.com) is REASON's editor-at-large and the author of The Future and Its Enemies: The Growing Conflict Over Creativity, Enterprise, and Progress.