
Live from Extro-5

At their latest convention, the "Extropians" unveil their vision of a limitless future. Take a look at the details.


Who's got the right to tell you that you can't live a long time and have beautiful children? That rhetorical question was asked by science fiction novelist Greg Bear at the last session of the Extropy Institute's fifth annual conference, held this past weekend in the heart of Silicon Valley in downtown San Jose. The Extropians define themselves against entropy, or the idea that all things in the universe run down, collapse, and diminish over time. They're the ultimate in cockeyed optimists, believing the best is yet to come and that the future can be literally and figuratively limitless.

The point Bear was making is that he believes that Americans, steeped in individualism, will reject attempts to prohibit access to the health benefits of genetic engineering. During Extro-5, it was taken for granted by participants that genetic engineering offers far more benefits to humanity than it does harm. Such technological optimism pervaded all the sessions at the conference. It was a welcome relief from most of the yammerings from the radical left and the traditionalist right on the same general topics.

Extro-5 opened with a speech on Friday night by software entrepreneur Ray Kurzweil. Kurzweil, author of The Age of Spiritual Machines, is a powerful evangelist for that old-time gospel of techno-optimism. Kurzweil's speech was a preview of his next book, The Singularity is Near. "The Singularity" is a metaphor taken from astrophysics. It describes what happens when a person crosses the event horizon of a black hole and enters a realm where the familiar laws of physics break down and no longer apply. Kurzweil and many other Extropians think that humanity will cross an analogous threshold when the first superhuman artificial intelligence comes into existence. When that happens, says Kurzweil, all bets are off because it will be "a rupture in the fabric of human history." Thus, Kurzweil is certain that "the future will be more interesting than anything we can imagine." He argues that "the amount of technological progress in the 21st century will be greater than all the progress that has occurred in human history" so far.

Among Kurzweil's specific projections: By 2010, ubiquitous high-bandwidth connections will allow us to be online 24 hours a day, 7 days a week (hell, some of us are already there). By 2020, the computational power to emulate a human brain will cost only about $1,000. By 2030, nanotechnological devices circulating in the bloodstream will allow us to scan living human brains in detail. These same nanobots will provide wireless connections to all sorts of resources and allow the expansion of human intelligence by a factor of 1 million or more. "If we meet someone in 2040," predicted Kurzweil, "we will be interacting with a person operating with a significant number of nonbiological processes. The nonbiological portion of our thinking will be so dominant that the biological portion won't be very important."
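For a sense of the compounding that projections like these lean on, here is a rough back-of-envelope sketch in Python. It is not from Kurzweil's talk; the 18-month doubling time and the 2001 and 2020 endpoints are illustrative assumptions only.

```python
# Back-of-envelope sketch (illustrative assumptions, not Kurzweil's figures):
# how an assumed 18-month doubling in computing price-performance compounds.
DOUBLING_PERIOD_YEARS = 1.5        # assumed Moore's-law-style doubling time
START_YEAR, END_YEAR = 2001, 2020  # from roughly now to Kurzweil's 2020 projection

doublings = (END_YEAR - START_YEAR) / DOUBLING_PERIOD_YEARS
improvement = 2 ** doublings

print(f"Doublings between {START_YEAR} and {END_YEAR}: {doublings:.1f}")
print(f"Price-performance improvement: about {improvement:,.0f}x")
# Roughly 6,500x over two decades -- the kind of compounding behind claims
# that brain-scale computation could reach commodity prices by 2020.
```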

Sun Microsystems' chief scientist Bill Joy was much on the minds of the conference speakers and attendees. In a widely read Wired story and in subsequent interviews, Joy argued that some technologies, specifically biotechnology, nanotechnology, and robotics, are simply too dangerous for human beings to develop. Joy claimed that humanity must "relinquish" these future technologies. Nothing could be better calculated to annoy Extropians than Joy's dystopian vision of technology. (For a stunning critique of Joy's ideas by REASON editor-at-large Virginia Postrel, click here).

Kurzweil often squares off against Joy in public forums. He told the Extro-5 audience about one such debate, in which he started things off with a fake press release from Sun Microsystems that ran something like: "Sun announces today that in the future it is no longer planning to offer its customers faster computers or improved software because doing so would lead to the eventual destruction of the human race." Kurzweil notes that Joy is not completely off base in warning that new technologies will have downsides. But he dismisses Joy's prescription, saying the Sun chief scientist is arguing that "we should keep the good technologies and get rid of the dangerous ones." "They are the same ones," says Kurzweil. "Technologies that can cure millions of cancers or clean up the natural environment can also be used to create pathogens for biological warfare."

Kurzweil believes that humanity will be able to develop defensive technologies to limit the potential harms that new technologies might cause in the wrong hands. He cites discredited "expert" predictions that the proliferation of computer viruses would make networked computing impossible. Antivirus software has made viruses more of a nuisance than a threat, he says. Kurzweil notes that our rather lackadaisical but still effective response to computer viruses will pale in comparison to our defensive responses to any technology that we think might kill us.

The Saturday morning session opened with remarks by Extropy Institute head and philosopher Max More. "If you were in 1980 and had access to what's in the news today, wouldn't you think that you were reading a science fiction novel?" he asked. More pointed out that the Pope has just issued an authoritative statement on human cloning, women are having babies in their 50s and 60s, teenagers are living in a world where they are constantly connected wirelessly with their friends, quantum computing is being developed, the Soviet Union has collapsed, and people now run marathons using prosthetic limbs.

The next Saturday session was devoted to "Ensuring Friendly Super Intelligence." Panelist Anders Sandberg, founder of Eudoxa, a commercial think tank in Sweden, defined "friendliness" as "the production of human-benefiting, non-human-harming actions." The panel discussed specific software design features that they believe will help ensure that superhuman artificial intelligences will like us. The Singularity Institute for Artificial Intelligence has devised a set of guidelines for how to ensure superhuman friendliness. Considering that superhuman artificial intelligence should come online sometime during the next four decades, let's hope they've got it right.

One area of debate among the Extropians is whether there will be a "soft take-off" or a "hard take-off" to the Singularity. In the hard scenario, an independent, self-improving computer increases its intelligence exponentially over a very short period, reaching transhuman levels of intelligence very rapidly. In the soft scenario, superhuman intelligence would likely emerge from the plurality of intelligent systems that are developed as human enhancements over time. When superhuman general intelligence appears in this scenario, it will integrate into existing systems rather than become something independent. (Software developer Marc Stiegler quipped that he thought he would know that a system was superintelligent when "it is strong enough to educate California politicians about the laws of supply and demand… [although] there probably is no intelligence that will be that strong.")
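A toy numerical sketch can make the distinction concrete. Nothing below comes from the panel; the growth rates and units are arbitrary assumptions chosen only to contrast compounding self-improvement with steady, incremental integration.

```python
# Toy contrast of the "hard" vs. "soft" take-off intuitions discussed at Extro-5.
# Growth rates and units are arbitrary assumptions, not anyone's actual model.

def hard_takeoff(capability: float, steps: int, gain: float = 0.5) -> list:
    """Each step the system uses its current capability to improve itself,
    so growth compounds: the fast, runaway scenario."""
    trajectory = [capability]
    for _ in range(steps):
        capability *= 1 + gain          # improvement proportional to capability
        trajectory.append(capability)
    return trajectory

def soft_takeoff(capability: float, steps: int, increment: float = 0.5) -> list:
    """Capability rises in roughly fixed increments as many separate systems
    and human enhancements are gradually integrated: the soft scenario."""
    trajectory = [capability]
    for _ in range(steps):
        capability += increment
        trajectory.append(capability)
    return trajectory

print("step   hard     soft")
for step, (hard, soft) in enumerate(zip(hard_takeoff(1.0, 20), soft_takeoff(1.0, 20))):
    print(f"{step:>4} {hard:>8.1f} {soft:>7.1f}")
```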

Another panel dealt with "Legal Processes and Liberty in the Information Age." Computer security consultant Nick Szabo spoke of smart contracts which solved the problem of trust by being self-executing. For example, the key to a car sold on credit might only operate if the monthly payments have been made. Szabo also has an ambitious project in which all property is embedded with information about who owns it. Mark Miller, the co-director of George Mason University's Agorics Project, which researches market-based computing ideas from the viewpoint of the Austrian school of economics, was on the same panel. He talked about his work, which explored ways that the Internet might be used to help people in developing economies who hold informal titles to property. With the right tools, that property could be used to secure loans and other forms of capital.
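Szabo's car-key example can be captured in a few lines of code. The sketch below is only an illustration of the self-executing idea, not Szabo's design or any real system; the class name, payment schedule, and unlock rule are all hypothetical.

```python
# Illustrative sketch of the self-executing car-key idea; the class name,
# schedule, and unlock rule are hypothetical, not Szabo's actual design.
from datetime import date

class FinancedCarKey:
    """A car key that only operates while the loan payments are current."""

    def __init__(self, start: date, monthly_payment: float):
        self.start = start
        self.monthly_payment = monthly_payment
        self.payments_made = 0

    def record_payment(self, amount: float) -> None:
        """Count a payment if it covers the monthly installment."""
        if amount >= self.monthly_payment:
            self.payments_made += 1

    def payments_due(self, today: date) -> int:
        """Monthly installments owed since the loan started."""
        return (today.year - self.start.year) * 12 + (today.month - self.start.month)

    def unlock(self, today: date) -> bool:
        """The contract enforces itself: the key works only if the borrower is current."""
        return self.payments_made >= self.payments_due(today)

# Two payments made, three months into the loan: the key refuses to turn.
key = FinancedCarKey(start=date(2001, 6, 1), monthly_payment=300.0)
key.record_payment(300.0)
key.record_payment(300.0)
print(key.unlock(date(2001, 9, 15)))   # False -- a payment is overdue
```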

The panel on "Mastering the Information Explosion" scared the bejesus out of me when software security consultant Harvey Newstrom made it plain just how much information government agencies and commercial enterprises are gathering from those of us who are online. He showed how companies, government agents, and hackers can combine databases to profile users, and he predicted that more and more companies are going to install "spyware" on users' computers. For example, Newstrom said that if you download AOL's and Netscape's "download manager" utility, it can record everything you download and forward that information to interested advertisers. It gets worse. According to Newstrom, the European Union is considering a proposed law that would require records of all telephone conversations and Internet activities to be archived for seven years. Citizens can fight back by using anonymizing services like Zero Knowledge and by resorting to data poisoning, feeding misleading information to snoops.

The final panel (which included your humble correspondent) dealt with how to respond to the growing neo-Luddite movement, which opposes virtually all technological advances. Such opponents were dubbed "bioconservatives" and "technophobes." Panelists identified many of the groups who oppose technological progress, including Jeremy Rifkin's Foundation on Economic Trends, the Council for Responsible Genetics, Greenpeace, the Rural Advancement Foundation International, and basically all of the 100 or so organizations that participated in the Turning Point Project. (The TPT bought an alarmist series of full-page advertisements in leading newspapers such as The New York Times and the Washington Post; click here for my analysis of them). Dismayingly, when I asked how many of the 150 or so participants in the conference had heard of the anti-technology "precautionary principle," fewer than 10 people raised their hands. I explained how the precautionary principle would require that all new technologies be proved "safe" and approved as "socially necessary" by regulatory authorities before they are introduced into the market.

As part of the final panel, Natasha Vita-More announced the launch of a pro-progress group called the Progress Action Coalition, or Pro-Act for short. Pro-Act seeks to build a coalition of groups that favor rapid technological progress. It's not a moment too soon.