Will Super Smart Artificial Intelligences Keep Humans Around As Pets?
And other questions from the Singularity Summit
SAN FRANCISCO—By 2030, or by 2050 at the latest, will a super-smart artificial intelligence decide to keep humans around as pets? Will it instead choose to turn the entire Earth, including the messy organic bits like us, into computronium? Or is there a third alternative?
These were some of the questions pondered by the 600 or so technosavants meeting in the Palace of Fine Arts at the second annual Singularity Summit this past weekend. The meeting was convened by the Singularity Institute for Artificial Intelligence, whose chief goal is to make sure that whatever smarter-than-human artificial intelligence is eventually spawned by exponentially accelerating information technology will be friendly to humans.
What is the "Singularity"? As Eliezer Yudkowsky, cofounder of the Singularity Institute, explained, the idea was first propounded by mathematician and sci-fi writer Vernor Vinge in the 1970s. Vinge found it difficult to write about a future in which greater-than-human intelligence arose. Why? Because humanity would stand in relation to that intelligence as an ant does to us today. For Vinge it was impossible to imagine what kind of future such superintelligences might craft. He analogized that future to a black hole, a singularity surrounded by an event horizon past which outside observers simply cannot see. Once the Singularity occurs, the future gets very, very weird. According to Yudkowsky, this Event Horizon school is just one of the three main schools of thought about the Singularity. The other two are the Accelerationist and the Intelligence Explosion schools.
The best-known Accelerationist is inventor Ray Kurzweil, whose recent book The Singularity Is Near: When Humans Transcend Biology (2005) lays out the case for how exponentially accelerating information technology will spark the Singularity before 2050. In Kurzweil's vision of the Singularity, AIs don't take over the world: humans will have so augmented themselves with computer intelligence that we essentially transform ourselves into super-intelligent AIs.
Yudkowsky identifies mathematician I.J. Good as the modern initiator of the idea of an Intelligence Explosion. To Good's way of thinking, technology arises from the application of intelligence. So what happens when intelligence applies technology to improving intelligence? That produces a positive feedback loop in which self-improving intelligence bootstraps its way to superintelligence. How intelligent? Yudkowsky offered a thought experiment that compared current brain processing speeds with computer processing speeds. Sped up a million-fold, Yudkowsky noted, "you could do one year's worth of thinking every 31 physical seconds." While the three schools of thought vary on the details, Yudkowsky concluded, "They don't imply each other or require each other, but they support each other."
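As a rough check on that figure (my arithmetic, not Yudkowsky's): a year contains about 31.5 million seconds, so a millionfold speedup compresses a subjective year into roughly 31 physical seconds. The sketch below simply runs the numbers.

```python
# Back-of-the-envelope check of the millionfold speedup figure
# (a sketch, not Yudkowsky's own calculation).
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60   # about 31.6 million seconds
SPEEDUP = 1_000_000                        # hypothetical millionfold speedup

wall_clock_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year passes in about {wall_clock_seconds:.1f} physical seconds")
# prints roughly 31.6, in line with the "every 31 physical seconds" quote
```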
But is progress really accelerating? Google's director of research Peter Norvig cast some doubt on this claim. Norvig briefly looked at past technological forecasts and how they went wrong. For example, in Arthur C. Clarke's 1986 novel The Songs of Distant Earth, set 1,500 years in the future, the world was going to be destroyed as the sun went nova. So humanity had to cull through all the books ever written to decide which were good enough to scan and save for shipment in starships. Only a few billion pages could be stored, and only one user at a time could search those pages to get an answer back in tens of seconds. Norvig pointed out that only 20 years later, Google stores tens of billions of pages and tens of thousands of users can query them and get answers back in tenths of a second.
Nevertheless, Norvig pointed out that accelerating growth doesn't characterize all aspects of our world. For example, global GDP over the past century has been growing at a pretty steady rate (1.6 percent per year) and shows no sign of acceleration. Same thing for average life expectancy.
Accelerationist Ray Kurzweil replied that he is generally focusing on infotech when he projects accelerating progress. In addition, Kurzweil made the excellent point that GDP figures do not account for the fact that most products are vastly more capable than earlier ones. For example, an Apple II with 48K of RAM cost $2,275 in 1977 (about $7,800 in today's dollars). A new low-end iMac costs $1,149.
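To make Kurzweil's point concrete, here is a quick sketch using only the figures cited above (the inflation factor is the one implied by the article's numbers, not an official index):

```python
# Price comparison using the article's own figures (2007 dollars).
apple_ii_1977 = 2275    # Apple II with 48K of RAM, 1977 list price
apple_ii_today = 7800   # the article's approximate inflation-adjusted figure
imac_today = 1149       # low-end iMac price cited at the Summit

inflation_factor = apple_ii_today / apple_ii_1977
price_ratio = apple_ii_today / imac_today

print(f"Implied inflation since 1977: about {inflation_factor:.1f}x")
print(f"Adjusted Apple II vs. low-end iMac: about {price_ratio:.1f}x the price")
# And that ratio ignores the far larger gap in memory, storage, and speed,
# which is exactly the kind of improvement that GDP figures fail to capture.
```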
So how might one go about trying to create a super-intelligent AI anyway? Most of the AI savants at the Summit rejected any notion of a pure top-down approach in which programmers would specify every detail of the AI's programming. Relying on the one currently existing example of intelligence, another approach would be to map human brains and then instantiate them and their detailed processes in simulations. Marcos Guillen of Artificial Development is pursuing some aspects of this pathway by building CCortex, a simulation of the human cortex modeling 20 billion neurons and 20 trillion connections.
As far as I could tell, many of the would-be progenitors of independent AIs at the Summit are concluding that the best way to create an AI is to rear one as one would rear a human child. "The only pathway is the way we walked ourselves," argued Sam Adams, who honchoed IBM's Joshua Blue Project. That project aimed to create an artificial general intelligence (AGI) with the capabilities of a 3-year-old toddler. Before beginning the project, Adams and his collaborators consulted the literature of developmental psychology and developmental neuroscience to model Joshua. Joshua was capable of learning about itself and the virtual environment in which it found itself. Adams also argued that in order to learn, one must balance superstition with forgetfulness. He defined superstitions as false patterns that need to be aggressively forgotten.
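To illustrate the balance Adams described, here is a toy sketch of the general idea (not IBM's actual Joshua Blue code; the patterns, rates, and names are invented for the example): a learner keeps strengthening associations that continue to be confirmed and decays ones that do not, so spurious "superstitions" fade while real regularities persist.

```python
import random

# Toy illustration: reinforce confirmed patterns, aggressively forget the rest.
DECAY = 0.8   # hypothetical forgetting rate for unconfirmed patterns
LEARN = 0.5   # hypothetical reinforcement rate for confirmed patterns
strength = {"touch stove -> pain": 0.0, "wear red hat -> pain": 0.0}

for step in range(50):
    stove_confirmed = random.random() < 0.9   # the stove reliably hurts
    hat_confirmed = step < 3                  # the red hat "caused" pain only early on
    for pattern, confirmed in [("touch stove -> pain", stove_confirmed),
                               ("wear red hat -> pain", hat_confirmed)]:
        if confirmed:
            strength[pattern] += LEARN * (1.0 - strength[pattern])
        else:
            strength[pattern] *= DECAY        # forget patterns that stop paying off

print(strength)  # the stove association stays strong; the red-hat "superstition" fades
```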
In a similar vein, Novamente's Ben Goertzel is working to create self-improving AI avatars and let them loose in virtual worlds like Second Life. They could be virtual babies or pets that the denizens of Second Life would want to play with and teach. They would have virtual bodies and senses that enable them to explore their worlds and to become socialized.
However, unlike real babies, these AI babies would have an unlimited capacity for boosting their level of intelligence. Imagine an AI baby that developed super-intelligence but had the emotional and moral stability of a teenage boy. Given its self-improving super-intelligence, what would prevent such an AI from escaping the confines of its virtual world and moving into the Web? As a taste of what might happen with a rogue AI on the Web, James Hughes, transhumanist and executive director of the Institute for Ethics and Emerging Technologies (IEET), pointed to the havoc currently being wreaked by the Storm worm. Storm has infected over 50 million computers and now has at its disposal more computing resources than 500 supercomputers. More disturbingly, when Storm detects attempts to thwart it, it launches massive denial-of-service attacks to defend itself. Hughes also speculated that self-willed minds could evolve from primitive AIs already inhabiting the infosphere's ecosystems.
On the other hand, Peter Voss, founder of Adaptive A.I., outlined the advantages that super smart AIs could offer humanity. AIs would significantly lower costs, enable the production of better and safer products and services, and improve the standard of living around the world, including eliminating poverty in developing nations. Voss asked the conferees to imagine the effect of AIs equivalent to 100,000 Ph.D. scientists working on life extension and anti-aging research 24/7. Voss also argued that AIs could help improve us, make us better people. He imagined that each of us could have a super smart AI assistant to guide us in making good moral choices. (One worry: if my AI "assistant" is so smart, could I really ignore its "suggestions"?)
Although Voss' views about AIs are relatively sunny, other participating technosavants weren't so sure. For example, computer scientist Stephen Omohundro argued that self-improving AIs would be ultra-rational economic agents, basically examples of homo economicus. Such AIs would exhibit four drives: efficiency, self-preservation, acquisition, and creativity. Regarding efficiency, AIs optimizing their resource use would turn to nanotechnology and virtualization wherever possible. Self-preservation involves protecting the AI's utility function from death, which it would do by building in redundancy and embedding itself in mutually defensive social relations. The drive to acquire more resources means that AIs could be dangerously competitive with humans. If Omohundro is right, there are good reasons to doubt that an AI that is a relentless utility maximizer will be friendly to less-than-perfectly-efficient humanity. The drive for creativity enables AIs (and us) to explore new possibilities for transforming and satisfying our utility functions. Omohundro's solution for making AIs human-friendly? Try to teach AIs our highest human values, e.g., happiness, love, compassion, beauty, and so forth.
On the question of AI morality, Institute for Molecular Manufacturing research fellow J. Storrs Hall offered a modern take on Asimov's Three Laws of Robotics. Hall noted that Asimov's whole point was that the Laws were inadequate. So what ethical rules might be adequate for controlling future AIs? According to Hall, the problem of setting moral rules in stone can be illustrated by trying to imagine how the Code of Hammurabi might apply to the Enron scandal. (Actually, the Code did deal with commercial fraud. Rule 265: "If a herdsman, to whose care cattle or sheep have been entrusted, be guilty of fraud and make false returns of the natural increase, or sell them for money, then shall he be convicted and pay the owner ten times the loss.")
Eliezer Yudkowsky made a similar point when he asked us to imagine what values the ancient Greeks might have tried to instill in their AIs. Surely AIs incorporating ancient Greek values would have vetoed our civilization, which outlawed slavery and gave women rights.
Hall suggested that instead of fixed moral rules (which a super smart AI with access to its own source code could change later anyway), progenitors should try to inculcate something like a conscience into the AIs they foster. A conscience allows humans to extend and apply moral rules flexibly in new and different contexts. One rule of thumb that Hall would like to see implemented in AIs is: "Ideas should compete; bodies should cooperate." He also suggested that AIs (robots) should be open source. Hall said that his friend, economist Robin Hanson, pointed out to him that we already live with superhuman psychopaths, namely modern corporations, and we're not all dead. Part of what reins in corporations is transparency, e.g., the requirement that outsiders audit their books. Indeed, governments are also superhuman psychopaths, and generally the less transparent a government is, the more likely it is to commit atrocities. So the idea here is that the more AI source code is inspected, the more likely we are to trust AIs. Finally, Hall suggested that AIs be instilled with the Boy Scout Law.
Given these big concerns about how super smart AIs might treat humanity, should they be created at all? Famously, former Sun Microsystems chief scientist Bill Joy declared that they are too dangerous and that we should relinquish the drive to create them. Charles Harper, senior vice president of the Templeton Foundation, suggested there was a "dilemma of power." The dilemma is that "our science and technology create new forms of power but our cultures and civilizations do not easily create parallel capacities of stewardship required to utilize newly created technological powers for benevolent uses and to restrain them from malevolent uses."
Actually, the arc of modern history strongly suggests that Harper's claim is wrong. More people than ever are wealthier, more educated, and freer. Despite the tremendous toll of the 20th century, even per capita levels of violence have been decreasing. We have been doing something more right than wrong as our technical powers have burgeoned. (It is worth noting that most of the 262 million people who died of violence in the 20th century died as the result of the actions of those superhuman psychopaths called governments, using pretty crude technologies.)
Nevertheless, it is a reasonable question to ask if self-willed super smart AIs are too dangerous to unleash. The IEET's James Hughes suggested that one solution could be modeled on how the world currently handles nuclear weapons. If AIs are so dangerous, perhaps only governments should be allowed to own them. But this doesn't address the problem that governments themselves can be not-so-smart superhuman psychopaths. In addition, it seems unlikely that true human psychopaths (either individuals or collectives) can be permanently restrained from covertly creating AIs. If that is the case, we should all hope for and support the Singularity Institute's efforts to create friendly AI first.
When are AIs likely to arise? Ray Kurzweil, who joined the Summit by video link, predicted that computational power sufficient to simulate the human brain will be available on a laptop for $1,000 in the next 15 years. Kurzweil believes that AIs will come into existence before 2030. Peter Voss was even more bullish, declaring, "In my opinion AIs will be developed almost certainly in less than 10 years and quite likely in less than five years."
If the Singularity Summiteers are right, buckle up and get ready for a really fast ride to the future. Let's hope their efforts will keep the ride from getting too rough.
Ronald Bailey is Reason's science correspondent. His most recent book, Liberation Biology: The Scientific and Moral Case for the Biotech Revolution, is available from Prometheus Books.