Culture

Here Comes Artificial Intelligence

Ray Kurzweil's new book imagines man-made minds.

How to Create a Mind: The Secret of Human Thought Revealed, by Ray Kurzweil. Viking, 336 pages, $27.95.

High-functioning artificial intelligence is the stuff of science fiction: the malicious HAL in 2001, the malevolent machines in Battlestar Galactica and The Matrix, the Butlerian Jihad in Frank Herbert's Dune series. Charles Stross' novel Accelerando describes the Matrioshka brain, an artificial mind that requires the energy of a star to function.

But the idea won't necessarily be science fiction forever, and we may have to take the concept of artificial intelligence (AI) seriously sooner than many expect. The ongoing acceleration in technology has prompted serious discussions of AI, including the possibility that the "Singularity"—the creation of a greater-than-human intelligence—might occur. In his 2005 book The Singularity Is Near, the futurist Ray Kurzweil predicted that we can expect the Singularity by 2045 and that superintelligences will eventually colonize vast swathes of galaxies. In his latest book, How to Create a Mind, Kurzweil argues that reverse-engineering the human brain is the best route to creating high-functioning AI.

Kurzweil begins by examining the neocortex, the uniquely mammalian part of our brain. Its hundreds of millions of pattern recognizers, he reports, make possible such rare abilities as language, speech, and creativity, along with evolutionarily advantageous emotions such as love.

Inevitably, Kurzweil considers the question of consciousness, which he argues can emerge from purely physical components. Kurzweil subscribes to a subschool of panprotopsychism—the view that, broadly speaking, all matter has mental properties. According to this account, there is no reason why computers should not be able to experience consciousness.

The discussion of the mind-body problem is frustratingly brief. Kurzweil uses the word "mind" rather than "brain" in his title, he explains, "because a mind is a brain that is conscious." This is something of a philosophical leap. While a book aimed at the layman might not be the best place for a detailed discussion of consciousness, the relationship between mind and body, and free will—ideas that have engaged philosophers from Anaxagoras to Galen Strawson—it would have been nice to see a more thorough defense of the author's views. I don't just mean Kurzweil's views of how the mind works. If high-functioning AIs arrive, the primary philosophical issues they raise will be ethical, not technological. Kurzweil, who accepts the moral standing of machines that appear conscious, predicts that we will eventually accept them as equals, and he asserts (while admitting it is something of a leap of faith) that when machines become capable of convincingly describing their experiences, they will constitute conscious persons. If this does occur, a robust philosophical defense of this position will have to be at the ready.

As with anything Kurzweil writes, there is the question of how accurate his past forecasts have been and how seriously we should take his thoughts on the future. His prediction that the Singularity will be upon us by 2045 has drawn particular skepticism.

Kurzweil addressed the state of his predictions in an essay, "How My Predictions Are Faring," published in October 2010. According to his own assessment, a clear majority of the forecasts he made in The Age of Spiritual Machines, The Age of Intelligent Machines, and The Singularity Is Near have been "essentially correct" or "correct," including his predictions that cloud computing would become more mainstream, that portable computers would become much lighter, and that those portable computers would be able to access libraries and information services. Almost all of Kurzweil's predictions rest on the validity of the Law of Accelerating Returns: in his words, that "fundamental measures of information technology follow predictable and exponential trajectories, belying the conventional wisdom that 'you can't predict the future.'"

Kurzweil obviously takes objections seriously, dedicating a chapter to answering them near the end of How to Create a Mind. He spends less time illustrating how promising AI could be in the short term. The discussion of health is especially brief. Towards the end of the book he raises the possibility that nanobots could monitor and repair cell damage, a technology that would have huge implications for the treatment of chronic diseases like cancer and diabetes. Surely this deserves detailed discussion. Instead we plunge into a vision of superintelligences overcoming the speed of light and colonizing the galaxy.

Still, that promise is there. If Kurzweil is right, we can look forward to longer life expectancies, cures for serious diseases, and social changes that would dwarf the significance of the industrial revolution. His optimism about an AI-assisted future is contagious, even if those visions of Matrix-style enslavement still lurk somewhere in the corners of your mind.