Culture

Are We Just Really Smart Robots?

Two books on the mind put the human back into human beings.

On Intelligence, by Jeff Hawkins with Sandra Blakeslee, New York: Times Books, 261 pages, $25

Mind: A Brief Introduction, by John R. Searle, New York: Oxford University Press, 326 pages, $26

Neurobiology's advances generate anxiety as well as joy and hope. On the joyful and hopeful side, there are the prospect and reality of improved treatments for brain diseases and debilities. But anxiety arises over what the science tells us, or will tell us, about ourselves. Thoughts and feelings may be reduced to brain structures and processes. Consciousness and free will may be proven unimportant or illusory. Much of what we value about ourselves, in short, may be explained–or, worse, explained away.

The prevailing trends in the philosophy of mind reinforce such concerns. The field is dominated by schools of materialism that describe mental phenomena as types or side products of physical phenomena. Mind-body dualism, which posits a separate existence for the mind, has been effectively eclipsed (although it seems to receive continued implicit acceptance from many nonexperts). Some forms of materialism argue that the mental phenomena in question do not even exist.

This turn toward the mechanistic could have baleful cultural and political consequences. It threatens to undermine people's sense of responsibility and self-worth. There is the danger of what philosopher Daniel Dennett calls "creeping exculpation," as more and more human behavior is attributed to material causes. Criminal violence, for example, might be excused as a consequence of low levels of serotonin or monoamine oxidase in the brain. Many philosophers, including Dennett, argue that humans should be regarded as responsible agents even if human behavior is fully determined. But the very fact that such arguments need to be made shows how the deterministic premise has altered the terms of debate.

If humans are mechanistic beings, it becomes harder to understand why they should not be used as means to an end or why there should be much concern with what they are thinking or feeling. At a political level, such quandaries pose a threat to liberal democracy, which relies heavily on the assumption that we are autonomous beings with the capacity to make meaningful decisions. Mechanistic theories have enjoyed an authoritarian cachet in the past. Stalin's regime embraced the work of Ivan Pavlov, famous for conditioning dogs to salivate at the ringing of a bell. In Walden Two (1948), the American psychologist B.F. Skinner described a society whose managers use operant conditioning to suppress competitiveness and other undesired behaviors.

Alongside the conception of human beings as biological machines looms another specter: that human mental capacities will be equaled or exceeded by machines of our own creation. An influential doctrine in the philosophy of mind, congruent not only with neurobiology but with cognitive psychology and computer science, is computer functionalism. This view holds that the mind is fundamentally a computer program implemented in the brain's hardware–one which could be replicated in a different physical substrate. Notwithstanding the limited progress of artificial intelligence (A.I.), many experts expect it to achieve vast advances in coming decades. More important, the general public expects this too. The prospect arouses considerable anxiety, as reflected in the Terminators and Matrixes that populate science fiction.

The scientific and philosophical quest to understand human beings as part of the natural world thus seems to come with a hefty price. It forces us to regard ourselves as mere machines–indeed, as potentially obsolescent machines, given advances in computing. Or does it? Technologist Jeff Hawkins and philosopher John Searle both approach matters of mind and brain from a naturalistic perspective, but their arguments veer sharply from the grim picture sketched above. Both provide valuable analysis and speculation about mental phenomena while taking issue with much current scientific and philosophical thinking about the subject.

In On Intelligence, Hawkins portrays human intelligence as more subtle and flexible than anything computers do. His model suggests that while future artificial systems may possess remarkable intelligence, they will be neither human-like nor the malevolent superhuman entities of science fiction. In Mind: A Brief Introduction, Searle provides an iconoclastic overview of the philosophy of mind, arguing for a position that accepts that the mind is materially based without dismissing or downplaying mental phenomena. Searle's discussion ranges across such topics as the limitations of computers, the nature of the unconscious, and free will as a possible feature of the brain.

Hawkins, who wrote On Intelligence with science journalist Sandra Blakeslee, is a computer entrepreneur with a longstanding interest in how the brain works. He is the inventor of the original Palm Pilot and the founder of the Redwood Neuroscience Institute. In 1980, as an Intel employee, he proposed a project to develop memory chips that operate on brain-like principles. Intel's chief scientist turned him down, reasoning (correctly, Hawkins now believes) that such an effort was premature. Hawkins then sought to do graduate work at the Massachusetts Institute of Technology, to study brains as a means toward developing intelligent machines. MIT, suffused with the idea that A.I. had little need for brain research, rejected his application. In the late 1980s, Hawkins viewed with interest but growing skepticism the rise of neural networks, programs that bore a resemblance–but only a very loose one–to brain operations. He found no use for neural networks in developing the handwriting recognition system later used in Palm Pilots.

Such experiences fed Hawkins' convictions that intelligent machines must be more genuinely brain-like, and that making them so requires a new theory of how the brain operates. Neurobiology, he argues, has amassed an impressive array of detail but lacks a compelling framework for understanding intelligence and brain function. On Intelligence is an attempt to provide such a framework.

Hawkins focuses mainly on the cortex, the most evolutionarily recent part of the brain. The cortex, in his view, uses memory rather than computation to solve problems. Consider the problem of catching a ball. A robotic arm might be programmed for this task, but doing so is extremely difficult and involves reams of calculations. The brain, by contrast, draws upon stored memories of how to catch a ball, modifying those memories to suit the particular conditions each time a ball is thrown.
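
To make the contrast concrete, here is a minimal sketch in Python; the numbers, the stored throws, and the function names are all invented for illustration and come from neither Hawkins' book nor any real robotics system. One approach recomputes the physics of each throw from scratch; the other recalls the most similar remembered throw and reuses its outcome.

```python
# A toy contrast between "compute from scratch" and "recall and adjust,"
# in the spirit of Hawkins' argument. All values and names are invented.
import math

def landing_by_computation(v0, angle_deg, g=9.8):
    """Solve the projectile-range equation anew for every throw."""
    return (v0 ** 2) * math.sin(2 * math.radians(angle_deg)) / g

# "Memory": landing distances observed on past throws, keyed by
# (launch speed, launch angle).
past_throws = {(10, 30): 8.9, (10, 45): 10.3, (12, 45): 14.6}

def landing_by_memory(v0, angle_deg):
    """Recall the most similar past throw and reuse its remembered outcome."""
    nearest = min(past_throws,
                  key=lambda k: (k[0] - v0) ** 2 + (k[1] - angle_deg) ** 2)
    return past_throws[nearest]

print(landing_by_computation(11, 44))  # fresh calculation each time
print(landing_by_memory(11, 44))       # nearest memory, cheaply reused
```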

The cortex also uses memories to make predictions. It is engaged in constant, mostly unconscious prediction about everything we observe. When something happens that varies from prediction–if you detect an unusual motion, say, or an odd texture–it is passed up to a higher level in the cortex's hierarchy of neurons. The new memories are then parlayed into further predictions. Prediction, in Hawkins' telling, is the sine qua non of intelligence. To understand something is to be able to make predictions about it.
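
A toy version of that loop might look like the sketch below; the PredictiveLayer class, the token stream, and the one-step escalation rule are illustrative inventions, not Hawkins' actual model. The layer remembers which input tends to follow which, predicts the next one from memory, and passes only the surprises upward.

```python
# A schematic memory-prediction layer: it learns transitions between
# inputs, predicts each next input from memory, and escalates only
# the inputs that violate its prediction. Purely illustrative.
from collections import defaultdict, Counter

class PredictiveLayer:
    def __init__(self):
        self.memory = defaultdict(Counter)  # context -> counts of what followed
        self.prev = None

    def observe(self, item):
        """Return the item if it was unpredicted (to pass up), else None."""
        surprise = None
        if self.prev is not None:
            guess = self.memory[self.prev].most_common(1)
            if not guess or guess[0][0] != item:
                surprise = item                 # prediction failed: escalate
            self.memory[self.prev][item] += 1   # learn either way
        self.prev = item
        return surprise

layer = PredictiveLayer()
for token in ["door", "creak", "door", "creak", "door", "bang"]:
    if (unexpected := layer.observe(token)) is not None:
        print(f"unpredicted {unexpected!r} passed up the hierarchy")
```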

A key concept in this memory-prediction model is that of "invariant representations." The cortex is presented with a flux of sensory data but manages to perceive objects as stable. The magazine you're now holding, or the computer screen you're looking at, sends constantly changing inputs to your eye and optic nerve, but the subsequent pattern of neurons firing in your visual cortex displays an underlying stability. This capacity to pick out unchanging relationships gives humans considerable cognitive flexibility. Imagine looking at a picture of a face formed by dots (like those drawings in The Wall Street Journal). Now imagine each dot is moved a few pixels to the left. A human, unlike a conventional A.I. program or neural net, will easily see it as the same face.
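
One way to see, in principle, how a representation can stay fixed while the inputs shift: describe the dot pattern by each dot's offset from the pattern's own center, so that moving every dot together changes nothing. The sketch below is a drastic simplification invented for illustration; whatever the cortex does is far richer.

```python
# An invented, minimal "invariant representation" of a dot pattern:
# each dot is described relative to the pattern's centroid, so a
# uniform shift of all the dots leaves the representation unchanged.
def invariant_form(dots):
    cx = sum(x for x, y in dots) / len(dots)
    cy = sum(y for x, y in dots) / len(dots)
    return sorted((round(x - cx, 6), round(y - cy, 6)) for x, y in dots)

face = [(10, 10), (20, 10), (15, 5), (15, 2)]   # two eyes, nose, mouth
shifted = [(x - 3, y) for x, y in face]         # every dot moved a few pixels left

print(invariant_form(face) == invariant_form(shifted))  # True: same face
```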

Hawkins buttresses his memory-prediction model with a fair amount of neurobiological detail. Much of the model is speculative. There is, for instance, considerable evidence of invariant representations in the workings of the visual cortex, but it is not yet clear whether the concept applies broadly to other sensory areas and to motor regions of the cortex. Hawkins presents a list of neurobiological predictions to test his model's validity. He posits, for example, that certain layers of the cortex contain neurons that become activated in anticipation of a sensory input. Such anticipatory activity is in keeping with the idea that perception involves prediction, as well as receipt, of sensory inputs. When you glance around your living room, your brain fills in some details based on what it has seen before.

As Hawkins notes, invariant representations can be viewed as a bug, as well as a feature, in human cognition; negative stereotyping and bigotry might have roots in such invariance. The strong element of prediction involved in perception also has a downside: It could underlie people's tendency to see what they want to see. Overall, though, Hawkins' model underscores the considerable capabilities of human intelligence. It provides a plausible explanation of how the speed and agility of human thought can exceed the capacities of computers, even though the latter have components that operate far faster than neurons.

The model may also offer insight into creativity, which arguably arises from the brain's propensity to make predictions. In Hawkins' view, there is a continuum between everyday actions and perceptions and the production of great novels or symphonies. The cortex during normal waking moments combines its invariant memories with the details of what is happening now; it is constantly predicting things that are similar to, but at least slightly different from, what it has experienced in the past. Our brains are geared to come up with something new.

Hawkins ventures that memory and prediction will be crucial to an understanding of consciousness, but he acknowledges that his model does not probe deeply into how and why consciousness exists. He draws a link between consciousness and memory through a thought experiment: If your memories of yesterday's activities were erased, so would be your sense that your behavior had been conscious. He speculates as to why vision, hearing, and other senses are (normally) experienced as qualitatively distinct, even though their inputs are all converted into patterns in the cortex. The answer, he suggests, might involve the diverse connections between the cortex and other parts of the brain.

In his final chapter, Hawkins writes enthusiastically about the prospects for intelligent machines. He expects rapid progress in the development of brain-like systems in the next several decades, citing speech recognition, vision, and smart cars as promising near-term applications. He imagines super-intelligent systems that will predict the weather, foresee political unrest, and understand higher-dimensional spaces. Yet he emphasizes that intelligent machines will not be similar to us. They will have something like a cortex and senses, but not human-like bodies, emotions, or experiences–things it would be very difficult, and generally pointless, to give them. They will not strive for power, wealth, status, or pleasure. They will not be angry at being "enslaved."

To illustrate the error of likening machines to human beings (and vice versa), Hawkins draws on a well-known thought experiment: A man who understands no Chinese is placed in a room with a wall slot through which he receives questions written in Chinese. Following a rule book, he replies to the questions with other Chinese symbols. To an outside observer, he seems to understand Chinese. But in fact, he has no idea what the questions or answers are about.

For Hawkins, the story of the Chinese Room points to limitations of conventional A.I. and of the Turing Test, the standard that a computer is intelligent if a human inquirer cannot distinguish its replies from a person's. Hawkins adds, however, that the Chinese Room would display intelligence if it contained a memory system that could make predictions about the content of the Chinese messages passed through the slot. This is an interesting wrinkle but a debatable point. One can imagine the man in the room adeptly foreseeing which symbols will follow which others but still not knowing what they mean.
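
The dispute is easy to make concrete. In the toy sketch below, the rule book, the symbols, and the predictor are all invented for illustration (a real rule book would be unimaginably larger): a lookup table produces fluent replies, and a simple frequency count "foresees" which symbol usually comes next, yet nothing in the program attaches meaning to any symbol.

```python
# A toy Chinese Room: a rule book maps incoming symbols to replies,
# and a frequency table "predicts" which symbol tends to come next.
# Every entry here is invented; the point is that both tricks are
# pure symbol manipulation, with no meaning attached anywhere.
from collections import defaultdict, Counter

rule_book = {"你好": "你好", "你饿吗": "我不饿"}  # opaque pairings, to the man inside

def reply(symbols):
    """Look up an answer; no understanding required."""
    return rule_book.get(symbols, "请再说一遍")

# Hawkins' wrinkle: a memory that anticipates the next incoming symbol.
transitions = defaultdict(Counter)
history = ["你好", "你饿吗", "你好", "你饿吗", "你好"]
for a, b in zip(history, history[1:]):
    transitions[a][b] += 1

def predict_next(symbol):
    """Foresee what usually follows, still without knowing what it means."""
    guesses = transitions[symbol].most_common(1)
    return guesses[0][0] if guesses else None

print(reply("你饿吗"))       # a fluent reply from pure lookup
print(predict_next("你好"))  # a successful prediction, but no comprehension
```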

The man who first asked us to imagine the Chinese Room was John Searle, a Berkeley philosopher who has written influentially about mind, language, and other subjects. His point was that a computer manipulates symbols but attaches no meaning to them; it understands nothing. Searle revisits the Chinese Room in Mind: A Brief Introduction. He rebuts the common counterargument that it is the overall system–man, room, rule book–that understands Chinese. The point is the same, he contends, even if the man is in an open field and has memorized the rule book. Indeed, Searle believes his original argument did not go far enough in debunking computer intelligence; something is a computer, he elaborates, only if an intelligent observer interprets it as such.

At the core of Mind: A Brief Introduction is Searle's effort to situate mental activity in the physical world. Consciousness, he argues, is a biological phenomenon; it is a process of the brain, much as digestion is a process of the stomach. He emphasizes, however, that consciousness cannot be dismissed as an illusion or defined in terms of lower-level neurobiological processes. Conscious states exist insofar as someone experiences them–they have a "first-person ontology"–and in this regard they are distinct from physical phenomena that have a "third-person ontology." The pain of banging into a coffee table (unlike the table itself) is real only because you feel it. Searle terms his position "biological naturalism" and contrasts it with the conventional categories of materialism and dualism.

Searle's picture leaves open the possibility of free will, defined here in contradistinction to determinism. In this view, quantum mechanical indeterminism at the micro-level may produce free will as a higher-level feature of the brain. In making decisions, the brain would draw upon the unpredictable behavior of its constituent particles. But wouldn't such freedom consist of mere randomness? Searle argues that this objection involves a fallacy of composition, confusing the properties of a system with those of its parts. Our pervasive experience of free will, he acknowledges, may be an illusion. But if so, it is a strange illusion, one that requires vast biological resources to maintain yet somehow survived evolution's travails.

Searle ranges broadly across the subject of mental phenomena, poking holes in much received philosophical and scientific wisdom. A key feature of conscious experience, he notes, is its unified structure; one normally encounters sights, sounds, and so on as part of one's overall environment. Neurobiology, he ventures, will ultimately benefit more from a "unified-field" approach to consciousness than from the currently favored "building-block" emphasis. Searle also takes issue with philosophical arguments that humans perceive not the real world but merely "sense data." Such claims, he contends, rely on slippery language and dubious assumptions.

The concept of the unconscious, Searle argues, is indispensable for explaining some forms of human behavior, but it is sometimes pushed beyond its applicability. Unconscious mental states, in his telling, are states that could in principle become conscious. It is possible, for instance, to believe that George W. Bush is president even when you are sound asleep. In Searle's view, however, cognitive scientists are incorrect to say, for example, that people see by performing "unconscious" computations on visual stimuli. The brain processes involved, much like the workings of the liver, are not the sort of thing that could be conscious; hence they are nonconscious rather than unconscious.

Searle closes with a discussion of the elusive concept of the self. A longstanding philosophical tradition, initiated by David Hume, regards the self as "a bundle of perceptions"; we have a series of experiences but not an inner essence. Searle argues, to the contrary, that consciousness, a capacity to initiate action, and an ability to act on the basis of reasons do amount to a self–a "non-Humean self" that is more than just a set of experiences. Having such a self provides continuity between one's past, present, and future; it is what enables a person to take responsibility and make plans.

Mind: A Brief Introduction and On Intelligence are thought-provoking and, no less important, anxiety-reducing. By dispelling overstated mechanistic claims arising from recent trends in neurobiology and philosophy, these books serve to combat public fears and forestall a possible backlash against science and technology. Humans can be part of the natural world without being mere machines, and without being outdone by our own machines.

These books cast light on how it is possible to have a rich mental life while living in a physical universe. In so doing, they throw up roadblocks against any push for political authoritarianism or social engineering that might arise from increased knowledge of how brains work. Far from advancing tyranny, neurobiology may be starting to provide a deeper understanding of what human freedom is all about.