On Intelligence, by Jeff Hawkins with Sandra Blakeslee, New York: Times Books, 261 pages, $25
Mind: A Brief Introduction, by John R. Searle, New York: Oxford University Press, 326 pages, $26
Neurobiology's advances generate anxiety as well as joy and hope. On the joyful and hopeful side, there are the prospect and reality of improved treatments for brain diseases and debilities. But anxiety arises over what the science tells us, or will tell us, about ourselves. Thoughts and feelings may be reduced to brain structures and processes. Consciousness and free will may be proven unimportant or illusory. Much of what we value about ourselves, in short, may be explained--or, worse, explained away.
The prevailing trends in the philosophy of mind reinforce such concerns. The field is dominated by schools of materialism that describe mental phenomena as types or side products of physical phenomena. Mind-body dualism, which posits a separate existence for the mind, has been effectively eclipsed (although it seems to receive continued implicit acceptance from many nonexperts). Some forms of materialism argue that the mental phenomena in question do not even exist.
This turn toward the mechanistic could have baleful cultural and political consequences. It threatens to undermine people's sense of responsibility and self-worth. There is the danger of what philosopher Daniel Dennett calls "creeping exculpation," as more and more human behavior is attributed to material causes. Criminal violence, for example, might be excused as a consequence of low levels of serotonin or monoamine oxidase in the brain. Many philosophers, including Dennett, argue that humans should be regarded as responsible agents even if human behavior is fully determined. But the very fact that such arguments need to be made shows how the deterministic premise has altered the terms of debate.
If humans are mechanistic beings, it becomes harder to understand why they should not be used as means to an end or why there should be much concern with what they are thinking or feeling. At a political level, such quandaries pose a threat to liberal democracy, which relies heavily on the assumption that we are autonomous beings with the capacity to make meaningful decisions. Mechanistic theories have enjoyed an authoritarian cachet in the past. Stalin's regime embraced the work of Ivan Pavlov, famous for conditioning dogs to salivate at the ringing of a bell. In Walden Two (1948), the American psychologist B.F. Skinner described a society whose managers use operant conditioning to suppress competitiveness and other undesired behaviors.
Alongside the conception of human beings as biological machines looms another specter: that human mental capacities will be equaled or exceeded by machines of our own creation. An influential doctrine in the philosophy of mind, congruent not only with neurobiology but with cognitive psychology and computer science, is computer functionalism. This view holds that the mind is fundamentally a computer program implemented in the brain's hardware--one which could be replicated in a different physical substrate. Notwithstanding the limited progress of artificial intelligence (A.I.), many experts expect it to achieve vast advances in coming decades. More important, the general public expects this too. The prospect arouses considerable anxiety, as reflected in the Terminators and Matrixes that populate science fiction.
The scientific and philosophical quest to understand human beings as part of the natural world thus seems to come with a hefty price. It forces us to regard ourselves as mere machines--indeed, as potentially obsolescent machines, given advances in computing. Or does it? Technologist Jeff Hawkins and philosopher John Searle both approach matters of mind and brain from a naturalistic perspective, but their arguments veer sharply from the grim picture sketched above. Both provide valuable analysis and speculation about mental phenomena while taking issue with much current scientific and philosophical thinking about the subject.
In On Intelligence, Hawkins portrays human intelligence as more subtle and flexible than anything computers do. His model suggests that while future artificial systems may possess remarkable intelligence, they will be neither human-like nor the malevolent superhuman entities of science fiction. In Mind: A Brief Introduction, Searle provides an iconoclastic overview of the philosophy of mind, arguing for a position that accepts that the mind is materially based without dismissing or downplaying mental phenomena. Searle's discussion ranges across such topics as the limitations of computers, the nature of the unconscious, and free will as a possible feature of the brain.
Hawkins, who wrote On Intelligence with science journalist Sandra Blakeslee, is a computer entrepreneur with a longstanding interest in how the brain works. He is the inventor of the original Palm Pilot and the founder of the Redwood Neuroscience Institute. In 1980, as an Intel employee, he proposed a project to develop memory chips that operate on brain-like principles. Intel's chief scientist turned him down, reasoning (correctly, Hawkins now believes) that such an effort was premature. Hawkins then sought to do graduate work at the Massachusetts Institute of Technology, to study brains as a means toward developing intelligent machines. MIT, suffused with the idea that A.I. had little need for brain research, rejected his application. In the late 1980s, Hawkins viewed with interest but growing skepticism the rise of neural networks, programs that bore a resemblance--but only a very loose one--to brain operations. He found no use for neural networks in developing the handwriting recognition system later used in Palm Pilots.
Such experiences fed Hawkins' convictions that intelligent machines must be more genuinely brain-like, and that making them so requires a new theory of how the brain operates. Neurobiology, he argues, has amassed an impressive array of detail but lacks a compelling framework for understanding intelligence and brain function. On Intelligence is an attempt to provide such a framework.
Hawkins focuses mainly on the cortex, the most evolutionarily recent part of the brain. The cortex, in his view, uses memory rather than computation to solve problems. Consider the problem of catching a ball. A robotic arm might be programmed for this task, but programming one to do so is extremely difficult and involves reams of calculations. The brain, by contrast, draws upon stored memories of how to catch a ball, modifying those memories to suit the particular conditions each time a ball is thrown.
The cortex also uses memories to make predictions. It is engaged in constant, mostly unconscious prediction about everything we observe. When something happens that varies from prediction--if you detect an unusual motion, say, or an odd texture--it is passed up to a higher level in the cortex's hierarchy of neurons, where it is stored as a new memory. These new memories are then parlayed into further predictions. Prediction, in Hawkins' telling, is the sine qua non of intelligence. To understand something is to be able to make predictions about it.
A key concept in this memory-prediction model is that of "invariant representations." The cortex is presented with a flux of sensory data but manages to perceive objects as stable. The magazine you're now holding, or the computer screen you're looking at, sends constantly changing inputs to your eye and optic nerve, but the subsequent pattern of neurons firing in your visual cortex displays an underlying stability. This capacity to pick out unchanging relationships gives humans considerable cognitive flexibility. Imagine looking at a picture of a face formed by dots (like those drawings in The Wall Street Journal). Now imagine each dot is moved a few pixels to the left. A human, unlike a conventional A.I. program or neural net, will easily see it as the same face.
Hawkins buttresses his memory-prediction model with a fair amount of neurobiological detail. Still, much of the model is speculative. There is, for instance, considerable evidence of invariant representations in the workings of the visual cortex, but it is not yet clear whether the concept applies broadly to other sensory areas and to motor regions of the cortex. Hawkins presents a list of neurobiological predictions to test his model's validity. He posits, for example, that certain layers of the cortex contain neurons that become activated in anticipation of a sensory input. Such anticipatory activity is in keeping with the idea that perception involves prediction, as well as receipt, of sensory inputs. When you glance around your living room, your brain fills in some details based on what it has seen before.