Is the Human Brain a Prediction Machine?
In The Experience Machine, philosopher and scientist Andy Clark offers an updated theory of mind.

The Experience Machine: How Our Minds Predict and Shape Reality, by Andy Clark, Pantheon Books, 304 pages, $30
For René Descartes, minds were essentially thinking (or feeling) things. For the founding fathers of behaviorism, minds were identical with behaviors—talking, habits, dispositions to act in one way or another. More recently, minds have been imagined as a kind of computer: the software running on the hardware of the brain.
For Andy Clark, a cognitive scientist and philosopher at the University of Sussex in the United Kingdom, minds are first and foremost prediction machines. "Instead of constantly expending large amounts of energy on processing incoming sensory signals," he writes in The Experience Machine, "the bulk of what the brain does is learn and maintain a model of body and world." Our mind/brain is "a kind of constantly running simulation of the world around us—or at least, the world as it matters to us."
In other words, while people typically imagine the mind taking in information through our senses and then processing that information to create a model of the world that we experience and act upon, Clark reverses the order: Minds create a model of the world, and the senses tell us how to update the model if the world is different from what was predicted. Those predictions make up most of what we experience—but when things don't go as expected, the mind makes corrections to improve the model.
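Clark's picture can be caricatured in a few lines of code. The sketch below is not from the book; it is just a toy illustration of the general idea that an internal model is corrected only by prediction error, using a simple delta rule with a made-up learning rate:

```python
# Toy "prediction machine" (illustrative only, not Clark's model):
# the system maintains an internal estimate of some quantity in the
# world and revises it only by the gap between prediction and input.

def update(prediction, sensory_input, learning_rate=0.3):
    """Nudge the internal model toward the sensed value by a
    fraction of the prediction error (a simple delta rule)."""
    error = sensory_input - prediction
    return prediction + learning_rate * error

belief = 0.0  # the model's initial expectation
for observation in [10, 10, 10, 10]:
    belief = update(belief, observation)

# Repeated consistent input shrinks the error, so the belief
# converges toward the observed value; a surprising input would
# produce a large error and a correspondingly large correction.
```

On this caricature, experience is dominated by `belief` (the running prediction), and the senses enter only through `error`, which matches the reversal Clark describes.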
This may seem counterintuitive (and it is), but Clark makes a strong case in a very accessible and engaging book, bringing together a number of recent trends in the sciences of the mind, including the importance of the body to our mental processes (what's called "embodiment") and how our day-to-day cognition extends out into the world through our use of tools. Along the way, he shows how his approach can explain a diverse set of phenomena, including illusions, mood disorders, chronic pain in the absence of tissue damage, and why police mistakenly see weapons where there are none.
***
I should probably note, especially since I am writing in Reason, that the experience machine of Clark's title is unrelated to the famous "experience machine" proposed by Robert Nozick in Anarchy, State, and Utopia. Though Clark does write of mental simulations, he is not invoking Nozick's thought experiment about a machine that can give subjects whatever experiences they like.
Clark has a rather different project. He has been a prolific theorist of the mind for more than three decades, and his new book ties together themes explored in his previous work, often updating and illustrating them with examples from more recent cognitive science research.
For example, Clark's 1998 book Being There argued against a disembodied understanding of mind (of the sort one might get from Descartes). We are not simply minds that happen to have bodies, he argued; we often think through our bodies. Clark extends this argument in The Experience Machine by reviewing recent work on the role of the gut (which contains some 500 million neurons of its own) and on how the gut microbiome influences cognition. With gut bacteria producing 95 percent of the serotonin in our bodies, we should not be surprised that scientists are beginning to trace connections between our digestive system and our moods, dietary preferences, and other mental states.
Similarly, Clark's 2008 book Supersizing the Mind, along with his earlier work with philosopher David Chalmers, moved beyond the body's role in cognition to consider the role played by external tools. Clark and Chalmers' provocative thesis—what they call the parity principle—is that we should consider "as part of the mind" anything that would inarguably be considered mental if it were carried out by the brain. For example, consider those of us of a certain age who once used our brains to remember a lot of phone numbers but now rely on our smartphones. Clark and Chalmers think we should consider those phones parts of our minds. A mind, they say, extends into those parts of the world that are regularly and reliably accessible to it.
How do the tools that constitute the extended mind connect back to the predictive brain of The Experience Machine? If the core of mentality is developing and maintaining a predictive model of the world, then cognitive tools that are reliably and predictively there for us are an important part of our predictive process. That predictive process, in turn, is what we ought to think of as our minds.
***
At this point, you may be wondering whether being part of our minds means being part of consciousness. Is Clark claiming that my smartphone is somehow constitutive of my conscious experience?
If I have a complaint about this book, it is that it does not give enough attention to these questions of conscious experience. In the sections that explore the extended mind thesis most fully, there is generally little mention of consciousness at all. That said, the book does include an interlude that takes issue with David Chalmers—Clark's extended mind collaborator—and with Chalmers' worries about consciousness.
For Chalmers, the important problem for any theory of the mind is what he calls the "hard problem of consciousness": Why and how does physical activity give rise to conscious experience at all? You could imagine an artificial prediction machine that shows all the outward behaviors that Clark's account calls for, but this prediction machine might lack any conscious experience whatsoever. It would, in Chalmers' terminology, be a "philosophical zombie."
Against these concerns, Clark proposes that the phenomenon of consciousness might instead be best captured by predictive minds making "meta-predictions" about their own predictions. While admitting this part of his story is "highly speculative," Clark gamely proposes that the predictive mind thesis may help unravel the mystery of consciousness too. Alas, his discussion here is too short to be clear about what exactly he is proposing, let alone whether that position is likely to be true.
But that disappointment is short-lived. The rest of The Experience Machine features lively and interesting discussions of how scientists have been grappling with various puzzles about the mind. For example: You probably know that placebos (such as simple sugar pills) can have real effects on the subjects who take them. But were you aware there also exist "honest placebos"—that is, sugar pills given to subjects who are told they're just sugar pills? What's more, these honest placebos can also have real effects on subjects!
Or perhaps you remember one of 2015's biggest social media phenomena, "the dress." When a photo of a dress went viral, some people insisted it was blue and black while others saw white and gold. Clark takes his discussion of this in surprising directions, as when he recounts scientific work relating the colors people saw with their sleeping patterns—e.g., whether they tend to be early risers or night owls.
All these things connect back directly to Clark's proposal that the mind is, at base, a prediction machine, guided by the expectations we have learned. If you are curious what that entails, and if you want an accessible tour through recent cognitive science, I predict that you'll find this book illuminating.