“The brain secretes thought as the liver secretes bile,” asserted the 18th-century French physiologist Pierre Cabanis. Last week, the Potomac Institute for Policy Studies convened a conference of neuroscientists and philosophers to ponder how our brains secrete thoughts about ethics and morality. The first presenter was neuroeconomist Gregory Berns of Emory University, whose work peers into brains to see in which creases of gray matter the values we hold sacred are lodged. The study, “The Price of Your Soul: neural evidence for the non-utilitarian representation of sacred values,” was just published in Philosophical Transactions of the Royal Society B.
Philosophers often frame arguments over the bases of ethics in terms of deontology (right vs. wrong irrespective of outcomes) and utilitarianism (costs vs. benefits of potential outcomes). Both utilitarians and deontologists would argue that it is wrong to kill innocent human beings. A utilitarian might tote up the costs of being caught committing murder or the harms to a victim’s family, whereas a deontologist would assert that it is a moral duty to avoid killing the innocent. To most people, a utilitarian reckoning in this case seems cold and psychologically broken (e.g., the kind of calculation a psychopath would make). The researchers define personal sacred values as those for which individuals resist trade-offs with other values, particularly economic or materialistic incentives.
It is this distinction that Berns probes using functional magnetic resonance imaging (fMRI) to see in which parts of subjects’ brains their moral decision-making is localized. Such scans identify activated areas of the brain by measuring blood flow.
Without going into all the details: in the study, subjects were asked to choose between various values, some hypothesized to be more deontological and others more utilitarian, e.g., you do/do not believe in God, and you do/do not prefer Coke to Pepsi. Once a baseline was established for each subject, subjects were given an opportunity to auction off their personal values for real money, up to $100 per value sold. Once the auction was over, each subject was asked to sign a document contradicting his or her personal values. Those values that subjects refused to auction off were deemed “sacred.”
Berns and his colleagues found that values identified as sacred were processed in areas of the brain associated with semantic rule retrieval. Basically, subjects were reading off moral rules, what another conference participant would later call “moral platitudes.” In addition, when sacred values were contradicted by their opposites (e.g., asserting to a believer, “You do not believe that God exists”), the researchers found arousal in the amygdala, which is associated with negative emotions.
Not surprisingly, with regard to the personal values that subjects did auction off, the areas of the brain known to be associated with evaluating costs and benefits were activated. The researchers also suggest that when policymakers employ positive or negative incentives to encourage trade-offs in foreign-policy or economic arenas, they may instead arouse sacred values, provoking a reactionary response in the people at whom the policies are targeted.
Berns also presented the results of another study [PDF] in which brain scans turned out to have identified a song that subsequently became a hit. In an earlier study, Berns and his colleagues had downloaded 15-second clips of various unknown songs from MySpace and played them for 27 adolescents while scanning their brains. That earlier study [PDF] focused on how knowing what others think about an item (in this case, a song fragment) activates brain areas associated with anxiety, motivating people to switch their choices toward the consensus. In other words, people often succumb to peer pressure.
Some years later, Berns heard one of the songs on the TV show American Idol. He wondered whether something in the earlier scanning data could have predicted a “hit” song. Mining the old brain scans, Berns found that subsequent song sales were weakly but significantly correlated with the activation of “reward” centers in the brains of the scanned adolescents. He speculates that scanning the brains of small groups might someday be used to predict cultural popularity.
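To give a rough sense of what such data mining involves (this is an illustrative sketch with invented numbers, not the study's actual analysis or data), one could compute the Pearson correlation between each song's average "reward center" activation and its later sales:

```python
import math

# Hypothetical data: mean reward-center activation per song (arbitrary fMRI
# units) and that song's later sales. All numbers are invented for illustration.
activation = [0.12, 0.45, 0.08, 0.33, 0.27, 0.51, 0.19, 0.40]
sales      = [900,  5200, 400,  3100, 2500, 6000, 1500, 4100]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# r near +1 means activation and sales rise together; near 0, no linear link.
print(f"r = {pearson_r(activation, sales):.2f}")
```

In the actual study the correlation was weak; in this toy data it is strong only because the numbers were made up to trend together.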
The next presenter was philosopher William Casebeer, who is now also a program officer at the Defense Advanced Research Projects Agency. In general, Casebeer argues that the moral psychology required by virtue theory is the most neurobiologically plausible. Basically, there is no is/ought chasm between facts and values, and evolutionary psychology, properly understood, teaches us that it is Aristotelian virtue ethics all the way down. Ethics is largely a matter of cultivating the proper moral character.
Casebeer’s talk, suggestively subtitled “How I learned to love determinism, but still respect myself in the morning,” aimed to address the longstanding problem in neurophilosophy of how to square determinism in neuroscience with a moral philosophy that celebrates the freedom and responsibility of agents. Determinism undergirds science in general and neuroscience in particular; there are no uncaused causes. However, our social institutions are shot through with assumptions about free will and agency. Is it possible to reconcile these two views? Casebeer argues that we should stop talking about free will and instead adopt a language focused on the idea of critical control centers.
Casebeer thinks that holding agents responsible depends on the notion of being in or out of control. Being in control depends on what he calls the functional architecture of a well-ordered psyche. To suggest what elements might constitute an appropriate functional architecture of the psyche, Casebeer urged us to consider a schema of meaningful control distinctions [PDF] devised by philosopher and artificial intelligence theorist Aaron Sloman. A large working memory gives a putative agent more control than a small one; so too does an ability to learn versus a fixed repertoire; having a theory of mind versus having none; an ability to reason counterfactually versus none; a robust reward-prediction mechanism versus a weak one; and a multi-channel sensory suite versus a single channel. Along these dimensions, organisms (and perhaps one day artificial intelligences) can be ranked, from microbes to humankind, as being more or less in control.
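Sloman's dimensions read like a checklist, so one could imagine scoring an agent along each of them. The following toy sketch is my own construction, not Sloman's or Casebeer's formalism, and the equal-weighted average is an arbitrary simplifying assumption:

```python
from dataclasses import dataclass, fields

@dataclass
class ControlProfile:
    # Each field is one of the control dimensions discussed above,
    # scored from 0.0 (absent/minimal) to 1.0 (robust). Purely illustrative.
    working_memory: float
    ability_to_learn: float
    theory_of_mind: float
    counterfactual_reasoning: float
    reward_prediction: float
    sensory_channels: float

    def control_score(self) -> float:
        """Naive aggregate: average across dimensions (equal weights assumed)."""
        vals = [getattr(self, f.name) for f in fields(self)]
        return sum(vals) / len(vals)

# Two hypothetical points on the microbe-to-human ranking:
microbe = ControlProfile(0.0, 0.1, 0.0, 0.0, 0.1, 0.2)
human   = ControlProfile(0.9, 1.0, 1.0, 1.0, 0.9, 1.0)
print(microbe.control_score() < human.control_score())  # human ranks as more "in control"
```

Any real account would of course need principled weights and measurements; the point is only that the schema yields an ordering rather than a binary free/unfree verdict.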
Another mechanism of control is the environment in which an organism exists. In the case of humans, Casebeer argues, a lot of outside control resides in our culture, norms, and institutions. We tell each other moral narratives in which we explain how internal control factors relate to the external environment. We take our cues about what is right and wrong to do from watching and emulating others. Our brains transmute these moral narratives into our moral characters. In other words, these narratives tell us what sorts of things are sacred (no trade-offs) and what can be evaluated on the basis of costs and benefits.
We recognize that in some environments any control system can become overwhelmed, and no one would be held responsible for what he or she does in such circumstances. For example, no one would blame you if someone spiked your coffee with LSD and you harmed a person because you hallucinated that the person intended to kill you. Indeed, we already assign various levels of culpability based on our evaluation of an individual’s ability to control himself, e.g., children, the mentally ill, etc.
At the end, Casebeer suggested that the research agenda for the next 100 years will be to generate a neuroscience of critical control distinctions. He predicted that many critical controls will be social: narratives involving the punishment of moral infractions and the reward of moral conduct become etched in our brains and build our moral characters.
Next, University of California, San Diego, neurophilosopher Patricia Churchland asked: where do values come from? She pointed toward Charles Darwin’s notion of a moral sense arising from a combination of our inborn social instincts, our habits, and our reason. Neuroscientists now know more about the neurotransmitters involved in our social instincts. At the hub of these instincts are the molecules oxytocin and vasopressin, which encourage attachment and trust. Mammalian attachment and trust are the platform from which moral values derive. Bigger brains help by giving humans a greater capacity to learn habits, to override and repress impulses, and to plan. Better memories help us keep track of who did what to whom and why, thus enabling us to track reputations and seek out cooperators. Culture is an essential part of the story, guiding and limiting our moral choices.