Politics

Soul Survival

Is "the new neuromorality" a threat to traditional views of right and wrong?


Will neuroscience revolutionize our understanding of law and morality? If so, can law and morality be saved? That was the question posed by a June conference at the American Enterprise Institute on "The New Neuromorality."

Despite being held under conservative auspices, the event had an entirely secular perspective. The only overt references to religion were tinged with irony, and the only theoconservatives on hand were in the audience. Yet some of the decidedly nonobscurantist speakers voiced support for concerns that in today's public discourse are often seen as faith-based.

In her talk on the moral challenges posed by brain enhancement and brain imaging, University of Pennsylvania neuroscientist Martha J. Farah sympathetically cited the warnings of Leon Kass, the conservative chairman of the President's Council on Bioethics, that some advances in this area may undermine intrinsic human worth. Said Farah, "There is an intuition that I think we all share, regardless of our ultimate judgment of right and wrong, that says it's a little bit like treating a person as a thing, to say we're just going to open up the hood and make them run better." She also expressed nonalarmist but emphatic concern that developments in brain imaging could pose a threat to privacy, with potential employers or the government seeking to get under the hood and diagnose such technical glitches as violent tendencies or even racial prejudice.

The keynote address was given by renowned Harvard University psychologist Steven Pinker, who described a neuromorality of personal responsibility. In Pinker's view, the worry that a biologically based understanding of human behavior will turn into a "my brain/genes/hormones made me do it" catch-all excuse stems from a basic fallacy: the assumption that bad acts deserve to be punished only if they result from some fully autonomous "free will" exempt from biological or other causation. How can we "salvage the core of responsibility" without such mystical notions? For Pinker, the answer is to shift the focus from the unanswerable question of whether an act was truly "freely chosen" to whether the perpetrator has a normally functioning brain with a normal response to the stimuli of reward and punishment.

Thus, responsibility really means deterrability: the capacity to understand that if we harm others, we'll suffer the consequences. Pinker asserted that we already use such an approach in practice. Cases in which we do not punish harmful actions because we don't assign moral responsibility to the perpetrator happen to be just the kinds of cases in which punishment cannot deter similar acts: when the harm was accidental, or when the perpetrator is too young or too mentally ill to be deterred by the threat of punishment. Even "abstract justice"–seeking to impose punishment when it's clearly not cost-effective for society and when, as with elderly Nazi war criminals, there's no chance of recidivism–ultimately serves utilitarian ends, since creating exemptions for some crimes would be too inviting to scofflaws.

But is Pinker's vision of brain-based justice far more radical than he is willing to admit? That was the case made by Princeton philosopher and neuroscientist Joshua Greene, whose snazzy presentation, illustrated with slides of pop culture images, was titled "Dueling Dualisms" but could have been called "Punishment Without Guilt." Greene noted that in all the debates about whether to blame the guilty person or his damaged brain, we assume some nonphysical core self–a soul–that makes moral judgments. What's going to happen as research in neuroscience explains more and more of the mind in physical and mechanical terms? The likelihood, said Greene, is "a lot more fighting" about morality and responsibility unless we're willing to give up the idea of the soul altogether–something that, he wryly noted, "Americans are not yet ready to do." Like Pinker, Greene spoke of a shift toward a utilitarian understanding of deterrent justice; however, he saw this as a dramatic departure from tradition because it would entail giving up on the idea that punishment is not only efficient but morally just. "What we're saying," he said, "is no one's really guilty in their souls because, secret: No one has a soul."

Stephen Morse, a professor of law and psychology at the University of Pennsylvania, played common-sense curmudgeon to Greene's brash visionary, flatly stating that for now the new neuroscience poses no more of a challenge to problems of morality and law than psychology or sociology–or astrology, for that matter. Even while declaring himself a thoroughgoing materialist, Morse insisted that "responsibility is about persons, not brains" (precisely the kind of distinction Greene had earlier mocked as dualistic) and defended the old-fashioned approach to justice. "We give people what they deserve," he said, "not because it produces good consequences, but because it's right."

If Greene's "dirty little secret" was that the soul does not exist, Morse's was that we still have no clue "how the brain enables the mind" and produces mental states or moral judgments. That there is no immaterial soul, he argued, doesn't mean that "we are not the kind of creatures we think we are–conscious, rational, intentional beings"; science or no science, the physicalist model must be resisted for the sake of human dignity and "the good life we can live together." During the question-and-answer period, Morse was grilled on whether he was smuggling the dreaded dualism in through the back door with his talk of dignity and personhood. He retorted that biophysical creatures can still have dignity and that human dignity specifically resides in rationality.

Where, then, does all this talk of neuromorality leave us? Probably not with Greene's solution. If Darwinian evolution–which is not incompatible with religion and does not require any radical rethinking of our moral tradition–is still a bone of contention, proposing to do away with the soul is not exactly a prescription for no more squabbling. Nor is doing away with retributive justice. Pinker noted, somewhat ambivalently, that "the thirst for retribution"–punishment as "just deserts" and a way to right the moral balance–may be inherent in human nature, and a legal system that does not satisfy this need may never command enough respect to be effective. Confirming this point, Greene acknowledged that in a host of studies people evaluating hypothetical crimes assess punishment based on their notions of just deserts, not deterrence.

Taking the podium again on the last panel, Pinker sounded a cautionary note: When we discuss scientific advances in either understanding the brain or building a better one, there is a tendency to lapse into what he called "the Jetsons model" and confuse science with what remains, at present, science fiction. He pointed out that "some technologies plateau at medium levels of efficiency" and that our ability to imagine the perfect mood-altering drug or the bionic implant that will let us read other people's minds doesn't mean we'll actually get them. Given "the fantastic complexity of the brain," he said, the best technological interventions are likely to be fairly crude. (For all the talk of the identity-shattering potential of Prozac, for instance, its actual effects are only marginally stronger than a placebo's.) The real danger, in Pinker's view, is that social fears of extreme changes will block modest and useful ones.

That seems like a reasonable concern. Perhaps one bit of advice for champions of science is to avoid revolutionary claims that may scare as much as they impress. There is another reason for such modesty: Several speakers cautioned that popular perception could greatly overestimate the reliability of mind-measuring technologies, with the result that people could be labeled sociopathic, dangerously impulsive, or otherwise suspect based on flawed data.

In the big philosophical picture, perhaps Morse's advice–to simply go on treating each other as autonomous and rational creatures–makes the most sense, even if rationality may be his code word for soul. I'm not sure even traditional ethics ever treated the autonomous human self as completely exempt from external causes. And one need not be a believer in immaterial souls to think that, just maybe, the rational and moral consciousness packed inside our brains is something more than the sum of our neurons.