"Can you name a scientific discovery that has ever added to our understanding of morality?" asked Discovery Institute senior fellow Wesley Smith over dinner after our recent debate with science reporter Chris Mooney in New York City. Fortunately, I could.
Anyone who has ever taken an undergraduate course in moral philosophy will remember the moral dilemma posed in the "trolley problem": You are standing next to a switch on a trolley track, and you notice that a runaway trolley is about to hit a group of five people who are unaware of their danger. However, if you throw the switch, the trolley will divert onto a side track and hit only one person. What do you do? Most undergraduates say that they would throw the switch; after all, five lives are worth more than one.
In the second version of the problem, you are standing on a bridge over a trolley track beside a fat person. Again you notice that the runaway trolley is headed toward five unaware people. Do you push the fat person onto the track to stop the trolley? Notice that the moral calculus is the same: one life to save five. But in this second version most undergraduates say that they would not push the fat stranger onto the track. (We will simply ignore the question of whether you should jump onto the track yourself to save the five people—that's for a graduate-level moral philosophy seminar.)
Moral philosophers have puzzled over the disparity in the answers to these two versions of the moral dilemma posed by the trolley problem. Then along came a graduate student in psychology at Princeton University, Joshua Greene, who had access to a functional magnetic resonance imaging (fMRI) machine that allowed him to scan the changes in blood flow in human brains in real time. He put some undergraduates into the fMRI, posed both versions of the trolley problem to them, and found that their brains lit up differently in each case.
Greene and his colleagues found "that brain areas associated with emotion and social cognition (medial prefrontal cortex, posterior cingulate/precuneus, and superior temporal sulcus/temporoparietal junction) exhibited increased activity while participants considered personal moral dilemmas, while 'cognitive' brain areas associated with abstract reasoning and problem solving exhibited increased activity while participants considered impersonal moral dilemmas." In other words, the first case (impersonal) runs straight through our prefrontal cortices, which coldly balance costs and benefits, while the second case (personal) also engages those parts of our brains that make us feel empathy and that make us hesitate to shove someone off a bridge.
Granted, Greene's fMRI experiment does not tell us what the right answer to the trolley problem is, but it does tell us a bit about how many of us make moral decisions.
More recently (and after my dinner with Smith), researchers at University College London found that men enjoy retribution more than women do. Tania Singer and her colleagues set up an experiment in which 32 volunteers witnessed people play a financial game in which some players were fair and others were unfair. Later the volunteers were placed in fMRI scanners, where they watched as both the fair and unfair players received a mild electric shock. When a fair player was shocked, the parts of the brain associated with feelings of empathy lit up in both women and men. When an unfair player was shocked, the brains of the women volunteers still lit up with empathy. In the men's brains, however, not only were the empathy areas silent, but the parts of the brain indicating feelings of reward were activated in a big way. The men evidently felt happy when the bastards got what they deserved. I know I would have.
Singer noted that the male volunteers "expressed more desire for revenge and seemed to feel satisfaction when unfair people were given what they perceived as deserved physical punishment." She added, "This investigation would seem to indicate there is a predominant role for men in maintaining justice and issuing punishment."
This kind of information about how men and women tend to differ in their moral judgments (or feelings of righteousness) will certainly be of interest to, say, lawyers when they select jury members.
Smith is right when he suggests that science cannot tell us what is right and what is wrong morally speaking. However, as the foregoing examples show, science can tell us more about why we make the moral decisions that we do. As neuroscience develops, I believe that the discoveries it makes about how our brains work will help us to make better moral decisions in the future.
For example, go ahead and heave that fat stranger onto the tracks.