Just a quick note on some interesting new work coming out in Evolution and Human Behavior. A paper in press by Lee et al. asks whether people’s moral judgments differ depending on whether they are being observed. To test this, subjects were assigned to one of two conditions. In one, subjects were told that their responses to the questions they saw were being “monitored and manually recorded by two experimenters due to some technical glitch,” while subjects in the control group received no such instructions.
Their primary interest was in whether observation would change judgments in moral dilemmas. They used fifty questions in all. Dilemmas were of various types, but a key comparison relates to dilemmas such as the Trolley Problem. As most readers know by now, in the footbridge version of the Trolley Problem, the subject must decide whether it is permissible to push one person off a footbridge in order to save five people on the trolley tracks. Pushing the person is the utilitarian answer: it’s the one that leads to the greatest good (one dead versus five). Not pushing is the deontological answer: it’s the one that corresponds to a moral imperative (in this case, one against killing a person, even to save many).
The authors find that “social observation increased the proportion of deontological judgments in moral dilemmas.” That is, if you were the person on the footbridge, you would want others to be around so that the subject didn’t push you, but if you were on the trolley tracks, you would want the subject to be unobserved, in which case they would be more likely to push the one to save you and your four friends.
Why? Why should being observed cause someone to choose the option that leads to a worse outcome? You might think that being observed would make people more likely to make the choice that was most beneficial to others. The authors speculate that the reason is that “deontological decisions in moral dilemmas evoke the perception of warmth-related positive traits such as trustworthiness and sociability,” or, relatedly, that not pushing signals “their propensities to avoid harming innocent others.”
These possibilities raise the question of why choosing the more harmful option overall signals these positive attributes. Is it really “sociable” to choose the option that leads to worse outcomes? Perhaps. Another possibility is that people know that pushing the person off the footbridge will be seen by observers as immoral and the sort of thing for which one could be punished. In most legal systems, after all, not pushing, even if it leads to harm, is not punishable, but pushing the person, even to save others, is. (I don’t know if “duty to help” laws require pushing. Anyone?) This fact, that pushing might lead to punishment, might tilt judgments under observation in the direction of the option that avoids punishment. This would be consistent with some of my earlier work, which shows that punishment increases under conditions of observation.
As the authors indicate, these lab results could have real-world implications. As they say, “many ethical conundrums in the real world are essentially social in that they require public disclosure of one’s moral stance.” If these results do hold outside the lab, then being observed could make people more likely to make moral judgments that lead to worse overall outcomes. Given that so many moral judgments are made under observation, this effect might be widespread.