Harassment With Impunity

Last week the New York Times ran an interesting piece entitled “Motherhood in the Age of Fear,” written by Kim Brooks, who left her child in a car for five minutes to run an errand and, as a consequence, wound up with a warrant for her arrest. The piece reflects on the outsized attacks, from both law enforcement and everyday citizens, on parents who leave their children unattended – but safe – for short periods of time.

These sentiments are relatively new. In research on this topic that is interesting in its own right, Thomas et al. (2016) trace the shift in concern over children left alone from the 1970s – when Americans accepted that children could be left alone in parks, on their way to school, and so on – through the following twenty or so years, during which something of a moral panic emerged; Americans in the modern era now forbid their – and others’ – children from being left unattended for even short periods of time. The authors suggest that this new norm was driven by media reports of child kidnappings, which in turn led people to fear for the unattended child.

Fear alone, however, doesn’t really explain the panic. After all, Thomas et al. say, “[t]he fact that many people irrationally fear air travel does not result in air travel being criminalized. Parents are not arrested for bringing their children with them on airplanes. In contrast, parents are arrested and prosecuted for allowing their children to wait in cars, play in parks, or walk through their neighborhoods without an adult.” The authors continue, quoting David Pimentel, who wrote: “In previous generations, parents who ‘let their kids run wild’ were viewed with some disdain by neighbors, perhaps, but subjected to no greater sanction than head wagging or disapproving gossip in the community. Today, such situations are far more likely to result in a call to Child Protective Services, with subsequent legal intervention.”

In short, as I have recently been discussing, the reaction to leaving children unattended has some of the feel of a moral panic: even small “infractions” – which might not even be against the law – are met with harsh censure and punishment.

Brooks goes on to make an interesting point. As she puts it in the subtitle of her article: “Women are being harassed and even arrested for making perfectly rational parenting decisions.” In the piece itself, she writes:

… it occurred to me that I had never used the word harassment to describe this situation. But why not? When a person intimidates, insults or demeans a woman on the street for the way she is dressed, or on social media for the way she speaks out, it’s harassment. But when a mother is intimidated, insulted or demeaned because of her parenting choices, we call it concern or, at worst, nosiness.

There is, I think, a deep point here about morality. What distinguishes the cases she mentions here, how a woman is dressed versus her parenting choices? As she points out in the article, the parenting choices in question are neither illegal nor dangerous, in parallel with the clothing choices. The key difference is that – now, unlike in the 70s – these parenting choices have been moralized. Some subset of the population morally condemns these actions. It is not a coincidence that the art accompanying the Times article shows a person surrounded by frowning expressions in one frame and wagging fingers in another.

Here is the brief way to put this subtle but important point: as a society, we believe that it’s ok to harass people if they are doing something we morally condemn, even if their action is protected by the law and is minimally harmful or completely harmless. It’s important to note that whether or not we harass others probably depends a great deal on whether we think others morally condemn what they’re doing as well. The more we think others share our moral position, the more likely we are to act.

My prior recent discussions illustrate this point. Certain people who exercise their right to free speech – and don’t hurt anyone – are harassed to the point of distraction, and the world applauds. It’s ok to harass someone who is perceived by observers to have done wrong, even if the law is on their side and no harm has been done. (Again, witch hunts illustrate this point.)

Note that the reverse is true. If the act in question is one that we see as morally acceptable, we object to harassment of the person engaged in the behavior. Consider breastfeeding in public places. While breastfeeding in public is legally protected in many places, in others it remains exposed to public indecency laws. What we find is that people who consider public breastfeeding immoral condemn mothers in the same way that Brooks describes. But others rush to the mother’s aid – instead of piling on – because they don’t moralize breastfeeding in public. Indeed, my favorite coffeeshop has a sign indicating its support, heading off any attempts to harass.

It goes still further. It’s not acceptable to harass people for doing things we don’t moralize, even if those things are harmful. As indicated above, we don’t wag our fingers at parents taking their children on planes, even if we think flying is dangerous. Just driving around in a car with a child is dangerous, but most people would not stand for a bystander yelling at a parent for doing so. Perhaps more cogently, consider how we would react to a person chastising a parent for feeding their child the 24 grams of fat and 60 grams of carbs in a Happy Meal. As long as the parent is doing something harmful but not immoral, finger wagging is verboten.

And it won’t do to say that it’s the danger that’s the difference here. The point of the Thomas et al. work is that the causality goes the other way. They write that “when people make a negative moral judgment about a parent who leaves her child alone, their estimate of the danger facing that child is higher than for a situation that objectively poses equal risk to the child, but does not elicit the same moral disapproval.” The moral judgment is driving the danger estimate.

The lessons for us as citizens seem clear enough. When you support the condemnation of someone who has done something within their rights but that you don’t like – especially if no harm is involved – consider whether piling on, or supporting those who do, is the ethical choice. Piling on might be fun. It might even feel right. When reflecting, consider all the people who, historically, have piled on when others engaged in behavior that was widely considered wrong: homosexuality, miscegenation, and so forth. Those people probably felt that they were on the side of the angels as well.

How Does Being Observed Influence Moral Judgment?

Just a quick note on some interesting new work coming out in Evolution and Human Behavior. A paper in press by Lee et al. asks whether people’s moral judgments differ depending on whether they are being observed. To do this, subjects were assigned to two different conditions. In one, subjects were told that their responses to the questions they saw were being “monitored and manually recorded by two experimenters due to some technical glitch,” while subjects in the control group received no such instructions.

Their primary interest was in whether observation would change judgments in moral dilemmas. They used fifty questions in all. Dilemmas were of various types, but a key comparison relates to dilemmas such as the Trolley Problem. As most readers at this point know, in the classic footbridge version of the Trolley Problem, the subject must decide whether it is permissible to push one person off a footbridge in order to save five people on the trolley tracks. Pushing the person is the utilitarian answer: it’s the one that leads to the greatest good (one dead versus five). Not pushing is the deontological answer: it’s the one that corresponds to a moral imperative (in this case, one about killing a person, even to save many).

The authors find that “social observation increased the proportion of deontological judgments in moral dilemmas.” That is, if you were the person on the footbridge, you would want others to be around so that the subject didn’t push you, but if you were on the trolley tracks, you would want the subject to be unobserved, in which case they would be more likely to push the one to save you and your four friends.

Why? Why should being observed cause someone to choose the option that leads to a worse outcome? You might think that being observed would cause people to be more likely to make the choice that was most beneficial to others. The authors speculate that the reason is that “deontological decisions in moral dilemmas evoke the perception of warmth-related positive traits such as trustworthiness and sociability,” or, relatedly, that not pushing signals “their propensities to avoid harming innocent others.”

These possibilities raise the question of why choosing the more harmful option overall signals these positive attributes. Is it really “sociable” to choose the option that leads to worse outcomes? Perhaps. Another possibility seems to be that people know that pushing the person off the footbridge will be seen by observers as immoral and the sort of thing for which one could be punished. In most legal systems, after all, not pushing, even if it leads to harm, is not punishable, but pushing the person, even to save others, is. (I don’t know if “duty to help” laws require pushing. Anyone?) This fact, that pushing might lead to punishment, might tilt judgments under observation toward the option that avoids punishment. This would be consistent with some of my earlier work, which shows that punishment increases under conditions of observation.

As the authors indicate, these lab results could have real world implications. As they say, “many ethical conundrums in the real world are essentially social in that they require public disclosure of one’s moral stance.” If these results do hold in the real world, then being observed could make people more likely to make moral judgments that lead to worse overall outcomes. Given that so many moral judgments are observed, this fact might have widespread implications.

 

Moral Panic Part II – A Sense of Proportion

In my last post, I discussed one of two interesting features of moral panics: the tendency for people to pile on the alleged perpetrator instead of standing up for them, even when – at least in retrospect – they did little or nothing wrong.

In this post, I discuss a second feature of moral panics, that people frequently favor draconian punishments for even mild offenses. Modern Americans recoil when they hear of hands cut off to punish theft. We similarly shake our heads about Singapore, where the death sentence is mandatory for, among other things, possession of 15 grams of heroin.

But the American penal system is also panicked about drugs. Three strikes laws have been used to condemn prior offenders to life sentences for absurdly tiny offenses, like stealing a pair of socks. (The court also imposed a fine of $2,500; the issue of working wages in prisons is a topic for another time.) A high school student who sends an explicit picture of themselves to someone who has asked for just such an explicit picture – consensual sexting – often faces felony charges as well as being required to register as a sex offender.

I like the rendering by Critcher (2017), who, discussing disproportion in the context of moral panics, puts it this way:

Fundamentally, “the concept of moral panic rests on disproportion” (Goode & Ben-Yehuda, 2009, p. 41, emphasis in original). It is evident where “public concern is in excess of what is appropriate if concern were directly proportional to objective harm” (Goode & Ben-Yehuda, 2009, p. 40). Statistics are exaggerated or fabricated. The existence of other equally or more harmful activities is denied.

In short, disproportion is a key, repeating element in moral panics.

In my last post, I referred to the case of Aziz Ansari, and quoted Caitlin Flanagan at The Atlantic, writing about the case, which I render again here:

… what she and the writer who told her story created was 3,000 words of revenge porn. The clinical detail in which the story is told is intended not to validate her account as much as it is to hurt and humiliate Ansari. Together, the two women may have destroyed Ansari’s career, which is now the punishment for every kind of male sexual misconduct, from the grotesque to the disappointing.

The key point from the perspective of moral panics is the latter end of the scale, “the disappointing.” Suppose we understand Ansari’s behavior – not, to be clear, that I am saying that this is how I take it – to be within the boundaries of the law but perhaps outside the boundaries of gentlemanly conduct. Is the punishment merited? Should years of work put into a profession be erased because of one “disappointing” episode?

Maybe. After all, if we agree Ansari is free to be ungentlemanly, we must also agree that people are free to tweet whatever they want.

Still, Flanagan’s remark about the “clinical detail” strikes me as insightful. The moralization of scenarios such as the one played out in Ansari’s apartment opens up a space for venom, a space the mob can inhabit, throwing shade with near impunity. The victims of the moral mobs are dehumanized, and – as befits animals – no treatment is too severe, as indicated by the stolen-socks case above.

It’s important to note that the mob mentality can penetrate organizational structures. Worries about harassment in and around the workplace were an important part of the #metoo movement. Should Justine Sacco, whose story I discussed last time, have been fired from her job? Suppose that the employees of her firm were screaming for her firing, but cooler heads deemed the tweet an obvious (if ill-considered and offensive) joke. The urge people have to jump on the bandwagon and join the moral mob shapes decisions made by those who determine the fate of people such as Sacco. What ought they to have done? How should they think about the “right” punishment?

One might be tempted to reply that of course they should still have fired her. The firm is a private concern, and should protect itself. If the mob is coming, throw them their victim.

This is a tempting line to take, especially since nearly everyone, statistically, will be part of the mob, rather than its victim.

But to return to Flanagan’s quote above: should the punishment for a disappointing date be the end of a career and the destruction of a life? Should the punishment for one vile public statement, set against countless benign private acts, be global humiliation? Given that moral norms change rapidly, how does anyone know that their particular shortcomings – and we all have them – won’t be moralized and – here’s the key – weaponized?

Moral Panic & The Joy of Piling On

In the face of a panic it is the job of those who know better to stand and say… wait… this is misplaced anxiety. — Malcolm Gladwell.

The quotation above is from Gladwell’s recent podcast entitled, “The Imaginary Crimes of Margit Hamosh.” It reminds me of the famous poem by Martin Niemöller, the one that begins, “First they came for the Socialists…” and ends with “Then they came for me—and there was no one left to speak for me.” Why, indeed, do people not speak up? The podcast recounts the story of Dr. Hamosh, a scientist at Georgetown accused of scientific misconduct, at a time—the early nineties—when such accusations were sprouting like weeds. Put through untold hours of scrutiny and reputation-destroying questioning by the NIH’s Office of Research Integrity (ORI), her offense seems to have turned on using the word “presently” – as the English do – to mean “soon,” rather than how (most but not all) Americans do, to mean “now.” (She was ultimately exonerated; Gladwell wrote about the story in the Washington Post, for those interested.) Gladwell’s point is that the diligent ORI was, in the hunt for the supposed epidemic of scientific misconduct, destroying careers and reputations – and no one stood to say, “wait, this is misplaced anxiety.”

So the panic that Gladwell has in mind is not that of patrons at a movie theater on fire, but rather a moral panic, the worry that some moral transgression is happening everywhere with dire consequences, and must be stopped, damn the cost.

Moral panics come about with some regularity whenever a sufficiently large number of humans get together, which is to say pretty much all the time. Americans are probably most familiar with the moral panic surrounding witchcraft and the subsequent Salem witch trials in the late 17th century. Twenty people (and even two dogs) were executed when all was said and done, illustrating the awesome power of moral panics to destroy. The twenty people killed by the moral mob were, of course, innocent of witchcraft, to say nothing of the poor dogs.

The worry – really, the panic – that witches and witchcraft were everywhere was, in Gladwell’s phrase, “misplaced anxiety.” Why did no one stand and say this?

That is an interesting psychological question, and one that remains timely. For instance, Gladwell links the investigations of scientific fraud to the scare in Belgium in the late nineties that led to the recall of 2.5 million bottles of Coke… which turned out to be just fine. Moral panics can seemingly break out any time anywhere about anything.

On the psychology front, moral panics have a number of shared features, but I’ll focus on just two: one in the remainder of this post, and one in the next.

Taking the first of these two features, as Gladwell’s quote points out, in the face of these massive miscarriages of justice and lives ruined, there is often a distinct lack of individuals who know better to stand up. In fact, modern experiences with moral transgressions seem to paint a different, even, the opposite picture: an eagerness to pile on. The headline the Times ran on the story about the woman who, granted, had a moment of stupidity, captures it precisely: “How One Stupid Tweet Blew Up Justine Sacco’s Life.” The Twitter mob – the current incarnation of the normal, everyday mob with their torches and pitchforks – knows no mercy.

To drift into academic matters for a moment, this seems to fit uneasily with many theories of morality. Many modern theories of morality – though by no means all – focus on notions of harm and deterrence. Why do we morally condemn and punish? To prevent future harm. But that seems hard to square with Twitter mobbing. Surely after the first critical replies Justine would never, ever tweet like that again. (And this sets aside the question of whether the “harm” here is covered by theories of morality.) Why do third parties delight in jumping on the moralistic bandwagon, expressing their disapproval, heaping punishment after punishment on the perpetrator?

I don’t propose to answer this vexing question here – though readers interested in the sort of answer I favor can consult my work with my former student Peter DeScioli. As a very informal matter, one sort of (proximate, unsatisfying) answer is simply that people enjoy piling on. Having seen a moral mob or two, I can say that my sense is that people take great joy in expressing the moral failing of the victim of the day. The word that always occurs to me is gleeful. Those feeding on Aziz Ansari’s carrion – see below – seemed to me to do so with glee. This, of course, pushes the question back: why is piling on so enjoyable?

A second sort of answer is, perhaps obviously, the cost of speaking up. In research with Alex Shaw and Peter DeScioli, we have found that in certain contexts even simply remaining neutral, let alone coming in on the “wrong” side, can be costly to one’s social relationships. Others make inferences about you based on the moral judgments you make. “Can you believe that scientist had an error in her 50,000-word grant proposal?! She’s a horrible person, right?” The correct answer – as long as one isn’t the sort that “knows better” and is willing to “stand up” – is always an emphatic “right!” We burnish our moral credentials by condemning the person everyone else is standing in line to condemn. That is, piling on confers a reputational benefit: one is signaling one’s moral virtue and, so, how good a group member and individual one is.

The full answer is no doubt more complex. But whatever the reasons, the larger point here is that piling on is a key feature of moral panics, and, really, the one that Gladwell is pointing to in the quotation. We should strive to understand why it happens, and, of course, as people, we should strive to be the ones standing up instead of the ones piling on.

Do we?

When I think of modern moral panics, the case of Aziz Ansari I mentioned above comes to mind. Now, don’t misunderstand me. I absolutely believe that sexual assault, harassment, and indeed any coercion should be punished. Regarding the now-famous account of a woman’s date with Aziz Ansari, opinions seem to vary regarding his behavior. Was it harassment or coercion? Or was it something less than that? Whatever it was, Caitlin Flanagan at The Atlantic characterizes the result this way:

… what she and the writer who told her story created was 3,000 words of revenge porn. The clinical detail in which the story is told is intended not to validate her account as much as it is to hurt and humiliate Ansari. Together, the two women may have destroyed Ansari’s career, which is now the punishment for every kind of male sexual misconduct, from the grotesque to the disappointing.

The “clinical detail” is interesting in its own right, and I’ll return to that next week. But the other part of the hurting and humiliating of Ansari isn’t the detail per se; it’s the decision to write about the evening publicly. Ansari’s career could probably have withstood the clinical details if they were rendered only to the woman’s circle of friends. The details help, but the real attack is in the decision to go public. Given the human love of piling on, making the incident public was the key piece in cementing viral disparagement of Ansari.

Gladwell is right that those who know better ought to stand up. The psychology that underlies human morality – especially the peculiar tendency for people to enjoy joining the moral mob – explains, however, why they generally do not. I’ll return to some of the consequences of this in my next post.

World Cup Soccer, Social Adjustment, and the Origins of Hooliganism. And Nelson from the Simpsons.

How many times have you heard someone explain that a child – or an adult – acted out in anger or violence because they were insecure, had low self-esteem, or were poorly adjusted? This sort of connection, from low self-esteem to aggression – and the reverse, a link between high self-esteem and achievement – is and has been a popular one, reflected in – and maybe propagated by – portrayals in popular media. To take but one example, the authoritative Simpson’s Wiki confidently asserts regarding the school bully, Nelson Muntz, that “the most likely cause of Nelson’s poor behaviour is his low self-esteem…” A key problem with this view – that low self-esteem plays a causal role in violence and aggression – is that, as Boden (2017) recently put it in the similarly authoritative Wiley Handbook of Violence and Aggression, “there is no evidence to suggest that low self‐esteem plays a causal role in violence and aggression.”

So, with the World Cup in full swing, and Brazil still in the running, this seems like a good moment to discuss a forthcoming paper in my old journal, Evolution and Human Behavior, which reports work that looks at this connection in the context of soccer (hereafter, football, in deference to the Cup) fans. A new paper by Martha Newson and colleagues investigates whether hooliganism in football is, as has been suggested, due to “social maladjustment” or, instead, to something more “positive”: the degree to which people feel part of their particular group, or what they call “identity fusion.”

So, Newson et al. surveyed 439 (male) football fans, asking them questions about their fandom, whether they had been in football-related fights, willingness to fight and die for one’s team (!), identity fusion, social adjustment, and a number of other items. In terms of their Social Adjustment Scale (SAS), they find that “none of the SAS sub-scales correlated with our main variables of interest… Nor was there evidence for social maladjustment contributing to violence [or] a willingness to fight/die” for their team. In contrast, they find that “hooligan acts (both past violence reports and endorsements of future fighting/dying for one’s club) are most likely to occur among strongly fused fans.”

In short, it doesn’t look like, in this context at least, being socially maladjusted makes one prone to violence. Instead, it’s being a super big fan of your team. Now, the usual caveats must be kept in mind. The sample here isn’t completely random. The data are self-reported. And add in there the usual concern about correlation and causation. (Having said that, if it were true that social maladjustment caused violence, then the correlation should have been there. Correlation does not logically entail causation, but usually if there is causation, you should be able to detect a correlation.)

Are there broader lessons from this work? As indicated above, my view is that this work plugs into a larger debate about where antisocial behavior comes from. In contrast to the whimsical example of Nelson from the Simpsons, recent work undermines the view that bullying is driven by having low self-esteem. Reciprocally, the putative benefits of high self-esteem continue to be suspect.

Note that while discussions of self-esteem have often focused on educational settings, the recent work by Baumeister and Vohs (linked above) should be taken seriously in the workplace as well. As they put it, referring to work by Orth et al.: “Self-esteem mainly affected subjective outcomes, such as relationship satisfaction and depression. The more objective the measure was (e.g., salary, occupational attainment), the less effect self-esteem had…. Despite their large sample, there was no effect whatsoever on occupational status. Thus, high self-esteem leads to being more satisfied with your job but not with getting a better job.”

Finally, results such as these have potentially important implications for anyone trying to improve their own – or others’ – behavior. While the idea that increasing self-esteem will produce improved outcomes – better educational attainment, a better job, less aggression – has historically been a popular one, the present state of knowledge should make one cautious, even skeptical, of this idea.

Stepping back even further, as some have been suggesting for quite some time, it might be better to stop thinking of self-esteem as a cause and start thinking of it as an effect. Self-esteem might be the feeling one gets when one is doing well – professionally, socially, etc. – rather than the feeling that gets one to do the things that will help one do well. If that’s true, then interventions in the classroom and in the workplace shouldn’t focus on making people feel better about themselves but – and this really shouldn’t be a surprise – on helping people accomplish the sorts of things that will lead to success and, as a consequence, to feeling good.

(Note: This entry has been cross-posted on Psychology Today)

How Should Societies Allocate Their Stuff?

One of my favorite novels is The Phoenix Guards by Steven Brust. Brust writes this novel from the perspective of one Paarfi of Roundwood, a scholar from the fictitious world Brust created. Paarfi begins with a little preface about how he came up with the idea for the historical novel, based on some reading he was doing of a manuscript by another (also, of course, fictitious) author. He writes:

One thing that caught our eye occurred in the sixty-third or sixty-fourth chapter, where mention was made of a certain Tiassa who “declined to discuss the events” leading up to the tragedy.

From this brief phrase, Paarfi/Brust produce a story of magic and adventure that stretches to over 350 pages and bears a singular resemblance to The Three Musketeers but set in a world with sorcery to go with the swords.

The only reason I mention this is that the rest of this post is a meditation on a recent editorial by Bryan W. Van Norden about free speech. I’m not going to focus on the editorial per se; I was struck instead by two sentences in the piece, and my remarks are, like The Phoenix Guards, a lengthy reaction to a short part of the whole. Van Norden writes:

Access to the general public, granted by institutions like television networks, newspapers, magazines, and university lectures, is a finite resource. Justice requires that, like any finite good, institutional access should be apportioned based on merit and on what benefits the community as a whole. (My italics.)

I think it’s worth contemplating this claim for two reasons. First, the question of who gets invited to speak on campus – and ultimately is allowed to speak on campus, which is not always the same thing – is an important and contentious issue that bears close scrutiny. Second, there is a much broader question about how finite goods – which is basically all goods (and services) – should be apportioned, whether because justice requires it or for any other reason.

Let’s take the second piece first.

If it’s right that “institutional access” should be apportioned – just like any other good – based on merit and social welfare, then we should be able to put any other good in that sentence and it should still make sense. (I use “social welfare” as a shorthand for “what benefits society as a whole.”) Here are a few examples.

Like any finite good…

…chocolate bars should be apportioned based on merit and social welfare.

…sexual partners should be apportioned based on merit and social welfare. (I add this only because of Robin Hanson’s recent discussion about this.)

…first class seats on airplanes should be apportioned based on merit and social welfare.

…medical care should be apportioned based on merit and social welfare.

…admission to colleges should be apportioned based on merit and social welfare.

…kidneys should be apportioned based on merit and social welfare.

To most Western readers, some of these claims probably sound a lot more sensible than others, with the ones toward the bottom sounding more reasonable than the ones toward the top.

Indeed, again for most Westerners, we have a fairly strong sense of how goods (and services; hereafter just “goods”) ought to be apportioned, and far and away the basis is neither merit nor social welfare, but rather prices. Who gets the chocolate bars? Whoever is willing and able to pay for chocolate bars.

The overwhelming majority of goods are indeed allocated this way, and historically arguments have been required to justify deviating from this allocation system. (Karl Marx produced such an argument…) The current medical care system in the U.S., and the debates surrounding it, are an obvious example. Everyone agrees that medical care is finite; people disagree (strongly) about the right way to allocate it. But examples such as the medical care system illustrate the broader rule: by and large, the capitalist West has decided that markets and prices will determine allocations. In cases in which prices aren’t used, the decision has to be made another way. For example, at water fountains, access is decided on a first come, first served basis. For medical care, it is the baroque system of providers, insurers, and the state, all serving up a stew of allocations all but impenetrable to us mortals.

So why does Dr. Van Norden assert that institutional access ought to be apportioned in some other way?

Well, first, I should confess I don’t really know. But second I should lay my cards on the table about how I generally approach the question of how people come to think about how scarce resources ought to be apportioned. A number of years ago, some colleagues and I conducted a study in which a scarce resource – money, in this case – was to be divided between two participants in an online experiment. The two participants were told the rules of the interaction – one person would have to work a bit harder than the other – and then asked how the money that was allocated by the experimenter ought to be divided. Before players knew whether they would have the easier or harder task, they more or less agreed on how to allocate the money. However, after they learned which role they would have, the person who worked harder came to believe that the allocation ought to be based on effort, rather than simply split evenly between the two participants. The player who worked less came to believe an even split was a more sensible idea.

In short, the answer to the question about how scarce resources ought to be allocated depended exquisitely on what allocation regime worked to the best interests of the person making the judgment.

Of course self-interest is not the only determinant of people’s views on allocation regimes. The worlds of psychology and economics are never so simple. But as the expression goes, the race is not always to the swift, but… that’s the way to bet. (Attributions vary.)

And, indeed, in some cases, what matters is not individual differences, but the good or service to be allocated. To return to the example I drew on in my last post, kidney allocation seems to most people to be best done based on factors such as urgency of need and place in line. But we would recoil at the idea that sexual partners should be allocated that way, or indeed by prices. And these views seem to be relatively broadly held.

So the moral of the story to this point is, first, that it doesn’t seem right that people broadly think merit and social welfare should dictate allocation of goods. Second, people in fact differ on how they think we should divvy things up, and at least sometimes they do so in a way that tracks their own interests. Third, intuitions depend on what the good is, exactly.

Which brings us back to the question of institutional access. How should a university allocate its finite speaking slots? The answer, to me, depends on what you think the function of those slots is. If they are solely about the financial health of the institution, then those making invitations ought to invite the people who will maximize that health – entertaining, famous people, who seem to contribute to that end. I recall that Penn had Lin-Manuel Miranda speak at commencement in 2016. I’m as devoted a fan of Hamilton as anyone, but I’m not sure he guided or inspired the Class of 2016.

A different goal of university invitations might be to advance the institution’s educational mission, which would presumably contribute to its financial goals as well. In that case, entertaining, famous people might not be as desirable as those who contribute to education and learning.

Might those who invite speakers take merit and social welfare into account? Sure. I’ll have more to say about that down the line, but it doesn’t seem to me that those criteria ought to count as first principles. In the end, it could be that there are no general principles about how to allocate scarce resources – or, at least, none so general that they improve on the answer one tends to see across the social sciences: it depends.

Taboos & Moral Waste

The New York Times recently ran a heartbreaking story with the headline, “Where a Taboo Is Leading to the Deaths of Young Girls.” The piece discusses an ancient but ongoing practice in Nepal, called chhaupadi, that makes it taboo for a woman who is menstruating to stay in her home. The women sleep instead in small huts or elsewhere apart from the family home, which brings attendant life-threatening dangers, including falling prey to snakes. The article recounts fatalities that resulted from this practice.

There is a certain judgmental tone to the Times piece, which points out that the practice is hundreds of years old, based on superstition, and, of course, fatal to young women. Even the title of the piece conveys the notion that this is a “cultural” phenomenon, the sort of thing that happens elsewhere – Where a Taboo Is Leading to Deaths – as if such things don’t happen in America.

Don’t they?

First, it’s important to note that taboos are really just strong moral rules. And a moral rule is roughly just a way of saying that doing such and such is wrong and that if you do such and such you are susceptible to punishment by the group. Now, how punishment occurs varies a lot from culture to culture. It can be shunning and shaming in one place, and the police/judicial system in another. But the idea is more or less the same everywhere.

Even more importantly, the link from moral rules to harm is the same everywhere. In the present case, the thread of the story is that this moral rule – this taboo – causes harm, in some cases death. In some sense, this might seem counterintuitive. After all, morality is supposed to prevent or reduce harm. Isn’t it unusual that in this case morality causes harm?

Not at all. The perhaps overused example of the Trolley Problem illustrates the point. It’s judged immoral to push the fellow off the footbridge, even though not pushing means that five (hypothetical) people die instead of just one. This intuition, that one ought not to push, is remarkably consistent across cultures.

But this pattern is in no way limited to hypothetical vignettes. To take just one example – about which one of us has written extensively – consider the case of the morality surrounding abortion. Until the Supreme Court case Roe v. Wade in 1973, states were free to prohibit abortions, a prohibition driven by morality, the idea that terminating a pregnancy was morally wrong.

Now, again, elsewhere we have argued that the moral commitments people claim animate their abortion views might not really be the source of their position on abortion, but there is no doubt that the case for banning and criminalizing abortion has and continues to have a strong moral component, frequently grounded in supernatural beliefs.

This runs parallel to the case in Nepal, but the parallel doesn’t end there. When women were prevented from obtaining legal abortions, many turned to illegal means. These illegal abortions, done in the shadows of the law, frequently lacked the tools and precautions needed for clean, safe procedures. And, of course, because the procedures were illegal, women could not rely on the courts as a remedy if anything went wrong.

And, of course, things did go wrong. Exact statistics regarding how many women were permanently injured or killed as a result of illegal abortions are not available, but estimates place these values in the hundreds or even thousands per year.

These women did not have to die. Of course medical technology has advanced, but there is no doubt that the vastly smaller number of deaths as a result of abortion procedures in modern times is due to the fact that these procedures are done in the light of day, with the protection of the rule of law.

The parallel with the Nepal case should be clear. In both cases, moral beliefs produce moral rules and these moral rules in turn cause young women to suffer and die needlessly.

Examples such as these, in which moral beliefs lead to harm, aren’t very difficult to find. An example I’ve written about previously is the very broadly held belief that it is wrong to pay for organs. (Note: I have archived my prior blog, which used to live on the Evolutionary Psychology web site; the piece on kidneys is here.)

I think that this phenomenon, the causal link between a group’s moral commitments and harm, is sufficiently common that it merits its own term. In the future, I’ll use the phrase “moral waste” to capture this idea. (Hat tip to my former student, Peter DeScioli, who I believe was the first to use the term this way.) Moral waste is the welfare – in lives, suffering, money, or other currencies – lost because of shared beliefs that something – selling a kidney, sharing a house during menstruation, and so on – is wrong.

In the future, I’ll argue that an important public policy goal should be to reduce moral waste.

And there’s plenty to clean up.

Recent trends in party identification

Between the 2012 and 2016 elections, evangelical and less-educated whites moved further toward Republicans, while non-Christian and more-educated whites moved further toward Democrats.

I’ve been maintaining a cumulative file of all the publicly released Pew political and religious surveys since the beginning of 2013. It’s simply enormous, currently containing data from over 100,000 respondents.

For today’s post, I was curious about the extent of recent changes in party identification across demographic groups, changes that are fairly subtle and require a ton of data to identify reliably. So I took my big Pew database and started looking for the major movements.

Turns out the basic story combines two themes, one involving white evangelicals vs. white non-Christians and the other involving non-degreed vs. degreed whites. Generally speaking, when looking from early 2013 to mid-late 2016, white evangelicals and non-degreed whites have tended to increasingly identify as Republicans, while white non-Christians and degreed whites have tended to increasingly identify as Democrats. More specifically, the groups shifting towards Republicans have been white evangelicals (of all education levels) along with white non-evangelical Christians without college degrees, and the groups shifting towards Democrats have been white non-Christians (of all education levels) along with white non-evangelical Christians with college degrees.

The chart below shows the trend lines. (The scale here assigns 1 to Democrats, 2 to independents who lean towards Democrats, 3 to non-leaning independents, 4 to independents who lean towards Republicans, and 5 to Republicans.) In addition to the directional shifts among the two white groups, non-whites had an interesting pattern of increased Democratic support near the presidential elections but softened support in the time in between.
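For readers who want to see how a scale like that gets built, here is a minimal sketch in Python. The column names and response labels below are hypothetical stand-ins, not the actual Pew variables (which differ across survey files); the logic is simply party identification first, then lean for everyone else.

```python
import pandas as pd

def party_scale(party, lean):
    """Map party ID plus lean to the 1-5 scale used for the trend lines:
    1 = Democrat, 2 = lean Democrat, 3 = non-leaning independent,
    4 = lean Republican, 5 = Republican."""
    if party == "Democrat":
        return 1
    if party == "Republican":
        return 5
    if lean == "Democrat":      # independents and others placed by their lean
        return 2
    if lean == "Republican":
        return 4
    return 3

# Toy data with made-up column names, for illustration only
df = pd.DataFrame({
    "party": ["Democrat", "Independent", "Republican", "Independent"],
    "lean":  [None, "Republican", None, None],
    "group": ["non-white", "LENEC", "evangelical", "HENEC"],
})
df["party5"] = [party_scale(p, l) for p, l in zip(df["party"], df["lean"])]
print(df.groupby("group")["party5"].mean())  # group means produce the trend lines
```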

The next chart shows the percentage breakdowns at both ends of the time period, that is, in combined surveys from January to March of 2013 and from August to October of 2016. Comparing these two periods, non-whites are pretty similar, with around 69% landing or leaning Democratic and around 18% landing or leaning Republican. Among whites who are either non-Christian or degreed non-evangelical Christians, though, there was a noticeable shift—from 50% Democrat and 36% Republican in early 2013 to 56% Democrat and 33% Republican in mid-late 2016. And then there’s a particularly pronounced shift among whites who are either evangelicals or non-degreed non-evangelical Christians—from 33% Democrat and 57% Republican in early 2013 to 27% Democrat and 66% Republican in mid-late 2016.

(Note: HENEC refers to high-education non-evangelical Christians and LENEC refers to low-education non-evangelical Christians, where the dividing line between “high” and “low” is whether they have 4-year college degrees.)

Two things worth mentioning here. First, there’s a basic overlap between evangelical identification and having less education, on the one hand, and non-Christian identity and having more education, on the other. As I’ve noted in prior posts, people who label themselves “born again or evangelical” Christians tend to be churchgoing Protestants with less education, while people with more education are relatively more likely not to be Christians.

Second, the recent party trends among whites don’t appear to be a specific response to Trump’s nomination. That is, evangelicals and less-educated whites were already moving in a more solid Republican direction before Trump, and non-Christians and more-educated whites were already moving in a more solid Democratic direction. In fact, it seems likely that Trump’s nomination was itself made possible by the long-term decline in Republican identification among those with the most education, though it also seems likely that his nomination and subsequent victory have further reinforced this decline.

What are the big deals when linking demographics and politics?

Analyses of the public’s political opinions from Pew Research (and to an extent Gallup as well) often follow a certain script. On whatever topic they’re covering, they’ll usually start with the current numbers. Then they’ll compare the latest survey with older ones to look for trends over time.

Then they’ll break things down by party affiliation and/or liberal-conservative self-identification. For most kinds of issues, these breakdowns show very large differences—self-labeled liberal Democrats really tend to hold liberal views on various issues while self-labeled conservative Republicans really tend to hold conservative views. These kinds of splits are interesting, but often carry tremendous causal ambiguity. They result from complex mixtures of people who hold specific issue opinions because of their party or ideology along with people who favor a party or ideology because they have a given set of issue opinions.

To give a better sense of what’s driving things, the last step in the analysis often shows various demographic splits. In recent Pew political posts, for example, they showed trust in government broken down by gender, age, and education, views on the rich and the poor broken down by gender, education, and income, and budget preferences broken down by gender, age, education, and income.

Overall, it’s a pretty good script. But I’m frequently puzzled by one aspect: the choice of demographics. Gender, age, education, and income often contribute to political divisions. Yet they’re mere ripples in the pool these days compared with the crashing waves of religion and race.

In this post, then, I want to show as plainly as I can that the contributions of religion and race to political differences are typically much larger than the contributions of other standard demographic categories. That is, if you’re doing a demographic breakdown of public opinion on political topics, you’d usually want to start with religion and/or race, because they’re the biggest deals. In addition, I want to show where religion tends to be especially dominant versus where race tends to be especially dominant.

And that’s the chart below. Basically, I took a few years of recent Pew data, grouped a number of their individual questions into multi-item opinion measures (on rich-poor economic redistribution, on homosexuality, etc.), and used multiple regressions to predict differences in those political items as a function of (1) information on religious identity (whether folks are evangelical, Catholic, atheist, etc.) along with frequency of religious service attendance, (2) racial and ethnic information along with immigrant status, (3) education, (4) age, (5) gender, (6) family income, and (7) region of the country (South, Northeast, etc.) along with population density (urban/suburban/rural).

You can see in the chart that religion/church (the black bars) and race/immigrant (the green bars) are just way bigger deals than education, age, gender, income, and region/density. Further, there are some kinds of items where race/immigrant variables are particularly big deals (party identification along with views on rich-poor issues, immigration, gun regulation, racial issues, and white nationalism, which combines views on immigration, race, etc.), while there are other kinds of items where religion/church variables are clearly the dominant demographic predictors (self-labelled liberal/conservative ideology along with views on homosexuality, abortion, marijuana legalization, environmental regulation, and Middle Eastern conflicts).

(Notes: The displayed values are additive contributions to the overall multiple R in forward stepwise OLS regressions. The predictor set included a large number of binary categories—being black, being Catholic, having a college degree, having an income below $40k, going to church at least once a week, and so on—as well as interactions involving all predictors. I entered individual predictors based on which had the biggest impact at any given step.)
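For the curious, here is a minimal sketch of what that forward-stepwise bookkeeping might look like. This is not the actual analysis code: it assumes the binary predictors have already been built as 0/1 columns of a DataFrame, ignores interaction terms, and simply credits each step’s gain in multiple R to the demographic group the chosen predictor belongs to.

```python
import numpy as np
import pandas as pd

def stepwise_group_contributions(y, X, groups):
    """Greedy forward selection over candidate predictor columns.

    At each step, add the column that most increases the multiple R of an
    OLS fit, and credit that increase to the group (religion/church,
    race/immigrant, education, etc.) the column belongs to. Returns a dict
    mapping each group to its summed contribution to the overall multiple R.
    """
    col_to_group = {c: g for g, cols in groups.items() for c in cols}
    contrib = {g: 0.0 for g in groups}
    chosen, remaining, current_r = [], list(X.columns), 0.0

    def multiple_r(cols):
        # Correlation between OLS-fitted values and the outcome
        A = np.column_stack([np.ones(len(y))] + [X[c].values for c in cols])
        beta, *_ = np.linalg.lstsq(A, y.values, rcond=None)
        return np.corrcoef(A @ beta, y.values)[0, 1]

    while remaining:
        gains = {c: multiple_r(chosen + [c]) - current_r for c in remaining}
        best = max(gains, key=gains.get)
        if gains[best] <= 1e-6:      # stop when nothing helps anymore
            break
        chosen.append(best)
        remaining.remove(best)
        contrib[col_to_group[best]] += gains[best]
        current_r += gains[best]
    return contrib
```

In use, y would be a numeric opinion measure, X a DataFrame of 0/1 demographic indicators, and groups a dict like {"religion/church": [...], "race/immigrant": [...], "education": [...]}.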

The demographic categories other than religion and race are never seriously big deals (at least with this set of political items). Sometimes they’re moderate deals, though. Income is a moderate deal in predicting rich-poor positions. Education is a moderate deal in predicting views on immigration and guns. Age differences are a moderate deal when it comes to immigration and marijuana legalization. And so on.

Some of these demographic differences would be larger in a stand-alone analysis. Here, I’m using a regression analysis that accounts for the biggest differences first, and then shows only the marginal contributions for less important items. So, for example, age differences in issue opinions and partisanship are in part driven by the fact that younger groups are more racially diverse and less religious. Thus, when religion and race go into the models at earlier steps, the remaining marginal contributions of age differences aren’t typically very large. The story is similar with regional and urban/rural differences, which are largely driven by racial, religious, and educational differences.

Also, there are some key demographic items that aren’t typically measured in Pew samples. From other sources, for example, I’ve found political differences based on sexual orientation, veteran status, and occupational information.

Why avoid religion and race?

So it’s puzzling to me that Pew analyses often highlight age, gender, education, and income while often avoiding religion and race. I suspect the neglect of religion relates in part to the fact that Pew has separate groups focused on politics and religion, so perhaps the politics folks don’t like to crowd into the religion folks’ turf. I also suspect some of it relates to the complexity of the religious divisions. To really find the religious fault lines, you have to spend some time grouping and regrouping the categories. In my analyses, I’ve usually ended up settling into a not-entirely-obvious system that combines Mormons and non-Catholic evangelicals into one category, other Christians in another category, “nothing in particular” and those with missing information in another, specific non-Christians (Jews, Buddhists, etc.) in another, and then atheists and agnostics in yet another. There’s no great a priori insight that drives any of this for me; it just tends to be a set of categories that carves the sample effectively when looking at political differences.

I also suspect that lots of people just don’t really enjoy thinking about religious and racial differences, and particularly don’t enjoy noticing just how much of our current political differences are attributable to these sources. There’s something creepy about it—it’s too close to home and it’s too near the bone, and all that. Or maybe that’s not it; I really don’t know.

Millennials and the 2016 Election

There seem to be at least three things that were true of Millennials in last year’s presidential election. First, they heavily favored Sanders over Clinton in the Democratic primary. Second, they heavily favored Clinton over Trump in the general election. But, third, they were substantially less likely than older folks to vote at all.

The charts below give a look at these patterns using data from the 2016 American National Election Studies (ANES). (It’s important to remember that this is just one sample. As I showed in a prior post, there are various differences among the ANES, the Cooperative Congressional Election Study (CCES), and the exit polls.)

If you’re used to thinking about exit polls, the big difference here is that I’m also showing non-voters. According to the ANES results, substantially more Millennials than older folks didn’t vote in the primaries (73% vs. 51%) and didn’t vote in the general election (38% vs. 20%). This isn’t something new—younger folks are usually quite a bit less likely to vote than older folks.

The ANES numbers, however, are almost certainly underestimating non-voters across the board. According to the United States Election Project (which uses actual vote tallies rather than after-the-fact surveys), around 41% of eligible voters didn’t vote in the general election. This is substantially higher than what the ANES sample suggests. In fact, to get to a 41% non-voting total, you’d have to assume something like a bit over half (rather than 38%) of Millennials and a bit over a third (rather than 20%) of older generations not voting. The problem with after-the-fact surveys is in part that some non-voters lie about voting, but it’s also that voluntary surveys disproportionately pick up the kinds of people who have opinions and don’t mind sharing them—that is, the kinds of people who are more likely to vote in the first place.
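As a back-of-the-envelope check on that adjustment, suppose, purely for illustration (this share isn’t given in the post), that Millennials made up roughly 30% of eligible voters:

```python
millennial_share = 0.30   # assumed for illustration only, not from ANES

# ANES self-reported non-voting rates vs. the adjusted guesses in the text
anes_nonvoting     = millennial_share * 0.38 + (1 - millennial_share) * 0.20
adjusted_nonvoting = millennial_share * 0.52 + (1 - millennial_share) * 0.36

print(f"ANES implies ~{anes_nonvoting:.0%} not voting")       # ~25%
print(f"Adjusted figures imply ~{adjusted_nonvoting:.0%}")    # ~41%
```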

Millennials and the primaries

The top two charts show the primaries. And, sure enough, the ANES data suggests that, when they voted in the Democratic primaries, Millennials overwhelmingly chose Sanders over Clinton. But keep in mind that these data also suggest that older Democratic primary voters chose Clinton over Sanders in about the same overwhelming proportions. Here too, though, there are reasons not to oversell the exact numbers. The ANES sample gives Clinton a bigger total margin over Sanders (with about 59 Clinton votes for every 39 Sanders votes) than analyses based on the actual vote totals (where Clinton received 55 votes for every 43 Sanders votes). Also, the CCES sample shows Clinton running almost even with Sanders among Millennials, something that seems very unlikely given the ANES and exit poll results, but nonetheless represents a cautionary data point.

While Millennials seem to have heavily favored Sanders over Clinton in the primaries, their actual favorite option by far was to not turn out to vote (again, even the 73% non-voting number in the ANES sample for Millennials in the primaries is probably substantially too low). And, even among those Millennials who turned out, there were probably at least as many non-Sanders primary voters as Sanders voters. If you neglect these points, it’s easy to overstate Millennials’ support for Sanders.

Millennials and the general election

In the general election, we see again that Millennials were a lot less likely to vote than were older generations. And, as I discussed earlier, the ANES non-voting estimates for the general election are too low.

But for those who did vote, Millennials substantially preferred Clinton over Trump. Millennials also were more likely than older generations to support third-party candidates.

A big reason why Millennials generally favor Democrats over Republicans relates to generational differences in demographics such as race and religion. This shows up clearly in the CCES sample (which I analyzed in prior posts on Clinton/Trump voter demographics). Just looking at Clinton vs. Trump general-election voters in the CCES data, Clinton got 64% of the two-party vote among Millennials while she got only 48% of the two-party vote among older generations. That’s a 16-point gap.

And while a 16-point gap might seem like a big deal, it’s really not when you compare it to various bigger deals. So, for example, in the CCES data, there’s a 49-point gap between whites (42% voted for Clinton over Trump) and blacks (91% voted for Clinton over Trump), and there’s a 37-point gap between evangelicals (34% voted for Clinton over Trump) and non-Christians (71% voted for Clinton over Trump). Start combining such items—focusing, say, on white evangelicals—and the gaps grow even larger.

In fact, it turns out that the lion’s share of the Millennial gap in the CCES is due to the fact that, compared with older generations, Millennials have more racial minorities, fewer evangelicals and other Christians, more LGBT folks, and fewer military veterans. In short, what begins as a 16-point Millennial gap in the two-party 2016 vote gets reduced to a mere 5-point gap when statistically controlling for race, religion, sexual orientation, and veteran status.
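To make the “statistically controlling” step concrete, here is a minimal sketch of the kind of model involved, run on synthetic data rather than the CCES; the variable names and effect sizes are invented for illustration. The point is just that the Millennial coefficient shrinks once demographic controls enter the model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: Millennials are more racially diverse and less evangelical,
# and those traits (not age itself) drive most of the vote difference.
millennial  = rng.binomial(1, 0.3, n)
nonwhite    = rng.binomial(1, 0.25 + 0.20 * millennial)
evangelical = rng.binomial(1, 0.30 - 0.15 * millennial)
p_clinton   = 0.48 + 0.03 * millennial + 0.35 * nonwhite - 0.25 * evangelical
clinton     = rng.binomial(1, np.clip(p_clinton, 0, 1))

df = pd.DataFrame(dict(clinton=clinton, millennial=millennial,
                       nonwhite=nonwhite, evangelical=evangelical))

raw        = smf.logit("clinton ~ millennial", data=df).fit(disp=False)
controlled = smf.logit("clinton ~ millennial + nonwhite + evangelical",
                       data=df).fit(disp=False)

# The raw Millennial gap largely disappears once race and religion are held
# constant, analogous to the 16-point gap shrinking to about 5 points.
print("raw coefficient:       ", round(raw.params["millennial"], 2))
print("controlled coefficient:", round(controlled.params["millennial"], 2))
```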

While these fundamental demographics can explain most of the Millennial gap in the general election, they can’t, as far as I can tell, explain much of the Millennial gap in support for Clinton vs. Sanders. I have yet to see anything that really explains the strong generational splits within the Democratic primary (e.g., when I analyzed the issue positions of Millennials, it turned out that they’re actually not unusually liberal on Sanders-emphasized redistribution issues, even though they are unusually liberal in some other areas, such as views on homosexuality, marijuana, Middle Eastern conflicts, and immigration).

Another thing we don’t really know is the future. There are some safe bets, though. Like other generations before them, Millennial voter participation is likely to increase as they age. It’s also likely that, for the time being, given their demographics, Millennials will continue to prefer Democrats over Republicans when they do vote. Eventually, though, Millennials and those who come after them will inevitably force changes in the current party coalitions—there just won’t be enough white Christians around to support a viable national party organized primarily around white Christians, and so the parties will continue to evolve.
