Why Do We Hate Hypocrites?
I have long been fascinated by hypocrisy. Why does inconsistency between words and deeds bother us so much? Why is it so much easier to criticize inconsistency than to criticize evil? Why is our political discourse dominated by whataboutery and the tu quoque fallacy?
In a series of experiments forthcoming in Psychological Science, Jillian Jordan and I, along with David Rand and Paul Bloom, find support for a theory of hypocrisy as false signaling. In a nutshell, it's not that hypocrites fail to practice what they preach; it's that their moralizing earns them reputational benefits that they don't deserve. Take away the reputational benefits, we find, and people forgive the hypocrisy.
This idea of hypocrisy as false signaling makes sense if you think of moral condemnation as a way to boost your reputation. When we scold others for their transgressions, we imply that we ourselves do not engage in such behavior. In our research we find that people take others’ moralizing commentary—statements such as “it is wrong to eat meat”—as an indication of the speaker’s private conduct. In fact, we find that people are more likely to infer that a speaker does not herself eat meat if she says, “it’s wrong to eat meat” than if she states outright, “I do not eat meat.” Moral condemnation thus functions as a particularly persuasive signal of private moral behavior.
We gain reputational benefits when we speak out publicly on moral issues or chastise others for their moral failings—but those reputational benefits are undeserved if it turns out that our private behavior does not align with the signal implied by our condemnation. Hypocrites are disliked, we argue, because their condemnation misleadingly implies that they are good, when their private behavior shows that they are not. Consistent with this theory, we find that hypocrites whose condemnation carries no such false signal are liked just fine.
Public Attitudes Toward Consent
While there has been a great deal of conversation in recent years about the importance of consent, there has been hardly any empirical work examining what people think consent is. It turns out that this is not a straightforward question. Consent presents endless puzzles and difficulties: coming up with a philosophically coherent and morally sound theory of consent has proven elusive for philosophers and jurists alike.
On most prevailing accounts, consent is normatively significant because it constitutes an expression of autonomy. Because of its relation to autonomy, consent must be given knowingly, competently, and freely in order to have moral force. Consent is vitiated if factors are present that compromise autonomous decision-making, such as incapacity (undermining competence), coercion (undermining freedom), or deception (undermining knowledge).
In my work, I examine whether laypeople judge incapacity, coercion, and deception as undermining consent. I find that people largely see incapacity and coercion as vitiating consent, but they feel differently about deception. A substantial proportion of survey respondents believe that morally valid consent can be granted despite the presence of significant and wrongful deception. Their intuitive judgments thus defy the prevailing legal and philosophical understandings of consent.
How Do You Get People to Show Up in Court?
In the United States, the vast majority (80-90%) of debt collection defendants fail to show up in court to contest the allegations against them. As a result, they frequently lose their cases by default. Numerous appellate courts have held that routine default is against public policy, because it breeds a system in which the state publicly declares a winner to a dispute without the opportunity to assess the relevant facts and apply the law. I am working with Jim Greiner, Dalié Jiménez, and Andrea Matthews to field a three-state randomized controlled trial (RCT) examining the effect of sending mailings to debt collection defendants urging them to appear in court. The field experiment examines how various messages and behavioral "nudges" affect the rates at which defendants file answer forms and appear in court.
Bias and Police Body-Worn Cameras
In the wake of national outrage and polarization over several high-profile police killings of unarmed citizens, reformers have called for police officers to wear body cameras. In a study of mock jurors’ perceptions of real police footage, I find that observers’ prior attitudes toward the police color their interpretation of what transpired in the videos, resulting in considerable polarization. Further, I find that video evidence does not conclusively outperform non-video testimony in minimizing decision-makers’ reliance on their prior attitudes. Study participants learned of an incident involving a police officer and a citizen by either watching a video of the altercation, reading dueling accounts of the altercation written from the perspectives of the police officer and of the citizen, reading a single account from the perspective of a disinterested third party, or reading only the police officer’s version of events. Prior attitudes toward police significantly affected judgments of the officer’s conduct in all four conditions, and the size of this effect did not differ significantly across the different types of evidence. Moreover, people who identified strongly with the police—but not those who identified weakly—became more confident in their judgments when presented with video evidence. Ultimately, I believe we should be more skeptical of the commonsense view that body cameras will reduce polarization by telling us unambiguously and definitively what happened.
Punishment and Moral Responsibility
In another line of work, I have examined the psychology of retribution and punitive attitudes. I am interested in questions such as: when tortfeasors die, why don't we impose punitive damages against their estates? The answer, I suggest, is that despite our insistence that punitive damages serve deterrence, our punishment decisions more closely follow an outrage heuristic. In another study, conducted with Daniel Herz-Roiphe, I examine moral intuitions regarding dumb luck. We find evidence that people use a folk moral theory that matches agents' luck to their character: they are willing to punish bad actors for unforeseen, unintended bad outcomes—but not to credit them for unforeseen, unintended good outcomes. Conversely, they are willing to credit good actors for good luck, but not to blame them for bad luck. We argue that this "matching heuristic" can help us understand the criminal law's puzzling treatment of luck.
Discrimination in Employment and Higher Education
Past research has demonstrated that decision-makers choosing between individual applicants often inflate the importance of criteria that happen to favor in-group candidates, while deflating those same criteria when they would favor an out-group candidate. Comparatively little attention has been paid to how race or gender bias affects which criteria are deemed important in the abstract, at the policy-setting level. That is, when a hiring committee or admissions office is setting the criteria by which applicants will be evaluated, is the decision affected by a desire to elevate the criteria that they expect will advantage certain group members? Research I am conducting with Jack Dovidio examines whether the willingness to adopt a given criterion shifts depending on which group is expected to be advantaged by its addition. We find that policy-setting decisions regarding abstract criteria can be motivated by a desire to exclude a disliked out-group. The current antidiscrimination legal regime does not protect against the problem identified by this research: employees can apply the same neutral, job-related criteria to everyone while still working to exclude members of a disfavored minority group, by favoring criteria that disproportionately disadvantage members of that group.