The Psychology That Makes Dehumanization Possible
Explaining it is relatively easy. Stopping it, on the other hand...
For most people, dehumanization is something other people do.
It’s associated with extremism, atrocity, propaganda — pathologies of people who’ve gone morally wrong somehow. In popular imagination, it requires hatred, ideological fervor, a deliberate rejection of shared humanity.
If that were true, it would be far easier to explain. And far easier to stop.
The more unsettling reality is that dehumanization doesn’t require any of that. It emerges, reliably, from ordinary psychological processes when social, moral, or identity pressures get hard to manage. It’s less a belief than a cognitive adaptation — a way of restoring psychological equilibrium when reality becomes morally demanding or socially destabilizing.
That’s what makes it dangerous. And that’s why it keeps happening.
Herbert Kelman identified the core mechanism fifty years ago. In his analysis of what he called “sanctioned massacres” — political violence perpetrated against groups who posed no direct threat to the perpetrators — Kelman found that three processes reliably enabled harm: authorization, which suspended normal moral judgment; routinization, which eliminated the opportunity to raise moral questions at all; and dehumanization, which stripped victims of identity and community (Kelman, “Violence without Moral Restraint,” Journal of Social Issues 29, no. 4 [1973]: 25–61, https://doi.org/10.1111/j.1540-4560.1973.tb00102.x).
What matters about Kelman’s framing isn’t the historical context. It’s the causal logic. Dehumanization isn’t the result of violence, hatred, or moral failure. It’s a precondition — a cognitive reorganization that makes harm possible by eliminating the friction that would otherwise prevent it. People don’t dehumanize because they’re already cruel. They dehumanize so that cruelty becomes manageable.
This is the first move that matters: dehumanization is not primarily a political strategy. It’s a psychological shortcut that politics reliably activates.
Why motivated reasoning creates the opening. Decades of work in political psychology point to the same basic finding: people aren’t neutral processors of information. We reason directionally. Evidence is filtered through identity, emotion, and perceived threat long before it’s evaluated for accuracy. Ziva Kunda’s foundational account of motivated reasoning framed this not as intellectual dishonesty but as goal-directed cognition — people reason in ways that protect valued identities, relationships, and moral self-concepts (Psychological Bulletin 108, no. 3 [1990]: 480–498, https://psycnet.apa.org/doi/10.1037/0033-2909.108.3.480). Accuracy is often secondary. Lodge and Taber showed that this effect is strongest precisely when issues become entangled with social identity rather than material interest — the more identity-laden the issue, the more motivated the reasoning (The Rationalizing Voter, Cambridge University Press, 2013).
Dehumanization fits cleanly into this framework. Once an out-group is perceived as morally threatening or symbolically corrosive, continued recognition of their full humanity creates cognitive strain. Empathy complicates judgment. Moral concern introduces ambiguity. Responsibility becomes harder to avoid.
Dehumanization resolves the tension.
By flattening the other — stripping away complexity, interiority, moral standing — it allows motivated reasoning to proceed without friction. Harm no longer requires justification. Dismissal no longer feels cruel. Certainty becomes easier to maintain.
How group identity hardwires the architecture. Social identity theory deepens this picture. Group identities aren’t simply collections of shared traits; they’re contrastive structures. “Us” only coheres in relation to “them.” Henri Tajfel’s minimal group experiments demonstrated how little is required to activate in-group favoritism and out-group discrimination — random assignment to trivially defined groups was sufficient to produce differential allocation of real resources (Tajfel, Billig, Bundy, and Flament, “Social Categorization and Intergroup Behaviour,” European Journal of Social Psychology 1, no. 2 [1971]: 149–178, https://doi.org/10.1002/ejsp.2420010202).
Under conditions of threat, the same process intensifies. Group boundaries harden. Out-groups become cognitively compressed — treated as more homogeneous, less individuated, less psychologically complex. Nick Haslam’s integrative review identified two distinct forms of dehumanization running on this substrate: animalistic dehumanization, which denies uniquely human attributes and casts out-groups as primitive or uncivilized; and mechanistic dehumanization, which denies human nature attributes and renders others as cold, inert, or automaton-like. Both are “everyday social phenomena,” Haslam argues, rooted in ordinary social-cognitive processes rather than extreme prejudice (Personality and Social Psychology Review 10, no. 3 [2006]: 252–264, https://doi.org/10.1207/s15327957pspr1003_4). The infra-humanization literature — the finding that out-groups are routinely denied the secondary emotions, the complex feeling-states, that signal full moral personhood — extends this further. You don’t have to call someone subhuman to treat them as less than fully human. You just have to stop imagining their interior life.
Disgust amplifies all of this. Jonathan Haidt’s work on moral intuitionism showed that disgust responses — which evolved to protect against contamination — generalize easily, attaching themselves to people, behaviors, and symbols in ways that feel intuitive rather than deliberative (“The Emotional Dog and Its Rational Tail,” Psychological Review 108, no. 4 [2001]: 814–834, https://psycnet.apa.org/doi/10.1037/0033-295X.108.4.814). Moral judgments arrive pre-formed, and reasoning comes after, constructing justifications for conclusions that disgust has already reached.
Albert Bandura’s framework on moral disengagement provides the final piece of the standard account. Through mechanisms like moral justification, displacement of responsibility, and dehumanizing attribution, people cognitively restructure harmful behavior to preserve a positive self-concept (Moral Disengagement: How People Do Harm and Live with Themselves, Worth Publishers, 2016). The process is self-sealing. Once it begins, the cognitive work of maintaining it is lighter than the cognitive work of reversing it.
What the survey data actually shows. The social-psychological literature just described deals largely with subtle and implicit forms of dehumanization — the quiet denial of interiority, the attenuated attribution of emotion. For a long time, researchers assumed that overt, blatant dehumanization was rare and marginal, a feature of extremism rather than ordinary political opinion.
Nour Kteily’s work has complicated that assumption considerably.
Across seven studies conducted in three countries, Kteily and colleagues introduced and validated a direct measure of blatant dehumanization — the “Ascent of Man” scale, which asks respondents to indicate how “evolved” they consider members of various out-groups on a sliding scale from ape-like ancestor to modern human. The findings are not comfortable. Blatant dehumanization of target out-groups is measurable and meaningful in ordinary survey populations, not just among self-identified extremists. It uniquely predicts support for aggressive policies — torture, collective punishment, retaliatory violence — beyond what prejudice alone explains. And it’s reactive: blatant dehumanization of Arabs spiked immediately following the Boston Marathon bombings and remained elevated for months (Kteily, Bruneau, Waytz, and Cotterill, Journal of Personality and Social Psychology 109, no. 5 [2015]: 901–931, https://doi.org/10.1037/pspp0000048).
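The scale’s scoring logic is simple enough to sketch. Respondents rate each group from 0 to 100 on the Ascent image, and blatant dehumanization is typically indexed as the gap between the rating given to one’s own group and the rating given to a target out-group. The group labels and numbers below are invented for illustration, not data from the study:

```python
# Illustrative sketch of Ascent-of-Man-style scoring.
# Ratings run 0-100 (ape-like ancestor -> "fully evolved" modern human).
# All values here are hypothetical, not results from Kteily et al.

def blatant_dehumanization(ratings: dict, ingroup: str) -> dict:
    """For each out-group, return the in-group rating minus the out-group
    rating. Positive values mean the out-group is rated as less 'evolved'."""
    base = ratings[ingroup]
    return {group: base - r for group, r in ratings.items() if group != ingroup}

# Hypothetical respondent: near-ceiling for the in-group, a visible gap
# for one out-group, near-parity for another.
ratings = {"ingroup": 92.0, "outgroup_a": 80.0, "outgroup_b": 91.5}
print(blatant_dehumanization(ratings, "ingroup"))
# {'outgroup_a': 12.0, 'outgroup_b': 0.5}
```

The point of the difference score is that it separates blatant dehumanization from generic negativity: a respondent can dislike a group while still rating it as fully human, and the scale registers only the latter gap.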
The policy implications of that last finding are significant. Dehumanization isn’t just a background attitude that shapes how people process information. It’s a dynamic response to threat — one that can be activated quickly, at scale, by real-world events.
A follow-up finding makes this worse. Kteily, Hodson, and Bruneau (2016) documented what they call “meta-dehumanization” — the perception that your own group is being dehumanized by the out-group. That perception, whether accurate or not, predicts reciprocal dehumanization and intergroup aggression. The causal chain closes on itself: dehumanize them, they perceive it, they dehumanize back, you perceive that, and the cycle tightens (Journal of Personality and Social Psychology 110, no. 3 [2016]: 343–370, https://doi.org/10.1037/pspp0000056). In an information environment saturated with claims about which groups are being demeaned, this mechanism deserves considerably more attention than it gets.
The uncomfortable part. There’s a reason discussions of dehumanization tend to stay abstract or moralized. When examined closely, the mechanisms involved are uncomfortable — not because they’re rare, but because they’re ordinary.
Dehumanization reduces ambiguity. It resolves moral tension. It restores a sense of clarity when social reality feels unstable or morally demanding. Linda Skitka’s research on moral conviction shows that once moral stakes are activated, compromise feels like betrayal and complexity feels like weakness (“The Psychology of Moral Conviction,” Social and Personality Psychology Compass 4, no. 4 [2010]: 267–281, https://doi.org/10.1111/j.1751-9004.2010.00272.x). Dehumanization offers an escape from that bind.
Being wrong is often less threatening than being uncertain. Being cruel is often less destabilizing than being morally conflicted. And being confidently simplified is often easier than being honestly complex.
This is the part of psychology we prefer to skip: dehumanization is not aberrant behavior. It’s a context-sensitive response that solves a real problem for the person using it. Certain traits — low dispositional empathy, high social dominance orientation, narcissistic sensitivity to status threat — can amplify the tendency. But none are required. Under the right conditions, ordinary people engage the same mechanisms. The “right conditions” aren’t rare.
Political systems don’t invent this. They activate it. Political entrepreneurs don’t create these psychological shortcuts; they learn to trigger and scale them. Elite framing research shows how identity-consistent cues activate motivated reasoning without requiring detailed argumentation — a signal is enough, if it lands on the right identity substrate (Druckman, “The Implications of Framing Effects for Citizen Competence,” Political Behavior 23, no. 3 [2001]: 225–256, https://doi.org/10.1023/A:1015006907312). Different ideological traditions use different moral vocabularies, different threat frames, different targets. But the underlying cognitive architecture is consistent across them.
Why awareness helps less than we hope. There’s a persistent temptation to believe that naming dehumanization will dissolve it. Sometimes it does. More often it doesn’t — and understanding why is important.
Dehumanization persists not because people fail to recognize others as human in the abstract, but because fully recognizing their humanity is psychologically costly in the specific. It introduces moral friction into situations and identities that increasingly reward speed, clarity, and certainty. Awareness addresses the intellectual framing. It doesn’t address the underlying pressure that generated the dehumanizing response in the first place.
This connects to a theme running through several of my earlier pieces on propaganda and epistemic pressure. The information environments we inhabit now don’t just deliver false beliefs; they deliver conditions — repetition, identity activation, social sorting, threat saturation — that make motivated reasoning almost unavoidable and dehumanization psychologically efficient. The mechanism isn’t new. The scale and speed of activation are.
Resisting dehumanization, then, requires more than better arguments. It requires social, institutional, and technological environments that make moral complexity survivable — that reduce the pressure which makes flattening the other feel like relief.
That’s a much harder problem. And one we really must become more honest about.
This essay is part of a series on propaganda, epistemic pressure, and the psychology of political belief.
Also, if you enjoyed (hated) this piece, please restack or share it with someone you think would appreciate (hate) it. Thanks. :)
From the series:
When Persuasion Turns Into Pressure — On high-pressure information environments and what happens when the conditions for genuine persuasion collapse.
Propaganda Is Normal, Folks — Why propaganda isn’t a pathology of authoritarian systems but a permanent feature of mass politics — and what that means for how we respond to it.
Propaganda, Activism, and Why Beautiful Trouble’s Tactics Work — The structural mechanics behind why certain activist playbooks recur across ideologies, and why institutions keep losing these fights.
When Argument Gives Way to Propaganda — On the line between legitimate persuasion and epistemic coercion, and how that line gets crossed.
The Weak Nuclear Force of Politics — Identity operates on political cognition the way the weak nuclear force operates on matter: invisible at distance, decisive up close.
Trust, Critique, and the Problem of Knowing Together — On epistemic dependence, the limits of individual reason, and what it means to know things collectively in a fractured information environment.

