When Persuasion Turns Into Pressure
Bernays, Ellul, and Recognizing High-Pressure Information Environments Without Losing Your Mind
In my earlier essays (in what I guess has turned into a series…), I argued that propaganda is not an aberration of authoritarian systems but a normal, even inevitable, feature of mass politics:
In “Propaganda Is Normal, Folks” (https://kylesaunders.substack.com/p/propaganda-is-normal-folks), I explained why propaganda is ubiquitous and why democratic systems differ from authoritarian ones not in the presence of persuasion but in whether persuasion remains pluralistic, contestable, and constrained by institutions that still function as referees.
In “Propaganda, Activism, and Why These Beautiful Trouble Tactics Work” (https://kylesaunders.substack.com/p/propaganda-activism-and-why-beautiful), I extended that argument to activism, reframing effective protest tactics as strategies that raise the cost of inaction rather than win debates — and why democratic institutions struggle to assimilate them.
This post extends that thread one step further inward. Instead of focusing on institutions or movements, it asks a simpler but more uncomfortable question: what does it feel like at the individual level when pluralistic persuasion begins to collapse into narrative pressure?
OK, but what do I actually mean by that jargony sentence?
Well, in practice, I am talking about moments when the range of acceptable questions narrows, dissent becomes uncomfortable rather than merely unpopular, and agreement begins to feel less like persuasion and more like a condition for social belonging.
This is sometimes described, all too casually, as being “caught in a psyop.” I want to avoid that language here because it carries conspiratorial freight.
The goal here is not to train people to spot (or even become!) hidden manipulators. It is to recognize the high-pressure information environments that increasingly surround us: contexts in which information flows, social incentives, and psychological dynamics align in ways that narrow what can be said, believed, or questioned without significant cost.
These environments are rarely the product of secret coordination. More often, they just emerge from incentives, cognition, and media dynamics.
In the interests of growing this substack, I’d ask that if you enjoy (hate) this piece, please restack or share it with someone who you think would appreciate (hate) it. Thanks. :)
Propaganda as Environment, Not Deception
Edward Bernays understood elements of this nearly a century ago. In Propaganda (1928), he argued that modern complex societies require elite management of opinion not because the masses are ignorant, but because the sheer scale and complexity of modern life make unmediated deliberation impossible. Someone will frame, someone will simplify, someone will filter — and that someone will shape the terrain in which thinking occurs.
Jacques Ellul pushed this idea further in Propaganda: The Formation of Men’s Attitudes (1965). For Ellul, propaganda is most powerful not when it lies, but when it becomes ambient, meaning when it sets the psychological background in which all thinking takes place. In that case, people feel autonomous even as their range of acceptable thought quietly narrows.
Ellul’s insight here is important because it helps us see propaganda not as discrete messages but as an environmental force—a context in which the mechanics of attention, identity, and incentive interact. Awareness of that environment adds friction to automatic reactions; it doesn’t create immunity.
This idea resonates with Marshall McLuhan’s insight that “the medium is the message.” McLuhan wasn’t just saying that technology alters content; he was saying that media environments shape the very structure of perception and attention. In a world where screens, feeds, and platforms mediate most of our information, the speed, rhythm, and architecture of those media do as much work on cognition as the messages themselves. High-pressure environments—compressed narratives, rapid repetition, and affective amplification—are not just stylistic choices. They are structural features of the media we live in. McLuhan’s point helps explain why narrative convergence, identity anchoring, and moralization feel so compelling: it’s not just what we are being told, but how our nervous systems and social attention are being shaped by the media environments that carry those narratives.
(Here’s a link to my longer piece on McLuhan if you’re interested: https://kylesaunders.substack.com/p/marshall-mcluhan-and-the-political)
This perspective also echoes the argument I made in “We Live Inside the Experiment Now” (https://kylesaunders.substack.com/p/we-live-inside-the-experiment-now), where I described contemporary media environments as akin to a psychological experiment: behavior is shaped not by individual reasoning alone, but by the feedback systems embedded in digital platforms. Just as a Skinner box reinforces certain actions with rewards and penalties, modern attention ecosystems reinforce certain forms of engagement—moral outrage, repetition, emotional intensity—regardless of underlying truth. This helps explain why compressed narratives and identity-anchored reasoning don’t just emerge; they are selected for, amplified, and habituated over time.
What High-Pressure Information Environments Look Like
So, rather than hunting for hidden operations, it is more accurate to talk about patterns that reliably emerge when persuasion gives way to pressure. These patterns are not evidence of bad faith on their own, but they are predictable outcomes of the mechanics discussed earlier: moral simplification, identity activation, cost asymmetries, and compressed narratives.
Narrative Convergence Without New Information
One feature of high-pressure environments is rapid convergence around identical frames, language, and urgency across otherwise independent actors: media outlets, influencers, institutions, celebrities, and experts.
This doesn’t require coordination—it emerges from platform dynamics rewarding repetition, reputational incentives penalizing divergence, shared sources or cues, and simple social proof. What looks like orchestration is often selection pressure from the environment.
Cognitively, this exploits informational social influence: the tendency to infer truth from apparent consensus, especially under uncertainty. As Solomon Asch’s classic conformity experiments showed, individuals often align with group opinion even when they privately doubt it.
Agreement by itself isn’t concerning. But when the same framing shows up everywhere, especially all at once, that is when you should probably slow down.
This is where classic persuasion research becomes relevant. Decades ago, Carl Hovland and his colleagues showed that people rely heavily on source credibility heuristics when evaluating information, especially under uncertainty. Messages are more persuasive when they appear to come from sources that signal expertise (“they know what they’re talking about”) and trustworthiness (“they don’t seem self-interested”), with confidence and social similarity further reinforcing credibility. In high-pressure environments, narrative convergence across many such sources doesn’t just look like agreement — it feels like verification. Credibility is inferred from repetition and apparent alignment, not independently assessed evidence.
Robert Cialdini later systematized these dynamics in his work on influence, emphasizing principles like social proof and authority. When many aligned sources repeat the same frame, people don’t experience it as pressure — they experience it as reality stabilizing. Once individuals publicly commit, consistency pressures make exit psychologically costly. In this way, environments shift from persuasion to constraint without any single actor imposing it.
Identity Anchoring and Motivated Reasoning
Another mechanism we’ve touched on before — and one that directly links to the post on ideology and reasoning — is how beliefs become tied to social identity.
(link to my identity and motivated reasoning piece, if you’re interested: https://kylesaunders.substack.com/p/the-weak-nuclear-force-of-politics)
When issues become identity-loaded, processing information is no longer about updating toward accuracy. Instead, it becomes about protecting group coherence.
Political psychologists describe this as motivated reasoning: the tendency to evaluate evidence in ways that support group commitments rather than challenge them. This isn’t a moral failing or stupidity. It is a functional feature of social cognition: sticking with the group protects social bonds and minimizes internal conflict.
In high-pressure environments where narratives are compressed and stakes feel existential, motivated reasoning operates as the default processing mode. This is also why persuasion through factual argument often fails: the argument isn’t competing with other facts; it’s competing with identity.
Rising Social Costs of Dissent
High-pressure environments also tend to raise the informal costs of dissent — social, reputational, emotional, or professional.
In low-pressure contexts, disagreement is expected and safe. In high-pressure ones, it can attract:
moral shaming,
exclusion from communities,
reputational damage,
or even deplatforming.
These costs are often asymmetric: it is cheap to signal outrage or alignment, and costly to defend nuance or depart from the dominant frame.
Psychologically, this leverages one of the strongest human motivators: the fear of social exclusion. Baumeister and Leary’s “need to belong” hypothesis underscores how much more painful social rejection is than factual correction. When staying inside the group feels safer than questioning the group, internal pressure, not external truth, becomes the governing logic.
Amplification Through Compression and Salience
High-pressure environments favor narrative compression. Symbols, slogans, vivid incidents, and moral binaries crowd out slow-moving contextual explanations. What sticks is what is emotionally legible and attention-grabbing.
This is not a bug of human cognition; it is a feature! It is an adaptation to an environment where information is abundant and attention is scarce. Social media industrialized this propensity, creating feedback loops that reward clarity and affect over complexity and nuance.
Bernays and Ellul both anticipated this environment in different language: Bernays in terms of engineered consent, Ellul in terms of ambient control.
Modern platforms simply accelerate and magnify what was already there.
Why These Patterns Recur
The important point is that these dynamics recur because they exploit stable features of human psychology, not because someone discovered a trick and wants to manipulate everyone.
They exploit:
our sensitivity to social consensus,
our fear of exclusion,
our tendency to moralize,
and our reliance on heuristics under complexity.
They also exploit institutional asymmetries: slower, procedural institutions versus fast, affective media dynamics. This mismatch lies at the heart of why institutions struggle to absorb pressure, not because they lack authority, but because they operate on a different temporal and cognitive logic than high-pressure environments.
Why Democracies Are Especially Vulnerable
Liberal democracies thrive on contestation. They assume disagreement, bias, and persuasion. But they also depend on tacit agreements about institutions that can arbitrate disagreement—courts, legislatures, procedures, and norms—and that remain legitimate even when outcomes are unpopular or contested.
High-pressure information environments strain that capacity. When narratives lock in and escalation accelerates, institutions face a familiar dilemma: enforce procedures and appear slow or unresponsive, or bypass those procedures and weaken the very constraints that sustain trust. Either move carries legitimacy costs.
This is where broader structural changes matter. As automation and artificial intelligence reshape work, status, and daily life, more people find themselves with weaker institutional attachment and fewer material or social stakes in existing arrangements. As Peter Turchin has argued, when the constraints that normally dampen escalation erode, political volatility increases, not because people are irrational, but because restraint becomes harder to sustain.
In that environment, pressure politics becomes cheaper and more effective. Procedural patience becomes harder to justify. And systems designed for episodic contestation are pulled toward permanent mobilization.
Awareness Without Paranoia
Recognizing these patterns does not mean assuming manipulation, bad faith, or hidden coordination. Democracies require messaging, leadership, and persuasion to function at all. False positives are easy, and overinterpretation can be just as distorting as naïveté.
Ellul was explicit about this. The real danger is not propaganda itself, but the temptation to use “propaganda” as a catch-all explanation for everything we dislike or disagree with. When every disagreement is treated as manipulation, analysis gives way to suspicion, and judgment gets replaced by reflex.
That’s why the goal of awareness is not immunity. It’s pause.
Deliberately slowing yourself down—even slightly—interrupts automatic reactions. It creates room to think before responding, and it helps keep disagreement alive rather than collapsing instantly into acceptance or rejection.
When you notice the same framing appearing everywhere at once, it’s worth asking what incentives or media dynamics might be pushing that alignment before assuming bad intent. When disagreement suddenly carries social or reputational costs, it helps to separate ordinary social norms from real suppression. And when moral language ramps up quickly, the most important thing to watch may be what that does to your own reasoning, not just to other people’s.
None of this will make you neutral. None of it will make you unbiased. But it can slow the process just enough to notice choices, tradeoffs, and uncertainties that would otherwise slide past unnoticed.
Full Circle
This brings us back to the starting point of this series.
Propaganda works because it hacks psychology. Activism works for the same reason. Neither is inherently democratic or authoritarian.
Liberal democracy survives not by eliminating these forces, but by keeping them contestable through counter-narratives, institutional referees, and procedures that remain binding even when outcomes are disputed.
High-pressure information environments are dangerous not because they deceive, but because they narrow the space in which disagreement can safely occur.
Recognizing that narrowing, in real time, is not a cure. But it is one of the few ways a person, and indeed a liberal society, can preserve space for deliberation, pluralism, and institutional legitimacy.
If you enjoyed (hated) this piece, please restack or share it with someone who you think would appreciate (hate) it. Thanks. :)
Recent Developments
A quick addendum, for readers interested in newer empirical work, especially in the context of AI and algorithmic media.
Generative AI and the scaling of misinformation
A 2025 scoping review in AI & Society synthesizes empirical work on how generative AI is being used to generate, detect, mitigate, and study misinformation, highlighting the ways synthetic text can be produced at scale and how that changes the practical landscape of information quality.
https://link.springer.com/article/10.1007/s00146-025-02620-3
Countermeasures matter, but debunking still matters most
A 2025 article in Royal Society Open Science tests interventions against AI-generated misinformation, comparing pre-emptive source discreditation (“don’t trust AI as a source”) with debunking. The key takeaway: both can reduce misinformation’s effects, but the strongest reductions come from well-designed corrective information—an important reminder that high-pressure environments aren’t destiny.
https://royalsocietypublishing.org/rsos/article/12/6/242148/235451/Countering-AI-generated-misinformation-with-pre
https://royalsocietypublishing.org/doi/abs/10.1098/rsos.242148
Friction via feed design is not hypothetical
One of the most interesting recent findings is that algorithmic ranking itself can raise or lower affective polarization. A Stanford-covered project (based on a Science paper) used a browser extension to rerank content expressing “antidemocratic attitudes and partisan animosity” on X, producing measurable shifts in affective polarization. This is concrete evidence for the structural point above: changing the informational environment changes downstream emotional polarization.
https://news.stanford.edu/stories/2025/11/social-media-tool-polarization-user-control-research
https://culture-emotion-lab.stanford.edu/sites/culture_emotion_lab/files/media/file/science.adu5584.pdf
Digital propaganda as an algorithmic ecology
Work on “neo-propaganda” emphasizes that contemporary propaganda is often less about overt censorship than about selection, repetition, and emotionally optimized distribution—an informational ecology more than a single message.
https://www.researchgate.net/publication/391544545_Digital_Propaganda_Techniques_The_Specificities_of_Neo-Propaganda
Trust as the background condition
A broader compilation on trust under threat in digital society provides a useful umbrella frame: the core problem is not just “falsehood,” but the erosion of contestable procedures for credibility—exactly the place where pluralistic persuasion can collapse into narrative pressure.
https://www.pedocs.de/volltexte/2025/33273/pdf/Orban_et_al_2024_Trust_under_threat.pdf