Artificial Intelligence, Elections, and (What I Think Are) the Wrong Fears About 2028
AI Won’t Steal Elections. It Will Change What Elections Are.
If you enjoy (hate) this piece, please restack or share it with someone who you think would appreciate (hate) it. Thanks. :)
There’s no shortage of writing about AI and elections right now—serious, alarmist, or viral ‘memetic warfare’ rhetoric treating politics as an arms race of content and manipulation. Most of it isn’t wrong, but in my estimation, much of it focuses on the wrong layer.
(Also, I’ve been thinking about electoral trust for a while in my work, so this has been on my mind a lot. Here are some recent academic pubs of mine, if anyone is interested:
Ideological Asymmetries of Trust in Elections and Non-Voting Political Participation; Fitz & Saunders, 2026 · Party Politics
Losers’ Conspiracy: Elections and Conspiracism; Miller, Farhart & Saunders, 2025 · Political Behavior
Distrusting the Process: Electoral Trust, Operational Ideology, and Non-Voting Political Participation in the 2020 American Electorate; Fitz & Saunders, 2024 · Public Opinion Quarterly)
If the 2024 election was about whether AI could matter, and the 2026 cycle becomes a kind of dress rehearsal for normalization, then 2028 is where the deeper consequences really show up—not just for persuasion, but for legitimacy, authority, and how governance itself operates inside optimized environments.
This piece builds directly on some threads I’ve been developing recently:
Behavioral conditioning and latency collapse, in “We Live Inside the Experiment Now”: https://kylesaunders.substack.com/p/we-live-inside-the-experiment-now
Media as environment rather than content, via Marshall McLuhan:
https://kylesaunders.substack.com/p/marshall-mcluhan-and-the-political
Propaganda as structure, not ideology, in earlier posts on persuasion and power: e.g., https://kylesaunders.substack.com/p/propaganda-is-normal-folks
The election context makes the underlying argument harder to ignore.
The Four Buckets of AI-and-Elections Writing
Most existing work on AI and elections falls into four recognizable buckets.
1) Policy and governance alarmism
This is the “deepfakes and guardrails” literature: serious, careful, and institutionally focused.
Examples include the Brennan Center’s work on regulating deepfakes and synthetic media in politics
https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
Stanford GSB’s recommendations on preparing generative AI for elections
https://www.gsb.stanford.edu/faculty-research/publications/preparing-generative-ai-2024-election-recommendations-best-practices
And RAND analyses on generative AI and democratic resilience
https://www.rand.org/content/dam/rand/pubs/perspectives/PEA3000/PEA3073-1/RAND_PEA3073-1.pdf
This work treats AI as a threat to content integrity—important, but it assumes the core problem is citizens seeing false things.
That assumption seems increasingly fragile to me.
2) Empirical caution: “we don’t actually know how much this matters yet”
Another strand—often from Brookings or similar institutions—pushes back on panic, emphasizing that we still lack strong causal evidence linking AI-generated content directly to large-scale belief change or vote switching.
I agree this caution is responsible. Attitudes are sticky, persuasion effects are small, yadda yadda yadda. But this work defines ‘impact’ too narrowly, as belief change or vote switching.
The more consequential shift may be environmental rather than attitudinal: faster feedback, tighter stimulus–response loops, adaptive targeting, and continuous testing of behavior—even when beliefs remain stable. These systems shape how people engage, react, withdraw, or participate, not just what they think.
Seen this way, the absence of dramatic belief change does not imply minimal effect. It may instead indicate that influence is operating at a different layer—one that existing empirical tools are only beginning to measure.
Recent 2025–26 research challenges this caution, showing that conversational chatbots can shift attitudes roughly four times more than conventional ads, by up to 10+ points in experiments across the U.S., Canada, and Poland, even when deploying misinformation.
Nature reports on voter sway:
https://www.nature.com/articles/d41586-025-03975-9
Science on conversational AI levers:
https://news.cornell.edu/stories/2025/12/ai-chatbots-can-effectively-sway-voters-either-direction
MIT Technology Review on biased models spreading falsehoods:
https://www.technologyreview.com/2025/12/04/1128824/ai-chatbots-can-sway-voters-better-than-political-advertisements
And Scientific American on new fears for elections:
https://www.scientificamerican.com/article/ai-chatbots-shown-to-sway-voters-raising-new-fears-about-election-influence
3) Disinformation and foreign-interference tracking
This bucket focuses on coordinated inauthentic behavior, state-sponsored influence campaigns, and adversarial efforts to disrupt democratic processes—especially through social media, synthetic accounts, and information laundering.
Much of this work has been careful, methodical, and genuinely important. It documents how foreign actors probe electoral systems, test narratives, and exploit existing social fractures rather than inventing new ones.
Examples include the Digital Forensic Research Lab’s ongoing election monitoring
https://dfrlab.org/elections-2024/
and Atlantic Council reporting on foreign meddling and information operations targeting U.S. elections
https://www.atlanticcouncil.org/content-series/fastthinking/what-to-know-about-foreign-meddling-in-the-us-election/
This literature has helped clarify that influence campaigns rarely aim to persuade wholesale. Instead, they amplify distrust, polarize identities, and increase uncertainty about what is real, who can be trusted, and whether participation is even worthwhile.
But domestic political environments already behave like continuous experiments. Platforms, campaigns, and media ecosystems routinely optimize for engagement, affect, and response speed using the same behavioral signals that foreign actors exploit. In that sense, adversarial interference is less an alien intrusion than a stress test of systems we already run on ourselves.
Seen this way, foreign influence operations matter—but they are best understood as accelerants and probes, not the root cause. The deeper issue is that optimized environments reward responsiveness over reflection and predictability over deliberation, regardless of who is pulling the lever.
4) Viral “memetic warfare” takes
This is the fastest-moving and loudest bucket—and the least analytically grounded.
These are the arguments circulating on X, Bluesky, Substack, and YouTube that frame AI in elections as an arms race of memes, virality, and narrative dominance. The basic claim is that AI will allow campaigns, movements, or foreign actors to flood the zone with hyper-targeted content so fast and so cheaply that traditional political communication simply can’t compete.
The rhetoric is vivid: AI as machine guns and airplanes, democracy as Napoleonic infantry, the whole thing a tragic walkover, like the scene where Terminator models move through the resistance like a knife through warm butter. The implication, of course, is that whoever masters memetic throughput first will overwhelm opponents by sheer volume and emotional force.
And sure, there’s a grain of truth here. AI does lower the cost of content production, enable rapid iteration, and allow messages to be tailored at scale. But the memetic warfare frame mistakes means for mechanism.
It assumes the central political problem is still persuasion—convincing people to adopt particular beliefs or narratives. In that sense, it’s a content-centric model: better memes, better targeting, faster cycles.
What it misses is that modern political platforms already operate as adaptive environments where behavior, not belief, is the primary signal. The system does not reward whoever makes the most compelling argument. It rewards whatever produces engagement, reduces latency, and stabilizes predictable response patterns—whether those responses are agreement, outrage, mockery, or fatigue.
From that perspective, “memetic warfare” overstates novelty and understates the underlying structure. We are not moving from slow persuasion to fast persuasion. We are moving from persuasion to continuous behavioral testing.
Under the hood, it’s not just anger-inducing messages; it’s constant testing—what sustains anger, calms it, or drives disengagement. Emotional valence matters less than responsiveness.
You can already see early versions of this logic in contemporary political campaign practice. In the 2024 cycle, several campaigns and advocacy groups quietly experimented with AI-assisted ad testing and message sequencing—not to persuade undecided voters in the classic sense, but to identify which emotional frames produced faster engagement, longer dwell time, or reduced drop-off among specific micro-audiences. Messages were rotated and refined based on real-time behavioral signals—scroll depth, pauses, shares, and disengagement—often without any assumption that the “best-performing” message was the most informative or even the most convincing. What mattered was not belief change, but response optimization: who reacts, how quickly, and in what direction.
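To make that optimization logic concrete, here is a minimal sketch in Python of bandit-style message rotation. The message names, engagement numbers, and the simulated audience are all invented for illustration; the point is only that the optimizer never asks whether a message is true or convincing, just which one reliably produces the strongest behavioral response.

```python
import random

# Toy epsilon-greedy bandit over message variants.
# The "reward" is any behavioral signal (dwell time, shares,
# reduced drop-off) -- never truthfulness or persuasiveness.

MESSAGES = ["outrage_frame", "reassurance_frame", "identity_frame"]

class MessageOptimizer:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {m: 0 for m in MESSAGES}
        self.value = {m: 0.0 for m in MESSAGES}  # running mean engagement

    def choose(self):
        # Mostly exploit the best-engaging frame, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(MESSAGES)
        return max(MESSAGES, key=lambda m: self.value[m])

    def update(self, message, engagement):
        # Incremental mean update from observed behavior.
        self.counts[message] += 1
        n = self.counts[message]
        self.value[message] += (engagement - self.value[message]) / n

# Hypothetical audience: one frame happens to "work" better on average.
def simulated_engagement(message):
    base = {"outrage_frame": 0.6, "reassurance_frame": 0.4,
            "identity_frame": 0.5}
    return base[message] + random.gauss(0, 0.1)

random.seed(0)
opt = MessageOptimizer()
for _ in range(2000):
    m = opt.choose()
    opt.update(m, simulated_engagement(m))

# The optimizer converges on whatever maximizes response, not truth.
best = max(MESSAGES, key=lambda m: opt.value[m])
print(best)
```

Note what the loop never contains: any representation of belief, accuracy, or persuasion. Swap the engagement signal for “reduced drop-off” or “faster replies” and the same machinery optimizes demobilization or speed instead.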
This is where the arms-race metaphor breaks down a bit. In a system optimized for feedback, escalation is not always rewarded. Sometimes dampening is more effective than outrage. Sometimes silence outperforms provocation. Sometimes the most “successful” intervention is not viral content at all, but friction, exhaustion, or withdrawal.
The danger, then, is not that AI will overwhelm democracy with better propaganda. It’s that political behavior itself becomes the object of optimization—across anger, reassurance, mobilization, demobilization, and apathy—without any stable normative reference point.
Seen this way, memetic warfare is a symptom, not the disease. It captures the surface drama of faster content, but not the deeper shift toward environments that continuously adapt to shape how people respond, not what they believe.
In 2026, this could plausibly manifest as partisan imbalances, with GOP campaigns leveraging AI more aggressively—think Trump’s AI-generated memes and executive orders, or Musk’s Grok models tilting conservative—while Democrats play catch-up or push back reactively. This gap risks widening, giving one side an edge in messaging and optimization.
TIME has covered this emerging divide:
https://time.com/7321098/ai-2026-midterm-elections
And POLITICO notes the polling splits on data centers and AI support:
https://www.politico.com/news/2026/02/06/tech-industry-ai-data-centers-politics-00762348
The Missing Frame: Elections as Optimized Environments
What’s largely absent across all four buckets is a structural argument: elections are increasingly conducted inside adaptive behavioral systems that do not primarily ask “is this true?” or even “is this persuasive?”
All they ask, even somewhat dispassionately, is: does this work?
“Work” means sustaining attention, intensifying affect, shortening latency, shaping turnout, dampening volatility, or producing predictable responses. AI accelerates this logic—not because it lies better, but because it optimizes faster.
This is where the Skinner and McLuhan threads converge.
Skinner was the one who helped us see that behavior responds to reinforcement schedules. McLuhan taught us that environments shape behavior before arguments are heard. Modern elections now operate where those insights overlap.
When Optimization Becomes Visible, Authority Loses Immunity
Historically, authority relied on large asymmetries: discretion, lag, opacity, and the ability to act without being instantly scored.
Optimized environments erode those asymmetries and, over time, make it increasingly difficult to operate outside Skinner’s Box—especially for actors whose authority depends on informational advantages, delayed judgment, or controlled visibility, whether in governance, media, or markets.
Once optimization becomes visible, authority can no longer plausibly stand outside the loop. Legitimacy stops being something you possess and becomes something you temporarily survive. Governance shifts from discretion to exposure management—legible enough to function, opaque enough to endure.
This is not simply persuasion getting faster. It’s a structural break.
In earlier systems, feedback arrived episodically. Today, feedback is continuous, ambient, and scoreable. Authority doesn’t fail because people stop believing. It fails because there is no durable place left from which belief can stabilize.
2026 as a “Soft Open,” 2028 as Stress Test
If 2024 was about AI awareness and 2026 is about AI normalization, then 2028 is where the system-level effects of AI really show up.
One emerging concern is synthetic participation—AI-driven bot swarms simulating consensus or conflict. The Guardian has already warned about this trajectory
https://www.theguardian.com/technology/2026/jan/22/experts-warn-of-threat-to-democracy-by-ai-bot-swarms-infesting-social-media
Another vector is “model grooming” or “LLM poisoning”—flooding the informational environment so future AI systems reproduce particular narratives. The Washington Post has reported on this risk: https://www.washingtonpost.com/technology/2025/04/17/llm-poisoning-grooming-chatbots-russia/
What matters is not just persuasion, but substrate control.
Watch AI super PACs shaping policy pre-emptively, with groups like Leading the Future (backed by Andreessen Horowitz and OpenAI’s Greg Brockman) pledging $100M+ to elect pro-AI candidates and block state regulations, while counters like Public First push for safety. This exemplifies optimizing governance at the elite level.
Axios details the rival PACs:
https://www.axios.com/2026/02/13/super-pacs-ai-candidate
WIRED on the industry’s midterm playbook:
https://www.wired.com/story/ai-super-pacs-trying-to-influence-midterms
And CNBC on the crypto-inspired lobbying:
https://www.cnbc.com/2026/01/28/ai-laws-tech-lobyying-super-pac-midterm-elections.html
Beyond the Liar’s Dividend
The “liar’s dividend” is now familiar: the easier it is to fake evidence, the easier it becomes to dismiss real evidence. Brookings summarizes this clearly
https://www.brookings.edu/articles/watch-out-for-false-claims-of-deepfakes-and-actual-deepfakes-this-election-year/
But a deeper problem is emerging alongside it. When citizens assume everything is optimized, sincerity itself becomes ambiguous. Trust erodes not because people are fooled, but because every signal feels instrumental.
This returns us to one of my favorite topics of late, which is latency—the interval between stimulus and response. The sober second thought before reacting.
When latency collapses, citizens become more predictable. Predictability is the currency of optimized environments. Legitimacy becomes provisional. Authority becomes contingent.
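The latency-to-predictability link can be shown with a toy model in Python. Everything here is a stated assumption, not data: I model fast, reflexive responses as concentrated on one habitual reaction and slow, deliberate responses as spread across options, then measure predictability as the Shannon entropy of the observed response distribution (lower entropy means a more predictable, and thus more optimizable, audience).

```python
import math
import random
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of an empirical response distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

RESPONSES = ["share", "reply", "mock", "ignore"]

def respond(latency_seconds):
    # Modeling assumption: a fast, reflexive response is dominated by a
    # single habitual reaction; a slower response spreads across options.
    if latency_seconds < 2:
        weights = [0.85, 0.05, 0.05, 0.05]  # near-deterministic reflex
    else:
        weights = [0.25, 0.25, 0.25, 0.25]  # deliberation restores variety
    return random.choices(RESPONSES, weights)[0]

random.seed(1)
fast = [respond(0.5) for _ in range(5000)]   # collapsed latency
slow = [respond(10.0) for _ in range(5000)]  # sober second thought

print(f"fast-response entropy: {entropy(fast):.2f} bits")
print(f"slow-response entropy: {entropy(slow):.2f} bits")
# Lower entropy = more predictable = more valuable to an optimizer.
```

Under these assumptions the fast-response distribution comes out well under one bit of entropy versus roughly two bits for the slow one, which is the whole argument in miniature: an environment that shortens latency is, in effect, buying down the entropy of its audience.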
What to Watch Going Forward
So, if I’m right, the most important indicators heading into 2026 and 2028 won’t just be deepfakes or misinformation scandals. They’ll be structural:
Shorter stimulus-response cycles
More adaptive personalization and sequencing
Increased synthetic participation and engagement
A growing sense that “everything is a test”
Institutional drift toward survivability rather than consent
There is also a material backlash already hitting swing states: 7% power-price hikes from data centers, job fears, and local resistance are turning AI into a populist flashpoint heading into 2026, fueling bipartisan calls for tech to “pay their fair share.” Optimization erodes trust when it starts to feel like inequality.
Reuters warns of midterm risks:
https://www.reuters.com/commentary/breakingviews/us-midterm-elections-are-ripe-ai-backlash-2026-01-22
TIME on the brewing revolt:
https://time.com/7371825/trump-data-center-ai-backlash-ai-america-china
AP on rising energy costs and political agreement:
https://apnews.com/article/data-center-artificial-intelligence-electricity-costs-rise-a6cdf9aa09d1cd3dbf82750430c15373
And POLITICO on how an AI bust could reshape 2028:
https://www.politico.com/newsletters/forecast/2026/02/11/how-an-ai-bust-could-blow-up-the-2028-race-00776629
None of this guarantees democratic collapse. But it raises the price of democratic citizenship.
By 2028, the central challenge may not be whether voters believe false things, but whether elections still provide enough latency—enough space between stimulus and response—for citizens and institutions to remain self-governing inside environments optimized to make them predictable.
That’s the problem most AI-and-elections writing hasn’t quite caught up to yet.
If you enjoyed (hated) this piece, please restack or share it with someone who you think would appreciate (hate) it. Thanks. :)

