We Live Inside the Experiment Now
From the behavioral conditioning of Skinner’s Walden Two to today’s algorithmic governance
I read Walden Two for the first time as an undergraduate. I found it fascinating: dystopian, science-fiction-y in a way, but genuinely intriguing too. It was probably one of the first times, if I'm honest, that I really started thinking in an academic sense about why people do what they do.
In that book, B. F. Skinner imagined a society organized around behavioral (“operant”) conditioning—environments carefully structured to produce cooperation, stability, and well-being without appealing to inner virtue, civic duty, or even freedom as it is usually understood. Human behavior, Skinner argued, could be shaped far more reliably by adjusting reinforcement schedules than by persuasion, argument, or moral exhortation.
“A person does not act upon the world, the world acts upon him.”
— B. F. Skinner, 1953 (Science and Human Behavior)
The book was widely received as dystopia, and for good reason.
What’s striking now is not how chilling Skinner’s vision was, but how narrow and contained it seems in retrospect.
Skinner imagined controlled environments. We now inhabit adaptive ones.
From Conditioning to Continuous Testing
The classic Skinner Box was static. The contingencies were known. The subject adapted, but the environment itself remained fixed.
Ha! If only. Safe to say that is no longer our condition.
Modern informational environments update continuously. Every scroll, pause, click, share, recoil, and refusal is registered as signal. Not argument. Not belief. Behavior.
The governing question is no longer Is this persuasive?
It is: Does this produce the desired response? Iterated stimuli, over and over.
Which response? Any response that:
sustains attention,
intensifies affect,
shortens latency,
or deepens engagement.
In this sense, we are not living under propaganda systems in the traditional meaning of the term. We are living inside recursive A/B tests—millions of micro-experiments run in parallel, optimized in real time, and largely opaque to the subjects generating the data.
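To make that concrete, here is a toy sketch of the loop in Python, with every name and number invented for illustration; no real platform works exactly like this, but the logic of an engagement-maximizing A/B test is roughly this simple. Nothing in it asks whether a framing is accurate or persuasive, only which framing gets reacted to.

```python
# Toy sketch of an engagement-optimizing A/B loop (illustrative only; not any
# platform's actual code). Each "variant" frames the same item differently; the
# loop never asks which framing is true, only which one produces a reaction.
import random

variants = ["neutral summary", "outrage framing", "fear framing"]
shows = {v: 0 for v in variants}
reactions = {v: 0 for v in variants}

def simulated_user_reaction(variant: str) -> bool:
    """Stand-in for a real audience: invented rates, provocative framings react more."""
    base_rate = {"neutral summary": 0.02, "outrage framing": 0.08, "fear framing": 0.06}
    return random.random() < base_rate[variant]

for impression in range(100_000):
    # Epsilon-greedy: mostly exploit the best-performing framing, occasionally explore.
    if impression < len(variants) or random.random() < 0.1:
        choice = random.choice(variants)
    else:
        choice = max(variants, key=lambda v: reactions[v] / max(shows[v], 1))
    shows[choice] += 1
    reactions[choice] += simulated_user_reaction(choice)

for v in variants:
    print(v, shows[v], round(reactions[v] / max(shows[v], 1), 4))
```

Run it and nearly all impressions end up flowing to whichever framing reacts best. Truth never enters the objective; it simply is not a variable the loop can see.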
For example, psychologists Dominic Packer and Jay Van Bavel recently observed that social platforms “reward engagement, and outrage generates more engagement than nuance,” creating incentives to frame information in emotionally provocative ways rather than to inform.
This is not persuasion per se. It is behavioral optimization.
And optimization has a distinctive political consequence: it rewards speed.
Latency Is the First Casualty
Even way back then, Skinner understood something that remains underappreciated: conditioning works best when the interval between stimulus and reward collapses.
Modern platforms are extraordinarily good at this.
The faster the system can convert
stimulus → response → reinforcement
the less room there is for reflection, doubt, or contextualization. Latency becomes friction. Friction becomes inefficiency. And inefficiency is engineered out.
“Immediate consequences are far more effective than delayed consequences.”
— B. F. Skinner, 1953 (Science and Human Behavior)
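One crude way to see why latency gets engineered out: in most formal treatments of reinforcement, the shaping power of a reward decays as the gap between response and consequence grows. The exponential form and the constants below are assumptions chosen purely for illustration; the shape of the curve, not the numbers, is the point.

```python
# Toy model of delay-discounted reinforcement (invented constants, illustrative only).
# The longer the gap between response and reward, the less the reward shapes behavior.
import math

def reinforcement_effect(reward: float, delay_seconds: float, tau: float = 30.0) -> float:
    """Exponential discounting: effect shrinks as the stimulus-reward interval grows."""
    return reward * math.exp(-delay_seconds / tau)

for delay in [0.5, 5, 60, 3600]:  # instant notification vs. next-hour reply
    print(f"delay={delay:>6}s  effect={reinforcement_effect(1.0, delay):.4f}")
```

Seen from the optimizer's side, every second of delay is value leaking out of the loop, which helps explain why autoplay, push notifications, and infinite scroll all push that interval toward zero.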
From a democratic perspective, this is not a trivial shift.
Deliberation depends on delay.
Judgment requires slack.
Trust requires time.
When latency—the interval between stimulus and response, the space in which a person can still decide not to react on cue—collapses, citizens are not simply more emotional (which they are); they become more predictable.
And predictability is the currency of optimized environments.
This helps explain why the most rewarded political behaviors today are not careful arguments or provisional claims, but rapid expressions of certainty. Speed becomes a proxy for conviction. Conviction becomes a proxy for authenticity. Authenticity becomes a proxy for trustworthiness.
Which brings us to the second, quieter consequence.
Trust Erodes When Everything Feels Like a Test
Political trust depends on a background assumption that communication is not purely instrumental—that messages are offered in good faith, not merely deployed to elicit response (e.g., Hetherington 2005; Levi 1998 if you’re interested).
A continuously optimized environment undermines that assumption.
When citizens sense, even dimly, that:
outrage is being amplified because it performs well,
fear is being surfaced because it spreads faster than reassurance,
moral clarity is being rewarded regardless of accuracy,
they rationally begin to treat all communication as potentially strategic.
The result is not simply cynicism. It is epistemic withdrawal.
If every signal might be an experiment, then no signal deserves unqualified trust. People do not merely distrust elites or institutions; they begin to distrust the communicative environment itself. This is one reason contemporary distrust feels so generalized—and so difficult to reverse. It is not anchored to a single actor or failure. It is ambient.
Importantly, this erosion of trust does not require falsehood. Optimization alone is sufficient.
McLuhan, Skinner, and the Loss of Deliberative Space
Marshall McLuhan (who I just wrote a piece on last week, which of course led to this piece, funny how that works…) famously argued that media operate as environments rather than neutral conduits for messages (for example, Understanding Media, 1964). Skinner made a parallel claim about behavior: change the environment, and behavior will follow without argument.
“The medium is the message.”
— Marshall McLuhan, 1964
Together, they point toward a troubling synthesis.
Politics increasingly unfolds after conditioning has occurred.
By the time citizens encounter arguments, preferences have already been shaped by reinforcement histories they did not choose and cannot easily inspect.
This reframes familiar pathologies—polarization, motivated reasoning, distrust—not as failures of civic virtue or informational literacy, but as adaptive responses to an environment that rewards immediacy, certainty, and affective intensity.
In such an environment, slowing down feels like weakness. Hesitation reads as disengagement. Nuance underperforms.
What Skills Are Even Available?
There is no immunity to this condition. The fantasy of opting out through awareness alone is itself a relic of an earlier media ecology.
What is possible is resistance through friction.
That begins with stimulus literacy: learning to recognize when one is being tested rather than informed.
It requires cultivating latency tolerance: the capacity to delay response long enough for reinforcement loops to weaken. Delay, here, is not passivity. It is agency exercised against the grain of the system.
It depends on affective regulation, not as moral self-improvement, but as a way of avoiding behavioral legibility. A subject whose emotional responses are tightly coupled to stimuli is easier to optimize than one whose reactions are uneven, delayed, or selective.
It also requires loosening the grip of identity where possible. Strong identities provide clean reinforcement hooks. This does not mean abandoning commitments. It means resisting the conversion of identity into a fully programmable interface.
Finally, it demands epistemic humility—not as virtue signaling, but as strategy. In an environment that exploits certainty, doubt introduces noise. Noise degrades optimization.
The Deeper Problem (with AI)
Skinner believed the ethical danger lay in who designed the environment.
Our problem, unfortunately, is more structural than that.
The environment now designs itself, folks, through feedback loops that privilege engagement over understanding, speed over judgment, and response over reflection. No single actor controls it. No clear locus of responsibility exists.
AI systems do not merely run experiments faster; they learn which kinds of humans are most responsive to which stimuli, and they generalize those patterns across populations. Where earlier systems optimized content, AI optimizes interaction. Where earlier platforms tested messages, AI tests you, continuously, adaptively, and at scale.
This matters politically because AI collapses latency even further.
When systems can:
anticipate responses before they are consciously formed,
personalize stimuli in real time,
and adjust reinforcement schedules dynamically,
the space for deliberation shrinks again. Reflection becomes a lagging indicator. Trust becomes harder to sustain because the environment feels increasingly strategic, even when no deception is involved.
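To make "AI tests you" a bit more concrete, here is a minimal sketch, assuming nothing about any real system: the same engagement loop as before, but keyed to an individual rather than to the population, so the reinforcement schedule adapts to each person separately. The function names, user ids, and response rates are all hypothetical.

```python
# Minimal sketch of per-user adaptive optimization (hypothetical; not a real system's API).
# Instead of one global winner, the loop learns a separate "best stimulus" per person.
import random
from collections import defaultdict

stimuli = ["outrage", "fear", "flattery", "novelty"]

# stats[user][stimulus] = [times_shown, times_reacted]
stats = defaultdict(lambda: {s: [0, 0] for s in stimuli})

def choose_stimulus(user_id: str, epsilon: float = 0.1) -> str:
    """Exploit whatever this particular person has responded to; occasionally explore."""
    user_stats = stats[user_id]
    if random.random() < epsilon:
        return random.choice(stimuli)
    return max(stimuli, key=lambda s: user_stats[s][1] / max(user_stats[s][0], 1))

def record_response(user_id: str, stimulus: str, reacted: bool) -> None:
    """Update this person's reinforcement history; the schedule adapts per user."""
    stats[user_id][stimulus][0] += 1
    stats[user_id][stimulus][1] += int(reacted)

# Tiny demo: two simulated users with different (invented) susceptibilities.
susceptibility = {"user_a": "outrage", "user_b": "novelty"}
for _ in range(5_000):
    for user, weakness in susceptibility.items():
        s = choose_stimulus(user)
        record_response(user, s, reacted=(random.random() < (0.2 if s == weakness else 0.02)))

for user in susceptibility:
    print(user, choose_stimulus(user, epsilon=0.0))  # each typically converges to a different stimulus
```

Two people with different reinforcement histories end up receiving systematically different stimuli. That is what it means to optimize interaction rather than content.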
What This Leaves Us With
Artificial intelligence does not doom democracy. But it raises the price of real democratic citizenship.
Remaining self-governing under these conditions requires capacities that are no longer environmentally supported: the ability to delay response, tolerate uncertainty, regulate affect, extend trust selectively but not cynically, and resist converting identity into a control surface.
These are not virtues the system rewards. They are capacities the system quietly taxes.
Which brings us back to Skinner’s original discomfort—and, frankly, what should be our discomfort with Skinner.
“The question is not whether we will control human behavior, but how.”
— B. F. Skinner, 1971 (Beyond Freedom and Dignity)
Skinner worried about who would design the environment. Today, the more unsettling problem is that the tools now exist to operationalize his insights at civilizational scale—and not only for benign purposes.
Historically, authoritarian systems relied on blunt instruments: propaganda, coercion, surveillance, and fear. Even at their most ambitious, they struggled with lag, noise, and resistance. Mao, to pick just one example, had ideology, mass rallies, informants, and violence, but he still lacked real-time feedback, individualized reinforcement, and adaptive behavioral optimization.
Imagine that system with modern data, continuous experimentation, AI-driven personalization, and predictive behavioral models.
That is not science fiction. It is simply an authoritarian regime with better tools.
What changes is not just efficiency, but subtlety. Control no longer depends primarily on overt repression. It can operate through incentives, friction, visibility, and latency collapse—nudging populations toward compliance, conformity, and silence without ever needing to announce itself.
In such a world, freedom does not disappear overnight. It just erodes.
It erodes as the interval between stimulus and response shrinks.
It erodes as trusting becomes irrational.
It erodes as hesitation is punished and certainty is rewarded.
Freedom survives, if it survives at all, as latency—the shrinking interval in which a person can still choose not to respond on cue.
That interval has not vanished. But it must be chosen.
And in an AI-mediated political environment increasingly compatible with authoritarian ambition, preserving that latency may be the central task of democratic life in the near term.

