Why Higher Education’s AI Backlash Reveals Some of Its Deepest Cracks
In some ways, AI isn’t the real threat. It’s just the mirror.
It seems increasingly clear that AI is going to radically transform the nature of learning and credentialing in a way that many of our educational institutions are failing to adapt to.
Knowing that you need to adapt is pretty straightforward. How to do it though is the real challenge, something we all need to grapple with soon. Very soon.
-a combination of Dan Williams and Jimmy Alfonso Licon’s posts on Substack Notes yesterday
Bring up artificial intelligence in a faculty meeting and you can often feel the temperature change.
The problem, of course, is that most in academia know we must adapt relatively quickly to the future that is in front of us. Yet academia is notoriously slow to adapt, as though it is in its DNA, especially to exogenous forces reshaping the world it inhabits.
In that meeting, someone raises a hand to ask a cautious question—maybe about allowing AI-assisted drafting, or redesigning assignments so that use is transparent rather than prohibited. The room goes quiet. A few people shift in their chairs. Someone invokes “academic integrity.” Another worries aloud about students “not learning to think.” The conversation tightens, and before long, the implicit consensus is clear: this is not a direction we’re going.
Not always, of course—some rooms spark real dialogue—but too often, the chill sets in fast.
And what’s striking isn’t disagreement. It’s how quickly genuine discussion seems to shut down.
(And to be clear, I am caricaturing these discussions here, not pointing at my colleagues at my institution! But they are emblematic of online discussions and of tales heard from colleagues across academe.)
What I am trying to get at is that AI has become one of those topics in higher education where reasoning feels downstream of something else—something social, reputational, and institutional. By the time evidence enters the room, the outcome is often already decided.
That initial reaction is usually explained as a debate about pedagogy or cheating. And, sure, those concerns are real. But they don’t fully explain either the intensity of the backlash or its distinctive form.
I’d argue that what’s happening instead is a collision between a destabilizing technology and an institution already under strain, and all of it is filtered through motivated reasoning, identity threat, and the group dynamics that determine which ideas survive discussion and which die in the room.
AI isn’t just challenging how students complete assignments. In my opinion, it’s challenging how higher education understands itself.
If you enjoy (hate) this piece, please restack or share it with someone who you think would appreciate (hate) it. Thanks. :)
This Isn’t a Pedagogy Debate (At Least Not at First)
If AI resistance were primarily about pedagogy, we would expect something different.
We’d expect careful disagreement across disciplines, experimentation followed by assessment, and open acknowledgment of mixed evidence. Instead, what we often see are blanket prohibitions, symbolic enforcement, and moralized language about “real thinking” that collapses under closer inspection.
That pattern shows up clearly in faculty attitudes. A January 2026 survey from the American Association of Colleges and Universities (AAC&U) and Elon University found that 95% of faculty worry generative AI will increase student overreliance on these tools over time, and large majorities expect it to diminish critical thinking.
https://www.aacu.org/newsroom/national-survey-95-of-college-faculty-fear-student-overreliance-on-ai-and-diminished-critical-thinking-among-learners-who-use-generative-ai-tools
What’s notable is not skepticism itself—it’s how quickly skepticism hardens into policy, often ahead of evidence about learning outcomes or enforceability.
That’s a tell.
The first response to AI in academic settings is rarely “Does this improve learning?” It’s often something more immediate, like:
What does this do to my role as an evaluator? How do I certify competence if I can’t easily distinguish student cognition from tool output? If this works, what does it imply about the assignments I’ve been using for years?
Those are not pedagogical questions in the narrow sense. They’re questions about intellectual identity, authority, and institutional legitimacy.
To be clearer about where I’m going: universities don’t merely teach. Universities certify, sort, and signal.
And AI interferes with all three at once.
Motivated Reasoning, but Institutional
From a political psychology standpoint, this dynamic is familiar.
I’ve talked about it a lot here at the BBQ, but motivated reasoning doesn’t disappear in expert communities. In fact, high-information actors are often especially good at defending preferred conclusions—particularly when those conclusions protect identity, status, or role legitimacy.
Faculty are no exception. And honestly, who wouldn’t be? We’re all navigating this existential, threatening shift.
What AI unsettles first isn’t teaching technique, but the unspoken assumptions that structure academic life.
For a long time, expertise in universities has felt special and scarce in a very specific way. The ability to produce fluent prose, competent code, or structured analysis was something students had to struggle toward, and something faculty could reliably recognize when it appeared. When a tool can generate approximations of that work cheaply and on demand, it doesn’t eliminate expertise—but it makes it feel less rare and special, and therefore less reliable as a marker of mastery.
That uncertainty quickly spills over into evaluation of work product. Much of grading rests on the inference that a finished product reflects a student’s underlying understanding. When that inference becomes shaky, assessment itself starts to feel unstable. The problem isn’t that instructors don’t want to evaluate—it’s that the familiar signals no longer mean what they used to.
And once those signals weaken, credentialing gets noisier. Degrees have always relied on imperfect proxies for competence, but AI flattens differences on exactly the kinds of assignments universities have leaned on most heavily. When everyone can clear the same visible hurdles, the sorting function of credentials becomes harder to defend.
None of this requires bad faith. In fact, motivated reasoning works best when it doesn’t feel like self-protection. Concerns about cheating, skills erosion, and fairness provide cognitively respectable justifications for a deeper discomfort that’s harder to articulate.
That’s motivated reasoning at work, just as psychology predicts: starting from a threatened identity and building backward to conclusions that feel safe.
Are These Good Ideas, Though?
This is where the AI debate starts to look less like a disagreement about tools and more like a familiar pattern in academic life.
As I discussed in Monday’s piece, in meetings of any kind, new ideas are rarely evaluated on their merits first. They’re screened socially. Before anyone asks whether a proposal is workable, effective, or supported by evidence, a set of implicit questions tends to get answered—often without anyone naming them explicitly.
Who is this likely to unsettle?
What does it signal about the person raising it?
And what risk do I take on if I’m seen agreeing too early?
AI tends to trip those wires almost immediately because these matters are intellectually existential. It arrives already associated with administrators, tech companies, and external pressures faculty distrust. It raises questions about grading, standards, and authority that no one wants to own publicly. And it forces people to take positions before they feel they understand the terrain.
The result is not a clean rejection so much as a kind of collective narrowing. The conversation shifts quickly from “what might this allow us to do differently” to “why this is dangerous,” “why it’s premature,” or “why we need to slow down.” Those are not unreasonable concerns—but they function, in practice, as conversation-stoppers.
What follows is a pattern many faculty recognize even if they don’t like admitting it. Official policies may discourage or prohibit AI use. At the same time, students continue using it, and many instructors adapt—redesigning assignments, allowing limited use, or simply choosing not to police what they know they cannot reliably detect. Enforcement becomes selective, uneven, and largely symbolic.
The idea isn’t debated and rejected on the evidence. It’s contained.
That gap between formal rules and real adaptation shows groups prioritizing threat management over the search for better ways forward.
Novel ideas that destabilize existing roles often don’t get tested in the open. They get labeled unsafe, irresponsible, or not ready for serious discussion.
By the time empirical evidence or concrete proposals arrive, the moment has passed. The room has already decided what kind of idea this is.
When Principles Serve as Boundaries
There’s another layer to the resistance that’s harder to talk about because it cuts closer to professional identity.
Universities don’t just teach, they certify: they decide what counts as legitimate work, what standards matter, and who is qualified to judge them. That authority depends on shared understandings about what “real” academic labor looks like—how it’s produced, evaluated, and distinguished from shortcuts.
AI unsettles those understandings. Not because it makes faculty irrelevant, but because it blurs lines that have long been doing work for the profession. When a tool lowers the cost of producing outputs that look like competent academic work, the instinctive response is to defend the boundary rather than rethink the task.
This shows up in ways many of us recognize: renewed emphasis on traditional assessments, sharper distinctions between “authentic” and “inauthentic” cognition, and ethical language that treats disruption as decay rather than as a signal that existing structures might be under strain.
I want to emphasize that this isn’t the result of bad motives! It’s what people do when the practices that give their roles meaning start to feel, well, “less.” Institutions built to preserve legitimacy respond by tightening norms first and adapting later—if they ever adapt at all.
That doesn’t make faculty villains. It makes them participants in systems that were never designed to change quickly when their core assumptions are challenged.
Why Evidence Has Such a Hard Time Landing
One of the most frustrating features of the AI debate in higher education is how little traction evidence seems to get.
Studies about student learning outcomes, surveys showing how widely AI is already used, even open acknowledgment that enforcement is uneven or impractical—none of it seems to move the conversation very far. Positions harden early and stay put.
That’s not because evidence doesn’t matter. It’s because of when it enters the discussion.
In many departments, the basic posture toward AI is set before anyone starts asking empirical questions. By the time data show up, the issue has already been framed as a matter of integrity, standards, or institutional values. Evidence then gets pulled in selectively—to justify a stance that feels socially and professionally safer—rather than to test whether that stance is actually working.
You can see this in the familiar disconnects faculty will sometimes admit privately but avoid saying out loud. People acknowledge that bans are hard to enforce. They recognize that students are using these tools regardless. They may even concede that some forms of AI-assisted work don’t obviously harm learning. Yet none of that translates cleanly into policy change.
So what’s really guiding policy? Often, it’s about signaling norms more than achieving results. The institution is communicating where it stands, even if everyone involved understands that the signal and the reality don’t quite line up.
AI as an Accelerant, Not the Cause
It’s tempting to treat all of this as an “AI problem”—as if higher education would be stable and confident but for the sudden arrival of generative tools.
That framing lets institutions avoid a harder truth.
Universities entered the AI moment already dealing with a long list of unresolved tensions: declining public confidence, growing skepticism about the value of credentials, and an increasingly uneasy relationship between cost, signaling, and outcomes. These pressures didn’t begin with ChatGPT, and they weren’t waiting for a technological excuse to appear.
Public trust data make that clear. Gallup’s July 2025 poll found confidence in higher education at 42%, up modestly from the recent low point, but still far below the 57% who expressed confidence in 2015—and still sharply polarized.
https://news.gallup.com/poll/692519/public-trust-higher-rises-recent-low.aspx
AI didn’t create those cracks. It widened them.
By forcing uncomfortable questions about what counts as thinking, originality, and mastery, AI exposes how much academic authority rests on proxies that were always imperfect but rarely challenged—timed essays, take-home problem sets, polished originality as a stand-in for understanding. When those proxies start to wobble, the anxiety that follows is understandable.
What’s less helpful is where that anxiety gets directed. Framing AI primarily as a cheating problem or a moral failure turns a structural challenge into a story about bad actors. It protects the institution from asking whether some of its most familiar practices were doing more symbolic work than pedagogical work all along.
AI, Trust, and the Myth of Aggregate Productivity
Building on that, much of the public debate about AI treats productivity as an aggregate outcome. Either AI delivers spectacular gains, or it destroys the system entirely. Charts project singularities or modest trend boosts, as if the only question that matters is how steep the line becomes.
What those views miss is that productivity is inseparable from trust.
AI does not raise or lower productivity uniformly. It redistributes it. Within trusted groups—teams, firms, networks with shared norms and context—AI can dramatically reduce coordination costs. Work that once required layers of translation, documentation, and oversight can be compressed or automated. In those settings, real productivity gains are not hypothetical; they are already visible.
But between groups, the opposite dynamic dominates. AI floods the environment with plausible but unreliable signals: synthetic content, automated outreach, fake credentials, and adversarial noise. The cost of verifying who is real, competent, or acting in good faith rises sharply. In many contexts, those verification costs overwhelm the efficiency gains AI creates elsewhere.
Institutions like universities sit directly at that fault line. Their core function has always been to provide cheap, scalable trust—certifying competence, signaling legitimacy, and reducing uncertainty for outsiders. AI weakens those signals precisely by making them easier to imitate. The result is not just anxiety about learning, but a deeper fear that the institution’s coordinating role is eroding.
Seen this way, resistance to AI is less about nostalgia for old pedagogies than about the collapse of low-cost trust. And that collapse doesn’t show up in GDP charts, even as it reshapes how—and where—productivity can actually occur.
A Necessary Counterpoint: Resistance Isn’t Universal
None of this is to say that faculty resistance is monolithic—or that productive integration isn’t happening.
Some instructors and programs are experimenting with AI in ways that are pedagogically serious: emphasizing process over product, requiring disclosure and reflection, and using AI as an object of critique rather than a forbidden shortcut. In computer science, engineering, and some writing-intensive courses, this has meant redesigning assignments rather than defending legacy ones.
It’s also worth noting that many higher-ed professionals are approaching AI with cautious pragmatism rather than blanket rejection. In an EDUCAUSE report released in January 2026, 81% of respondents said they feel enthusiasm or a mix of caution and enthusiasm about AI, and 92% reported that their institutions have a work-related AI strategy.
https://www.educause.edu/research/2026/the-impact-of-ai-on-work-in-higher-education
What’s notable is that these successes often remain low-profile. They travel through informal networks, not faculty meetings. They flourish where social risk is low and die where identity threat is high.
That’s not a technology problem. It’s a group dynamics problem.
The Hard Question We’re Avoiding
The most unsettling implication of AI isn’t that students might stop thinking.
It’s that some of the things we’ve long treated as evidence of thinking were thinner than we liked to admit.
If a task can be outsourced to a machine, maybe the problem isn’t the machine. Maybe the task was doing more symbolic work than cognitive work.
That doesn’t mean the solution is uncritical embrace. It means the solution is harder than bans. It requires rethinking evaluation, pedagogy, and what human judgment is actually for in an AI-saturated environment.
Institutions under threat rarely do that well. They default to control, moralization, and delay.
Which is exactly what we’re seeing now.
But acknowledging this could open doors to real innovation—let’s freaking talk about it.
Where This Leaves Us
In sum, higher education’s backlash against AI is not primarily about technology. It’s about identity, authority, and the social mechanics of institutional self-preservation.
Until those pressures are acknowledged, debates about AI policy will continue to feel oddly disconnected from evidence—and reform will lag behind reality.
AI isn’t killing education.
It’s forcing universities to confront how much of their authority rests on fragile proxies for thinking—and how uncomfortable it is to say that out loud.
We really need to figure this out, folks.
If you enjoyed (hated) this piece, please restack or share it with someone who you think would appreciate (hate) it. Thanks. :)

