Trust, Critique, and the Problem of Knowing Together
Gouldner, Late Postmodern AI Epistemics, and the (Very) Fragile Possibilities of a Metamodern Horizon
NB: I will warn you up front: this piece is nerdy and not for the faint of heart. It’s long, it’s complicated, and it is not necessarily uplifting (though there are some hopeful bits, I assure you!).
Still, I hope that this essay provides some insight into the current (and maybe even future) world we are in. So, grab a chair, put on your thinking caps, and get comfy.
One of the defining features of contemporary political and intellectual life is not disagreement per se, but the inability to resolve disagreement in ways that produce shared understanding, legitimacy, or trust. We argue constantly, yet nothing seems to move. Evidence piles up, counterevidence proliferates, and competing claims coexist without displacement. The result is not pluralism in the productive sense, but fragmentation, defined as parallel realities operating under incompatible standards of judgment.
When people no longer agree on how truth claims should be evaluated, institutions tasked with resolving disputes (e.g., universities, courts, bureaucracies, media organizations) rarely respond by becoming more neutral arbiters. More often, they adapt by substituting moral judgment, procedural box-checking, or boundary policing for genuine attempts at truth-finding.
In a previous essay, I described this condition as incommensurability—not the mere presence of multiple perspectives, but the absence of shared norms for adjudicating among them. When actors no longer agree on what counts as evidence, how uncertainty should be weighed, or how disputes are meant to resolve, trust erodes. Not because people are irrational or malicious, but because the epistemic infrastructure required for collective sensemaking has broken down.
This breakdown is often narrated ideologically. On parts of the right, it appears as a story of “cultural Marxism” or moral capture: universities and cultural institutions abandoning truth in favor of narrative power. On parts of the left, it appears as a story of reactionary backlash against necessary critiques of hierarchy and injustice.
Both accounts contain fragments of truth. Neither is sufficient. Both mistake symptoms for causes.
How We Got Here: Gouldner and Critique Turned Inward
To understand the structural dynamics beneath this breakdown, we need a framework that treats epistemic authority, institutional incentives, and intellectual class position as interdependent. The late work of Alvin Gouldner—particularly The Two Marxisms and The Future of Intellectuals and the Rise of the New Class—offers precisely such a framework.
Gouldner’s central move was deceptively simple: if Marxism is serious about ideology critique, it must submit itself to that same scrutiny. Theories do not just float above history; they are produced by situated actors, embedded in institutions, and sustained by incentives.
Marxism, no less than liberalism or conservatism, is a social product.
From this starting point, Gouldner identified what he called a nuclear contradiction within Marxism: the tension between scientific Marxism and critical Marxism.
Scientific Marxism aspired to law-like explanations of historical development, economic crisis, and class struggle.
Critical Marxism emphasized moral critique, alienation, domination, and emancipation.
As long as Marxism appeared empirically successful, when revolutions seemed imminent and capitalism unstable, the tension between its scientific and critical strands could be managed. But as anomalies accumulated (failed predictions, revolutions occurring outside the conditions Marxist theory anticipated, capitalism’s demonstrated adaptability), the authority of the scientific strand eroded.
What persisted, and in many contexts flourished, was the critical strand.
This shift is often described polemically as Marxism “retreating into culture.” Gouldner’s account (and prediction?) is actually more precise and more unsettling. The shift was not merely ideological; it was institutional.
As Marxism shifted from political movements and workplaces into universities and bureaucracies, its critical tools increasingly aligned with the interests and self-understandings of intellectuals and the institutions they worked within. In those settings, authority flowed less from empirical success in explaining the world and more from interpretation, moral judgment, and symbolic and narrative power.
Critique survived not because it was empirically vindicated, but because its narrative power made it useful within postmodern institutions that reward moral intensity rather than epistemic rigor.
The New Class and the Moralization of Authority
Gouldner’s “new class” thesis is frequently misread as an attack on intellectuals. It is better understood, to my eye, as a warning about self-misrecognition.
Intellectuals control what Gouldner called the conceptual means of production: credentials, classifications, interpretive frameworks, and standards of legitimacy. They are neither capitalists nor workers in the classical sense, but they wield real power over meaning, status, and institutional reproduction.
This position generates both genuine moral commitments and structural temptations.
Intellectuals are often genuinely committed to rationality, critique, and universalism. But they also benefit from environments where authority flows from expertise, interpretation, and moral judgment. When empirical confidence weakens, like when predictions fail or become politically risky (sound familiar yet?), moral language becomes a particularly attractive substitute.
Why? Because it is harder (if not impossible) to falsify, easier to enforce, and exceptionally well suited to bureaucratic governance.
This is the kernel of truth in contemporary critiques that frame universities and cultural institutions as moralizing or priestly. But the mistake is to treat this as a conspiracy or a uniquely Marxist pathology. What Gouldner shows is a general institutional dynamic:
When epistemic standards fragment and predictive authority declines, institutions stop correcting error and start policing meaning.
In plainer terms: what looks like ideological capture is often epistemic failure that has been bureaucratically routinized.
Postmodernism and the Weaponization of Subjectivity
This failure cannot be understood without confronting the epistemic consequences of late postmodern thought, especially when its core insights are taken to their logical extremes, as they increasingly are.
To be clear, however, the problem is not postmodern critique as such, but its institutionalization without compensating mechanisms of adjudication.
This is not to claim that postmodern epistemics alone produced this outcome; economic precarity and rising inequality, attention-driven media systems, technological amplification, and a whole lot of other things all interact with these ideas. But postmodernism supplied the epistemic grammar through which these forces became intelligible—and justifiable.
Three claims sit at the heart of the postmodern perspective:
Contextualism: there are no universal truths, only truths embedded in specific historical and cultural contexts.
Constructivism: knowledge is not discovered but constructed; facts do not speak for themselves.
Aperspectivism: there are no ahistorical, pregiven, or privileged standpoints from which to adjudicate claims.
Each of these captures a genuine insight, at least in a way.
Together, unchecked and weaponized to gain power, they are epistemically devastating.
When contextualism erodes universality, constructivism dissolves constraint, and aperspectivism rejects adjudication, the concept of error itself becomes very difficult to sustain. This is not because knowledge becomes merely subjective, but because disagreement loses any shared mechanism for resolution.
Knowledge claims no longer compete under shared standards; they coexist as expressions of position, identity, or experience.
Lived experience, once a corrective to abstraction, can become an epistemic trump card that is immune to challenge, translation, or comparison.
In this environment, disagreement is no longer something to be resolved. It is something to be managed. Claims are not tested; they are validated or condemned based on moral standing. Institutions, in turn, often cease correcting errors and instead default to boundary policing.
Trust collapses not because people disagree, but because disagreement no longer has a pathway toward resolution.
The result is that our episteme (in this sense—the shared background assumptions that govern what counts as knowledge, evidence, and error) is very, very broken.
A Metamodern Horizon—Not a Cure
If modernism treated objectivity as a given, and postmodernism exposed its limits, a metamodern perspective begins from a more sobering premise:
Objectivity is not a starting point. It is an achievement.
It is something societies labor to build, maintain, and often lose over time, perhaps through something like social entropy.
This, sadly, is where any optimism must be tempered. The damage we have experienced is very real, as entire generations have been trained to experience epistemic disagreement as moral threat and critique as violence. Rebuilding shared standards of knowing will not be quick. It will not be painless. It will almost certainly take a generation, if it happens at all, and it may occur with, or in spite of, artificial intelligence.
Artificial intelligence matters here not because it introduces new epistemic problems (though there are certainly some of those too), but because it accelerates and hardens the ones we already have.
Artificial Intelligence and the Routinization of Epistemic Failure
Artificial intelligence enters this story not as a cure, but as a societal stress test.
If you think about it, AI systems do not resolve epistemic breakdown. It’s more like they inherit it.
They are trained on fractured corpora, optimized for engagement and coherence rather than truth, and embedded in incentive structures that already privilege speed, scale, and confidence over deliberation, uncertainty, and correction. In that sense, AI does not introduce a new epistemology so much as mechanize the existing one.
This cuts both ways.
On the one hand, AI can appear to restore a kind of objectivity. It can summarize vast literatures, surface patterns no human could track, and impose formal consistency where discourse has become incoherent. In institutional settings desperate for adjudication (in grading, hiring, risk assessment, and content moderation just to name a few), AI offers something that looks like neutrality: standardized outputs, replicable procedures, and the removal of overt moral conflict from human judgment.
But, of course, this appearance is misleading.
AI systems do not adjudicate truth; they optimize across multiple representations. They cannot distinguish between disagreement that should be resolved and disagreement that should remain open, because that distinction depends on norms, not patterns. They do not know when a claim is wrong, only when it is atypical relative to a training distribution. So, in an epistemic environment already stripped of shared standards, AI can become a high-speed normalizer, reinforcing whatever patterns dominate the data rather than reconstructing the norms that would allow those patterns to be evaluated.
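To make that last point concrete, here is a deliberately crude toy sketch in Python (entirely made-up data, nothing resembling a real AI system): a scorer that measures only how typical a claim is within its training corpus. By construction it has no access to whether a claim is true, only to how often it appears.

```python
# Toy illustration (not a real model): scoring claims by typicality alone.
from collections import Counter

# Hypothetical training corpus: one claim dominates simply because it is
# repeated more often, not because it is more accurate.
training_corpus = (
    ["the study was debunked"] * 90 +
    ["the study was replicated"] * 10
)

counts = Counter(training_corpus)
total = sum(counts.values())

def typicality(claim: str) -> float:
    """How common the claim is in the training data: a measure of
    normality, not of correctness."""
    return counts[claim] / total

for claim in ("the study was debunked", "the study was replicated"):
    print(f"{claim!r}: typicality = {typicality(claim):.2f}")

# Prints 0.90 for the dominant claim and 0.10 for the rare one.
# A system optimizing against this distribution will confidently echo
# the majority pattern; "atypical" and "wrong" are indistinguishable to it.
```

The point of the sketch is only the asymmetry: the scorer can tell you what is common, never what is correct, which is exactly the gap the surrounding argument describes.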
This is where the danger lies.
In a postmodernized epistemic field, and especially one already hostile to universality, constraint, and privileged adjudication, AI risks accelerating moralization rather than reversing it. When truth claims cannot be settled, procedural legitimacy becomes everything. And when legitimacy itself is contested, AI becomes an attractive proxy authority, not because it is wiser or deserves that role, but because it is impersonal.
What emerges is a paradox. AI appears most objective precisely where the social achievement of objectivity has already collapsed. It functions as an administrative substitute for epistemic consensus, not a reconstruction of it.
Worse still, AI interacts dangerously with the moralization dynamic Gouldner warned about. Moral claims, unlike empirical ones, are easier for AI systems to reproduce confidently, harder to falsify, and less constrained by error correction. In environments where disagreement is already experienced as harm, AI-generated certainty can harden boundaries rather than open inquiry. The result is not shared understanding, but faster, cleaner, more scalable epistemic closure.
None of this, of course, means AI is irrelevant to a metamodern project. Not in the least, actually.
But it does mean that AI cannot lead it.
More optimistic readings of postmodern critique and AI-assisted epistemics certainly exist, but they tend to assume background norms of adjudication that are precisely what is now missing.
If objectivity is a societal achievement, AI can at best assist in its maintenance, not substitute for the social labor required to build it. Without reflexive humility, AI becomes dogma at scale. Without procedural pluralism, it becomes technocratic authority without legitimacy. Without epistemic latency, it becomes a machine for freezing premature conclusions into institutional fact.
The risk, then, is not that AI will think for us, but that it will relieve us of the burden of rebuilding the epistemic conditions that thinking together requires.
If a metamodern order emerges in the years to come, it will not be because AI solved disagreement. It will be because humans rebuilt the norms, institutions, and expectations that allow disagreement to terminate in understanding—and then used AI cautiously, provisionally, and subordinately within that rebuilt infrastructure.
Anything else is not intelligence. It is the automation of epistemic failure.
Worse still, the more institutions rely on AI to manage epistemic conflict, the less incentive they have to repair the epistemic foundations that conflict exposes.
So, a healthy (metamodern) response, if it is to mean anything at all, requires at least three commitments.
Reflexive humility. Following Gouldner, intellectuals must treat their own frameworks as historically situated and interest-laden without abandoning the aspiration to truth. Auto-critique is not nihilism, but a precondition for credibility.
(This commitment is, as of this writing, structurally disincentivized in contemporary academia, where professional advancement is more often tied to alignment, productivity, and moral positioning than to sustained reflexive critique.)
Procedural pluralism with real adjudication. Multiple methods are not the problem. The refusal to specify how they relate, compete, and constrain one another as part of the work of knowledge production is. Trust requires explicit norms for evidence, uncertainty, and disagreement—norms that are enforced, justified, revisable, and adaptable.
(While such norms have begun to emerge within small, self-selecting intellectual communities, there is little evidence of any convergence whatsoever at the institutional or societal level.)
Epistemic latency. In an attention-saturated environment, speed undermines integration. Freedom is not merely the right to speak, but the capacity to delay judgment, absorb complexity, and revise beliefs without humiliation. Institutions that reward instantaneous moral signaling will continue to fragment. Institutions that protect deliberative time may (slowly) recover legitimacy.
(Unlike the previous commitments, epistemic latency may be institutionally achievable without prior ideological convergence, which makes it a plausible—if limited—site for near-term repair.)
Conclusion
It comes down to this: Marxism did not survive because it was uniquely deceptive, nor did liberalism falter because it was uniquely naïve. Both are caught in a broader epistemic transition in which shared standards of knowing have eroded faster than replacements can be built.
Gouldner saw this earlier than most. His warning still holds: critique without reconstruction becomes sterile, and reconstruction without reflexivity becomes authoritarian.
The monumental task ahead of us, if we are to escape this epistemic deadlock, is not to purge ideologies or to retreat into common sense, however tempting both may be.
Instead, it is to rebuild the conditions under which disagreement can once again generate understanding.
That does not mean a naïve return to modern certainty, nor a further surrender to postmodern cynicism and irony.
It is, instead, a lengthy metamodern project—slow, fragile, precarious, and absolutely necessary.
If there is hope here, it lies in the work we do toward what comes after all this.

