Great insights, and thanks for the shoutout! Very interesting to map attitudes toward AI onto the growing realignment of coalitions. Good reminder of how expansive AI is as a *social* technology, and how it can be framed / understood along many more dimensions (e.g. labor automation, companionship device, centralized form of elite media) than the other mass technologies you listed - dimensions that obviously map onto political cleavages, arguably far more strongly than when those other technologies were introduced.
Curious if you have any takes on what a dot-com-bubble-esque AI crash would do to the political economy you've described here. Probably above all our paygrades, but would be interesting to write about, taking the lessons/comparisons to the Internet further.
Thanks, Soubhik—really appreciate the note (and your piece, which all of you should read).
On the “AI crash” question: my instinct is that a dot-com-style wipeout would change the tempo more than the direction. It’d probably puncture the everything-boom narrative, thin the field of firms, and slow consumer-facing diffusion for a bit.
But it might also concentrate power (survivors + incumbents), and it wouldn't remove the state/security/enterprise uses, where budgets and incentives look different--that's roughly where my thinking landed after conversing with Art (his piece is also in there, and very worth reading for the tech-side view of limitations).
So, politically, I could see two things happening at once: (1) a deregulatory “see, the market corrected it” story from one side, and (2) a concentrated-power/worker-disruption story from the other, and that's especially true if the crash comes after job ladder effects are already visible.
There are a lot of potential crash mechanisms--capital costs/compute scarcity, demand saturation, regulatory shock, or a capability plateau--and I'll have to think about whether each plays out differently on the political front...
When people in the West talk about AI in terms of risks, societal disruption, capital concentration, and national security, and propose large-scale solutions that big liberal democracies must implement quickly lest AI "devour us all", they tend to look too top-down.
The early internet and crypto both started with enthusiasts as their main adopters, until the technology gradually bled into the mainstream. The thing is, though, that the standards were largely set by the enthusiast pioneers, who adapted to the new reality through organic use of the technology, and the law merely adapted to those standards. For example, the early internet was incredibly libertarian, pro-privacy, and pro-net-neutrality because it was a decentralized pile of forums, chat rooms, online gaming services, personal websites, open-source code, and cat pictures - an attitude maintained today in the laissez-faire stance of the tech oligarchs and many of their users. Likewise, blockchain was mainly a speculative technology early on; it naturally evolved into an investment engine, and then into "stablecoins" through that lens.
The problem with AI in the West is that it is mainly a technology pushed top-down by a few tech companies, closed-source and for-profit right out of the gate. Having large tech oligarchs with deep pockets helped accelerate adoption, but it also created a perception that AI is a system-level technology like electricity or water pipelines, where change is only possible with the resources of titans: big tech corporations like NVIDIA and OpenAI vs. big governing bodies like the EU or US vs. huge industry sectors like art or accounting that face displacement.
That perception was not true for the early internet, not true for crypto, and it is not true for AI either, because unlike "real world" projects like a large energy pipeline or a wind turbine, AI mainly requires data - cheap, practically free in large quantities in the internet age, and accessible to anyone.
Chinese models like Moonshot, Qwen3, and RVC offer a different picture - unlike in the US, Chinese companies are not flush with capital, nor do they have a ready captive audience from social media, so they must use the same strategies that companies and organizations from the '90s and '00s used to gradually ease people onto the technology: offer high-quality products, cheap and open-source, that enthusiasts eventually adopt as their own; as problems are encountered naturally, small companies tackle them until standards form around the reality of the technological landscape.
For this next bit, I want you to think less about "how should we regulate AI" and more about "how would you enforce an AI regulation, especially if there is no natural consensus within society" (and there absolutely is not, far from it):
For example, in the more centralized, closed-source Western ecosystem built around a few large models, if individuals are concerned about nonconsensual deepfakes, they can petition the few big AI players (Gemini, OpenAI, Anthropic) to add "guardrails" preventing users from creating deepfakes.
An open-source ecosystem, however, is decentralized: there could be thousands, even millions of forks and variants of models across many, often private, websites, shared by private users, often with other private users. Any number of these models could allow deepfakes; it would be impossible to have all of them add refusal guardrails, and those guardrails can be circumvented with techniques such as abliteration.
You could then attempt to pass a law banning deepfakes, but this would at most drive them underground, much like software piracy and child pornography today. Whenever a new AI hosting site is created, you'll probably see a wave of deepfakes, followed by bans that catch some but not all of them, as more are uploaded later. You could even sic an AI on catching deepfakes (much like Content ID catches potential copyright issues on YouTube), but there would be strategies around it, like YouTube's nightcore scene and TikTok's algospeak, and possibly technological circumvention in the vein of adblockers or paywall bypassers.
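To make the "sic an AI on catching it" idea concrete, here is a toy, pure-Python sketch of the fingerprint-and-match approach behind systems like Content ID. Everything here is illustrative (the names `average_hash` and `hamming` and the thresholds are my own, not any real platform's implementation): reduce media to a compact perceptual hash, then flag uploads whose hash sits within a small Hamming distance of a known fingerprint.

```python
# Toy "content ID" sketch: perceptual (average) hashing in pure Python.
# Real systems are far more sophisticated, but the core idea is the same:
# fingerprint media compactly, then flag uploads whose fingerprint is
# close (small Hamming distance) to a known one.

def average_hash(pixels, hash_size=8):
    """Fingerprint a grayscale image (list of rows of 0-255 ints) as a 64-bit int."""
    h, w = len(pixels), len(pixels[0])
    # Downscale by block-averaging into a hash_size x hash_size grid.
    grid = []
    for gy in range(hash_size):
        for gx in range(hash_size):
            ys = range(gy * h // hash_size, (gy + 1) * h // hash_size)
            xs = range(gx * w // hash_size, (gx + 1) * w // hash_size)
            block = [pixels[y][x] for y in ys for x in xs]
            grid.append(sum(block) / len(block))
    mean = sum(grid) / len(grid)
    # Each cell contributes one bit: brighter than the overall mean or not.
    bits = 0
    for value in grid:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A 64x64 image: bright left half, dark right half.
original = [[200] * 32 + [30] * 32 for _ in range(64)]
# A lightly perturbed re-upload: small global brightness shift.
reupload = [[195] * 32 + [35] * 32 for _ in range(64)]
# An unrelated image: bright top half, dark bottom half.
unrelated = [[200] * 64 for _ in range(32)] + [[30] * 64 for _ in range(32)]

h0, h1, h2 = average_hash(original), average_hash(reupload), average_hash(unrelated)
print(hamming(h0, h1))  # small: the perturbation survives fingerprinting
print(hamming(h0, h2))  # large: genuinely different content
```

It also hints at why the cat-and-mouse never ends: small perturbations survive fingerprinting, but perturb the content enough and the hash drifts out of matching range; widen the match threshold to compensate and false positives climb.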
Because the technology is so cheap and accessible, we can already run AI models locally on our computers, from the privacy of our own hard drives. From there, circumventing any regulation is child's play.
It would be a never-ending arms race against entropy that you are doomed to lose in the long term, as states have finite lifespans and always fall to entropy in the end.
Politicians, activists, and idealists always assume the rule of law is absolute: they ask what *should* happen, then pass laws attempting to model that reality. But the rule of law is not, nor was it ever, absolute. It hinges on enforcement, and here the technological landscape could eventually make enforcement impossible.
This is because *should* - values, morals, etc. - differs from person to person. For this reason, the people asking what an AI *should* do will never be in alignment with the users actually using it, much like the copyright holder will never be in alignment with the pirates. That is the real alignment problem of AI.
Liberal democracy is indeed slow at solving large-scale problems with big, monolithic, top-down solutions. This is because liberal democracy was never designed to solve large-scale problems with big, monolithic, top-down solutions. In fact, it is an ideology naturally built on working with distrust, in a society distrustful of any state exerting its power top-down.
So the advantage liberal democracies have over top-down states in the AI political discussion is that, because power at the top is limited, they are more flexible at the lower levels. If your federal government cannot find a consensus, your provincial/state governments can attempt it, each arriving at a different solution based on their needs. Past that, your local government. Past that, you are best off relying on a word of honor, with an emergency task force to catch only the serious issues.
IMO AGI is not the death of humanity; it is the death of modernity - the idea that there is any sort of universal value, like universal human rights, universal equality, or world domination, that can bring about a large enough consensus to be legislated and enforced at a universal level. Power will shift away from large groups such as big companies, nation-states, and international organizations, and toward smaller provinces/states, cities, and communities, and eventually back to the individual.
I think you’re exactly right that a lot of Western AI discourse implicitly assumes a world where regulation is both coherent and enforceable, when in reality rule of law has always been conditional on enforcement capacity, social buy-in, and technical feasibility. That’s not a new failure mode; AI just exposes it brutally fast. Deepfakes are a good example: even perfect laws don’t eliminate the behavior, they just shift it into gray or underground spaces, with endless cat-and-mouse dynamics. That’s not a bug of liberal democracy so much as a feature of any system trying to govern low-cost, high-diffusion technologies.
Where I think we may diverge is on what follows from that observation.
I completely agree that open-source AI makes centralized behavioral control impossible in the long run. Once models run locally and forks proliferate, “guardrails” become at best friction, not constraint. In that sense, yes—trying to regulate AI as if it were a pipeline or a reactor is a category error. Enforcement will always lag capability.
But I’m less convinced that this necessarily implies a clean shift away from institutional politics and toward bottom-up equilibrium, at least not smoothly or benignly.
A few points of tension I’m still chewing on:
Bottom-up standard setting doesn’t eliminate power—it relocates it. The early internet felt libertarian and decentralized, but norms still consolidated around choke points: ISPs, platforms, browsers, later app stores and cloud infrastructure. Crypto, similarly, never escaped exchanges, custodians, or regulatory gateways. Even if AI models are cheap and local, compute, energy, and distribution are not. That creates new leverage points, even if they’re messier and more fragmented than traditional regulation.
Liberal democracies are flexible locally—but legitimacy still aggregates nationally. I agree with you that federal paralysis often pushes experimentation downward (states, cities, institutions), and that’s a real strength. But when outcomes become visibly unequal—different labor regimes, speech norms, surveillance tolerances—that itself becomes a national political problem. The conflict doesn’t disappear; it escalates because pluralism produces divergent lived realities.
The enforcement problem doesn’t stay technical—it becomes distributive. You’re right that people will always route around controls. But who bears the cost of that failure matters politically. Elites can bypass rules cheaply. Institutions can insulate themselves. Entry-level workers, creators, and communities usually cannot. That asymmetry is where politics re-enters, even if the technology itself resists control.
I’m skeptical that power flows cleanly “back to the individual.” Historically, periods where universal norms break down don’t always produce individual empowerment; they often produce fragmentation plus local strongmen, private governance, or informal coercion. I’m not predicting dystopia—but I’m wary of assuming entropy automatically favors autonomy rather than new forms of hierarchy.
So I think you’re right that asking “what should AI do?” is the wrong first question. The better questions are closer to what you’re asking:
Who can realistically enforce what, and at what cost?
Where do norms emerge organically, and where do they fracture?
Who pays when enforcement fails?
Where I still land, though, is that those enforcement limits don’t depoliticize AI—they repoliticize it, just at different layers: infrastructure, labor markets, local governance, liability, and legitimacy rather than pure model behavior.
If modernity is ending here (and I’m not convinced either way), I suspect the transition will be far more conflictual and institutional than either the “AGI kills us all” crowd or the “bottom-up equilibrium solves it” crowd expects.
Good stuff though, made me think. :)
When institutional power flows downward, you do indeed get a relocation of power to local elites first. However, if national/international norms, with their massive bureaucracies, cannot contain entropy, it is unlikely that local elites can fully contain it either.
Unequal laws will create unequal outcomes at the local level => individuals will easily be able to circumvent laws they do not like (creating a free market of law).
If you do not like a national law and you do not want to contest it or hide from it, you must flee the country - a daunting endeavor for many in the 21st century.
If you do not like a local law, you can simply move to the next jurisdiction, as long as there is national freedom of movement (we are seeing this right now in the US with a mass exodus from expensive blue states to cheaper, lower-tax red states).
With technology this becomes much easier: VPNs allow you to masquerade as an actor from any country, and fake ID generators let you readily circumvent ID requirements for social media (or you can simply migrate to new social media not hosted in jurisdictions with ID requirements).
The more the technological environment empowers the individual (easily accessible, hard to monopolize or control with top-down authority), the more likely it is that power flowing downward will reach the individual rather than the state. For example, the invention of gunpowder allowed any peasant to wield the force needed to destroy a professional elite army that required years of training. It brought about the death of the armed nobility, but that power didn't just go to new elites (i.e. nation-states); it also diffused to the individual, enabling the revolutions and guerrilla warfare we see in the early modern period.
So entropy can favor new, more local hierarchy, but sufficient entropy makes any and all hierarchy impossible, as enforcement of norms becomes impossible and enforcement failures become impossible to pay for. AI and digital technologies are already very accessible to the average person - you can train and run models on your local computer, for instance. Along with 3D printing, they represent some of the largest diffusions of power from hierarchies to individuals and small communities in the 21st century, irrespective of what legalists desire.
Exactly the kind of micro-level change that’s easy to miss in aggregate debates. The “reassignment before redesign” pattern feels like it’s going to be (very) common, and it raises tricky questions about training, mentorship, and how people even enter professions going forward.
If you’re willing to share, I’d be curious what kinds of tasks disappeared versus what juniors got pushed into, because that gap is where a lot of the politics will likely show up.