2 Comments
Jan Zilinsky:

Enjoyed your piece, Kyle. Can I briefly respond to this?

> “The coming decade of AI development is not going to produce one LLM that serves everyone. It’s going to produce a landscape of models with different training priorities, different guardrails, and different embedded assumptions about whose expert consensus counts as consensus.”

I listened to a podcast a few weeks ago where someone asserted exactly this (but they framed it as their prediction for the end of 2026). The reason I was skeptical is that I never want to underestimate consumer inertia.

Even thinking of software other than AI, don’t people mostly use whatever browser is pre-installed on their device, the email client the employer provided, etc.?

(Hard to predict how fashions will change, but people already have many LLM choices today, and nobody seems to be rushing to test out new Qwen models - which are amazing - and very few people experiment to check whether a different chatbot than the one they already use is more politically aligned.)

Kyle Saunders:

You're right that consumer inertia is a powerful force, and the browser/email client analogy is apt — defaults stick, switching costs are real, and most people don't shop around for ideologically congenial tools.

But I'd push back on one assumption embedded in the inertia argument: it treats the current defaults as stable. The browser default stuck partly because Microsoft controlled the OS. The email client stuck because switching meant migrating data. What controls the LLM default? Right now it's mostly ChatGPT's first-mover advantage and whatever gets bundled into search. Those are more contestable than OS-level defaults — and the bundling battles are already happening. Grok is the default AI for a platform with 600 million accounts. Meta AI is being pushed into WhatsApp, Instagram, and Facebook. Apple Intelligence is on every iPhone (and no one uses it, lol). These aren't niche alternatives that require users to go looking — they're being installed at the infrastructure level by companies with strong incentives to differentiate.

So I'm a little less worried about users actively shopping for politically aligned LLMs (you're right that almost nobody does that, at least not yet) and more worried about the defaults themselves diverging, with users simply inheriting whichever epistemic environment their platform chose for them. That's actually a more concerning version of the fragmentation story, not a less concerning one.