Mapping the Structural Divide in US Higher Education: Resilience, Market Position, and AI
I used federal data to score every four-year college on resilience and market position, and created a new AI exposure measure. Search for yours!
Back in 2020, Scott Galloway published his “USS University” 2×2 — a strategic positioning map that tried to sort American higher education into quadrants based on brand strength and pandemic resilience.
It was provocative, it went viral, and it stuck with me. Not because his specific measures held up particularly well (COVID turned out to be a shock, yes, but not the kind that actually closed many schools), but because the exercise was clarifying in a way that most writing about higher education isn’t. Mapping institutions on two dimensions forced a conversation about structural positioning that rankings and tier labels don’t capture.
I kept thinking about it as the pressures shifted. The demographic cliff that Grawe has been documenting for years is now arriving. Enrollment is contracting. State funding models are strained. And AI is starting to reshape the labor market in ways we’re only beginning to measure. The question Galloway was asking — which institutions are positioned for what’s coming — felt more urgent, and the data available to answer it had gotten substantially better.
So I did a thing: I built my own version. Eight indicators drawn from public data — IPEDS, College Scorecard, O*NET, WICHE projections, the Anthropic Economic Index — positioning every four-year institution in the country along two dimensions: how resilient is the institution itself, and how well-positioned are its graduates in the labor market?
If you want to go have a look, the interactive version is live: kylesaunders.com/university-map.
I want to walk through what it shows, because a few of the resulting patterns surprised me.
The basic setup
Eight indicators, split across two axes. Institutional Resilience captures endowment per student, revenue diversification, enrollment trends, and admissions selectivity. Post-College Market Position captures completion rates, earnings-to-debt ratios, AI-related task exposure in graduates’ career fields, and regional demographic trajectories.
Every indicator comes from publicly available data. Every institution gets a percentile rank on each component, and the two composite scores place it in one of four quadrants based on the sample medians. High Capacity (above both medians), High Stress (below both), and two mixed categories — Market Misaligned and Structurally Exposed — that describe different flavors of vulnerability.
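If you want the mechanics in miniature: percentile-rank the components, average them into two composites, and split each composite at its sample median. Here is a rough Python sketch. The column names are placeholders, the mapping of the two mixed labels onto the off-diagonal cells is my guess, and the real model presumably sign-flips some components so that higher always means better; the replication code has the actual definitions.

```python
import pandas as pd

# Placeholder component names; the real indicator definitions are in the replication code.
RESILIENCE = ["endowment_per_student", "revenue_diversification",
              "enrollment_trend", "selectivity"]
MARKET = ["completion_rate", "earnings_to_debt_ratio",
          "ai_task_exposure", "regional_demographics"]

def score_and_classify(df: pd.DataFrame) -> pd.DataFrame:
    """Percentile-rank every component, average into two composite scores,
    and assign quadrants by splitting each composite at its sample median."""
    out = df.copy()
    ranks = out[RESILIENCE + MARKET].rank(pct=True)   # percentile ranks, 0 to 1
    out["resilience"] = ranks[RESILIENCE].mean(axis=1)
    out["market_position"] = ranks[MARKET].mean(axis=1)

    res_hi = out["resilience"] >= out["resilience"].median()
    mkt_hi = out["market_position"] >= out["market_position"].median()

    out["quadrant"] = "High Stress"                              # below both medians
    out.loc[res_hi & mkt_hi, "quadrant"] = "High Capacity"       # above both medians
    out.loc[res_hi & ~mkt_hi, "quadrant"] = "Market Misaligned"      # my guess at this cell
    out.loc[~res_hi & mkt_hi, "quadrant"] = "Structurally Exposed"   # and at this one
    return out
```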
The full working paper, dataset, and replication code are all public. You can check my math. (I’d actually prefer it if you did; that’s how this gets better, reaches more people, and sparks useful discussion.)
What the map shows
The headline finding won’t shock anyone who’s been paying attention: American higher education is bifurcating. But the data puts numbers on it in a way that’s harder to wave away.
85% of R1 research universities fall in High Capacity. They have the endowments, the enrollment pipelines, the selectivity, the graduate earnings. Only 6 of 146 R1s fall below the median on both dimensions. The structural position of elite research universities isn’t just good — it’s overdetermined. You’d have to change multiple things simultaneously to move them out of that quadrant.
At the other end: 48% of Baccalaureate Diverse institutions — the regional colleges, the small schools serving local populations — fall in High Stress. Below the median on resilience and market position. Half of them are also shrinking in enrollment.
The middle is where it gets interesting. Master’s institutions of all sizes scatter across all four quadrants. These are the schools where institutional decisions actually matter — where strategic choices about program mix, pricing, enrollment management, and investment can plausibly shift positioning. Carnegie classification alone explains less than 30% of the variance in combined scores. For these schools, the tier label tells you surprisingly little.
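That variance figure is presumably an eta-squared style quantity: the share of variance in the combined score explained by Carnegie category membership, which is the same thing as the R² from regressing the score on category dummies. A minimal sketch of how you might compute it, with placeholder column names:

```python
import pandas as pd

def variance_explained_by_group(df: pd.DataFrame,
                                score_col: str = "combined_score",
                                group_col: str = "carnegie_class") -> float:
    """Eta-squared: share of variance in the score explained by group membership
    (identical to the R-squared from regressing the score on group dummies)."""
    grand_mean = df[score_col].mean()
    ss_total = ((df[score_col] - grand_mean) ** 2).sum()
    group_means = df.groupby(group_col)[score_col].transform("mean")
    ss_between = ((group_means - grand_mean) ** 2).sum()
    return ss_between / ss_total
```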
The AI finding nobody expected
Here’s where the project takes a turn I didn’t fully anticipate going in.
AI-related task exposure already feeds into the market position axis (it’s one of four components), but as an exploratory extension I also validated it against real-world adoption data. The logic: use O*NET occupational task data to score how much of each occupation’s entry-level work involves tasks that current AI systems can perform, then map that back through degree fields to institutions using program mix weights. It’s a measure of theoretical task-level exposure, not a prediction of job loss — an important distinction.
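In code, that rollup is two weighted averages: occupation-level exposure averaged into degree fields through a crosswalk, then field-level exposure averaged into institutions through each school's program mix. A sketch of the shape of it, with placeholder column names; the real pipeline presumably needs more care around missing crosswalk entries and normalization.

```python
import pandas as pd

def institution_ai_exposure(occ_exposure: pd.Series,
                            field_to_occ: pd.DataFrame,
                            program_mix: pd.DataFrame) -> pd.Series:
    """Roll occupation-level task exposure up to the institution level.

    occ_exposure: indexed by SOC code; share of entry-level tasks current AI
                  systems can perform (scored from O*NET task data).
    field_to_occ: columns [cip_code, soc_code, weight]; how graduates in each
                  degree field distribute across occupations.
    program_mix:  columns [unitid, cip_code, share]; each institution's
                  completions by field, with shares summing to 1.
    """
    # Field-level exposure: occupation exposure averaged with crosswalk weights.
    f2o = field_to_occ.copy()
    f2o["exposure"] = f2o["soc_code"].map(occ_exposure)
    field_exposure = (
        (f2o["exposure"] * f2o["weight"]).groupby(f2o["cip_code"]).sum()
        / f2o.groupby("cip_code")["weight"].sum()
    )

    # Institution-level exposure: field exposure averaged with program-mix shares.
    pm = program_mix.copy()
    pm["exposure"] = pm["cip_code"].map(field_exposure)
    return (pm["exposure"] * pm["share"]).groupby(pm["unitid"]).sum()
```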
Then Anthropic published the Economic Index, which tracks what AI is actually being used for based on real-world Claude conversations mapped to occupational tasks. So I could compare what AI could theoretically do with what AI is actually doing.
The correlation between the two is basically zero. ρ ≈ −0.09.
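Mechanically, that comparison is just a rank correlation over matched degree fields. A minimal sketch using Spearman's rho, with my own series names (the paper's ρ may be computed a bit differently):

```python
import pandas as pd
from scipy.stats import spearmanr

def rank_agreement(theoretical: pd.Series, observed: pd.Series) -> float:
    """Spearman rank correlation between two field-level series,
    e.g. theoretical task exposure vs. observed AI adoption,
    aligned on their shared index of degree fields."""
    both = pd.concat({"theory": theoretical, "observed": observed},
                     axis=1, join="inner").dropna()
    rho, _ = spearmanr(both["theory"], both["observed"])
    return rho
```

Pointed at field-level PSEO earnings instead of adoption shares, the same kind of comparison gives the earnings relationship discussed two sections down.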
The fields where AI is most heavily adopted right now — computer science, education, arts — are not the fields where the task structure is most susceptible to entry-level automation. Those would be business administration, engineering technology, legal support. The theoretical exposure map and the actual adoption map describe almost entirely different landscapes.
I think the most plausible reading is that we’re looking at two distinct timelines. Current AI adoption is driven by user behavior, tool availability, and workflow compatibility. Theoretical exposure is driven by task characteristics. They’ll presumably converge eventually, but right now they’re measuring different things. And that matters for institutions trying to figure out which of their programs are “at risk” — the answer depends heavily on whether you mean at risk now or at risk in principle.
The uncomfortable earnings pattern
There’s a wrinkle that makes the AI finding more pointed. Using Census PSEO data, I checked whether the fields the model flags as most AI-exposed are the low-earning fields where disruption would be painful but at least wouldn’t upend anyone’s financial calculus.
They’re not. The correlation between AI exposure and current earnings is positive (ρ = 0.257). The most exposed fields tend to be among the highest-earning fields for new graduates. Engineering technology, computer science, business — these aren’t marginal career paths. They’re the ones families are counting on to justify the investment.
If the theoretical exposure eventually translates into actual labor market disruption — and that’s a real “if” — it would hit the career pathways with the highest current economic returns. That’s a different kind of problem than the one most people are worrying about when they talk about AI and education.
What I’m not claiming
I want to be careful here, because the internet has a way of turning “here’s an interesting pattern in the data” into “POLITICAL SCIENCE PROFESSOR SAYS YOUR COLLEGE IS DOOMED.”
This framework is a classification heuristic, not a predictive model. It tells you where institutions sit relative to each other on measurable dimensions, not what will happen to them. The AI exposure measure in particular is exploratory — it’s built on reasonable assumptions but hasn’t been validated against actual labor market disruption, because that disruption hasn’t happened yet at scale.
The sensitivity analysis matters. We tested 13 alternative specifications. About 31% of institutions never change quadrant regardless of how you tweak the model. Those positions are robust. But 69% of institutions are sensitive to at least one methodological choice. If your school sits near a quadrant boundary, the classification is exactly that — a classification that depends on assumptions, not a destiny.
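The robustness count itself is straightforward to reproduce: re-run the quadrant assignment under each alternative specification and flag the institutions whose label never moves. A sketch, with the specification list left abstract:

```python
import pandas as pd

def quadrant_stability(df: pd.DataFrame, specs: list) -> pd.Series:
    """Flag institutions whose quadrant is identical under every specification.

    specs: callables that each take the raw dataframe and return a Series of
           quadrant labels indexed by institution (alternative weights,
           winsorization rules, median definitions, and so on).
    """
    labels = pd.DataFrame({i: spec(df) for i, spec in enumerate(specs)})
    return labels.nunique(axis=1) == 1   # True = never changes quadrant

# Share of institutions robust to every tweak (the paper reports about 31%):
# quadrant_stability(institutions, alternative_specs).mean()
```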
I built it as a tool for thinking, not a tool for panicking. The interactive version lets you search for any institution, see exactly where it falls, explore the components driving its position, and compare it to peers. The dataset is downloadable. The replication script reproduces every number.
The whole thing is at kylesaunders.com/university-map — working paper, dataset, replication code, all of it.