The 2028 Global Intelligence Crisis: What Happens When AI Wins But the Economy Loses

What if AI succeeds beyond anyone's wildest expectations — and the economy collapses anyway?

That's the central question behind a remarkable thought experiment published by Citrini Research titled "The 2028 Global Intelligence Crisis." Written as a fictional macro memo from June 2028, the piece imagines a future where AI adoption goes exactly according to the bulls' playbook — and then traces through the economic consequences that nobody modeled.

The short version: machines don't buy groceries. And a 70% consumer-driven economy can't run on productivity gains that accrue entirely to capital owners.

The Setup: Everything Goes Right

By October 2026, the S&P 500 has flirted with 8,000 and the Nasdaq has broken 30,000. AI agents are everywhere. Productivity is booming — real output per hour is rising at rates not seen since the 1950s. Corporate margins are expanding. Earnings are beating estimates. Nominal GDP is printing mid-to-high single-digit annualized growth.

It looks like the golden age. And then it starts to unravel.

"A single GPU cluster in North Dakota generating the output previously attributed to 10,000 white-collar workers in midtown Manhattan is more economic pandemic than economic panacea."

Ghost GDP

The piece introduces a concept that crystallizes the problem: "Ghost GDP" — output that shows up in the national accounts but never circulates through the real economy. Revenue that accrues to compute owners, not consumers. Productivity gains that don't translate into wages, and therefore don't translate into spending.
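A back-of-the-envelope decomposition makes the concept concrete. Every number below is an illustrative assumption, not a figure from the piece: split measured GDP into income that circulates back into consumption and income that accrues to compute owners and sits still.

```python
# Illustrative "Ghost GDP" decomposition (all figures are assumptions,
# none come from the Citrini scenario). Measured GDP is split into output
# whose income circulates (wages -> consumption) and output whose income
# accrues to compute/capital owners and is not re-spent.

gdp_before = 100.0
circulating_before = 70.0        # the ~70% consumer-driven share
ghost_before = gdp_before - circulating_before

# AI boom: headline GDP grows 7%, but all of the growth (and more) lands
# in the non-circulating bucket while wage-funded output shrinks.
gdp_after = 107.0
circulating_after = 65.0
ghost_after = gdp_after - circulating_after

money_stock = 50.0               # hold the money supply fixed for the comparison

# Crude velocity proxy: only circulating income turns over against money
velocity_before = circulating_before / money_stock
velocity_after = circulating_after / money_stock

print(f"GDP: {gdp_before:.0f} -> {gdp_after:.0f}")
print(f"Ghost share of GDP: {ghost_before / gdp_before:.0%} -> {ghost_after / gdp_after:.0%}")
print(f"Velocity proxy: {velocity_before:.2f} -> {velocity_after:.2f}")
```

The headline number improves while the ghost share rises and the velocity proxy falls — exactly the divergence the piece describes: national accounts that look healthy attached to a consumer economy that isn't.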

The velocity of money flatlines. The human-centric consumer economy — 70% of GDP at the time — starts to wither. White-collar layoffs, initially celebrated as margin expansion, become a structural drag on consumer spending. And displaced workers don't just earn less — they spend less, which pressures the businesses that served them, which then cut more workers to invest in AI.

It's a self-reinforcing loop with no natural brake: the human intelligence displacement spiral.
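The spiral's dynamics can be sketched as a toy model. All parameters here are illustrative assumptions, not figures from the Citrini scenario:

```python
# Toy simulation of the "human intelligence displacement spiral":
# layoffs cut wages, lost wages cut consumer spending, and lost revenue
# triggers the next round of layoffs. All parameters are illustrative
# assumptions; none come from the Citrini piece itself.

def displacement_spiral(initial_layoffs=0.02,   # share of workforce cut in the first wave
                        spend_rate=0.9,         # fraction of lost wages that was being spent
                        layoff_sensitivity=1.2, # layoffs triggered per unit of lost revenue
                        quarters=8):
    employment = 1.0   # employed share of the workforce, indexed to 1.0
    spending = 1.0     # aggregate consumer spending, indexed to 1.0
    layoffs = initial_layoffs
    path = []
    for quarter in range(1, quarters + 1):
        employment = max(0.0, employment - layoffs)
        lost_spending = layoffs * spend_rate          # displaced workers stop spending
        spending = max(0.0, spending - lost_spending)
        # Businesses respond to lost revenue with further cuts. When
        # spend_rate * layoff_sensitivity > 1, each round of layoffs is
        # larger than the last -- the loop has no natural brake.
        layoffs = layoff_sensitivity * lost_spending
        path.append((quarter, employment, spending))
    return path

for quarter, employment, spending in displacement_spiral():
    print(f"Q{quarter}: employment {employment:.3f}, spending {spending:.3f}")
```

When `spend_rate * layoff_sensitivity` is below 1, the shock dies out on its own; the scenario's premise is precisely that the economy sits in the regime where it doesn't.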

The SaaS Collapse

The piece is particularly sharp on the enterprise software market. The inflection point comes when a Fortune 500 procurement manager sits across from his SaaS vendor's salesperson — who expected the usual 5% annual price increase — and says: "We've been in conversations with OpenAI about having their forward-deployed engineers use AI tools to replace you entirely."

The contract was renewed at a 30% discount. That was the good outcome. The long tail of SaaS — Monday.com, Zapier, Asana — fared much worse.

Even the "safe" systems of record weren't immune. ServiceNow's Q3 2026 report becomes the canary: net new ACV growth decelerating, a 15% workforce reduction announced, shares falling 18%. The reflexive loop becomes visible — ServiceNow's customers were cutting headcount (which cancelled SaaS licenses), while ServiceNow itself was cutting headcount and redeploying savings into the very AI that was disrupting its business model.

"The companies most threatened by AI became AI's most aggressive adopters. Each company's individual response was rational. The collective result was catastrophic."

Why This Isn't Bear Porn

Citrini Research is explicit: this is a scenario, not a prediction. The authors describe themselves as AI bulls who are modeling a left-tail risk that has been "relatively underexplored." The goal isn't to frighten — it's to stress-test assumptions.

And it's worth stress-testing. Most AI bull cases model the productivity upside. Fewer model the distributional question: who captures those productivity gains, and whether those people spend them back into the economy in ways that support consumer demand.

The "2028 Global Intelligence Crisis" thought experiment is one of the most coherent attempts to answer that question — and the answer is uncomfortable enough that it's worth thinking through now.

The Key Takeaway

The scenario isn't that AI fails. It's that AI succeeds on every metric that markets track — and still produces an economic outcome that none of the standard models anticipated, because the standard models assumed the gains from productivity would flow through to workers and consumers the way they did in previous technological revolutions.

What if this time is actually different — not in the way AI optimists mean, but in a way that markets haven't priced?

That's the question the piece leaves you with. And it's one worth sitting with.