

Imagine a quiet cemetery at dusk.
Fog clings to the ground. The trees are bare. And at the center stands a simple headstone:
GenAI 1.0
2022–2026
R.I.P.
This isn't a provocation. It's a diagnosis.
Because something important has ended in AI — not the technology itself, but the first dominant idea of what it was supposed to be.
The Data No One Wants to Talk About
For the first time since tracking began in 2022, enterprise AI adoption is declining.
The U.S. Census Bureau surveys roughly 1.2 million American firms every two weeks. According to the latest data, AI adoption among large enterprises peaked at around 14% in mid-2025 and has since fallen to approximately 11%.
That reversal matters. Enterprises don't quietly reverse course unless something fundamental has changed.
At the same time, the MIT NANDA Initiative reports that 95% of enterprise GenAI pilots delivered no measurable impact on P&L. Companies spent an estimated $30–40 billion, ran the pilots, measured the results, and then stopped.
This wasn't a lack of experimentation. It was a lack of payoff.
The Consumer Story Isn't Any Better
On the consumer side, the numbers look different but tell the same story.
ChatGPT reportedly has ~800 million weekly users, yet only ~40 million paying subscribers. That's roughly a 5% conversion rate.
People love using AI, especially when it's free. Many people, myself included, do pay.
But the broader market never embraced AI as a $20/month product positioned as a replacement for thinking — at anything close to mass scale.
Two very different markets. Two very different failure modes. The same underlying disease.
The Mistake That Defined GenAI 1.0
GenAI 1.0 made a familiar, almost inevitable mistake.
We treated AI like every other major technology that came before it.
• The automobile replaced the horse
• The tractor replaced farm labor
• The spreadsheet replaced the accounting ledger
So, we assumed AI would replace:
• The knowledge worker
• The writer
• The coder
• The thinker
We asked the same question humanity always asks of new tools:
What can this replace?
And the market answered, slowly at first, then decisively:
Not much. Not at scale. Not profitably.
Why the Automation Thesis Is Dying
This is the part that gets misunderstood.
The automation thesis is not dying because the technology doesn't work. It does work.
It's dying because replacing human judgment turns out to be something people don't want to buy — at least not in the ways it was sold.
Judgment carries accountability. Context carries risk. And organizations don't want systems that remove humans from responsibility while quietly introducing new failure modes.
What seemed fast and efficient in demos turned brittle in reality.
Enter AI 2.0
AI 2.0 begins with a different question.
Not: What can this replace?
But: What can this make possible?
This shift changes everything.
AI 2.0 is not about substitution. It's about partnership.
• Not automation, but augmentation
• Not replacing people, but elevating them
• Not doing your thinking for you, but helping you think better than you ever could alone
This is not philosophical fluff. It's operational reality.
The 5% That Actually Works
Here's the most important detail buried in the MIT data:
The 5% of AI deployments that succeed share a common trait.
They are deeply integrated into human workflows, with humans remaining accountable for:
• Judgment
• Context
• Interpretation
• Final decisions
AI is not the authority. Humans are.
AI raises the ceiling of human performance instead of trying — and failing — to replace it.
Two Questions, Two Futures
GenAI 1.0 asked:
How do we use AI to need fewer people?
AI 2.0 asks:
How do we use AI to make our people extraordinary?
That difference is not subtle.
It is the difference between:
• A technology that stalls after a hype cycle
• And one that quietly reshapes how work, creativity, and intelligence evolve
The Clouds Aren't Where You Think They Are
The dark clouds aren't gathering over AI itself.
They're gathering over an outdated idea of what AI is for.
GenAI 1.0 had its moment. It taught us what doesn't scale, what doesn't sell, and what people quietly reject.
That's not failure.
That's learning.
And on the other side of that headstone is something far more interesting — an AI that doesn't replace human intelligence, but finally treats it as something worth amplifying.
Written by Stephen Klein, Founder/CEO of Curiouser.AI
Sources
¹ U.S. Census Bureau BTOS, Apollo analysis, September 2025
² MIT NANDA Initiative, The GenAI Divide, July 2025
³ Deutsche Bank analysis, October 2025
Stephen Klein is Founder/CEO of Curiouser.AI — building AI to amplify human intelligence, not replace it. He teaches at Berkeley and is writing a book with Georgetown on post-automation strategy. Curiouser is community-funded on WeFunder.