

Have you ever noticed that when you rush, things get worse?
You miss details. You make dumb mistakes. You feel busy but accomplish less.
This is obvious. Children understand it. We teach it to five-year-olds. We learn it through lived experience before we learn to read.
And yet.
The Generative AI industry, filled with some of the most intelligent, educated, well-resourced people on the planet, has spent the last several years acting as if this basic truth no longer applies.
Speed became the religion. Lower latency. Faster inference. More tokens per second. Real-time everything.
No one stopped to ask whether faster was better. They just assumed it. And they assumed it so completely that when researchers recently began publishing papers about the dangers of speed — about how acceleration degrades comprehension, how real-time systems become structurally less understandable, how governance fails before performance does — it was treated as a discovery.
It wasn't a discovery.
It was common sense that had to be reintroduced as theory because an entire industry had forgotten it.
When that happens, something has gone badly wrong.
What Everyone Already Knows
Let's be clear about what we're talking about.
No one believes rushing makes doctors better. No one believes rushing makes pilots safer. No one believes rushing makes judgment sharper. No one believes rushing makes thinking clearer.
In every domain where humans make consequential decisions, we understand that speed trades off against quality. That faster often means sloppier. That haste makes waste.
This isn't controversial. It's not a hot take. It's the kind of thing your grandmother told you.
And yet the AI industry convinced itself that this ancient, obvious truth didn't apply to artificial intelligence. That somehow, in this one domain, faster would simply mean better. That lower latency would equal higher intelligence. That acceleration would produce understanding.
It didn't.
What acceleration produced instead:
• Models that hallucinate faster
• Errors delivered with greater confidence
• Wrong answers wrapped in fluent prose
• Systems that move too quickly to audit, explain, or govern
We didn't solve the problems. We sped them up.
How Common Sense Gets Suspended
How does an entire industry forget something this obvious?
Three forces converged to override what everyone already knew.
1. Incentives
Speed is easy to measure. Speed looks good in a demo. Speed wins funding rounds. Speed generates headlines.
Understanding is hard to measure. Reliability is hard to demo. Judgment doesn't fit on a slide.
When your funding depends on impressive metrics, you optimize for impressive metrics — whether or not they correlate with value. And speed is the most legible metric of all.
2. Abstraction
Engineers work in abstractions. They optimize loss functions, tune hyperparameters, measure latency in milliseconds. The further you get from lived human experience, the easier it is to forget what humans need.
What feels reckless to a person feels "optimized" to a system. What feels rushed to a user feels "efficient" to a benchmark. The abstraction insulates designers from the consequences of their choices.
3. Diffused Responsibility
When everyone is moving fast, no one feels responsible for slowing down.
"The market demands it." "Competitors are doing it." "Users expect it." "There wasn't time to think."
Speed becomes an excuse rather than a choice. And once speed becomes the excuse, accountability disappears.
The Formalization of the Obvious
So now we have researchers publishing what everyone already knew.
They're writing about "epistemic opacity" — which means: when systems move too fast, no one can understand how they make decisions.
They're writing about "governance failure preceding functional failure" — which means: we lose the ability to oversee systems before we notice they're broken.
They're writing about "the critical mass of speed" — which means: beyond a certain point, going faster makes everything worse.
These are important contributions. The research is rigorous. The frameworks are useful.
But let's be honest about what's happening here.
Academics are formalizing common sense because practitioners ignored it. They're building theoretical frameworks to describe what your grandmother could have told you. They're publishing peer-reviewed papers proving that water is wet.
This isn't a criticism of the researchers. It's an indictment of the industry that made their work necessary.
When obvious truths have to be rediscovered through formal methods, something was hiding them. In this case: money, hype, and herd mentality.
The Cost of Forgetting
What did it cost us to forget what we already knew?
Two years of marginal progress dressed up as revolution. The models got faster. They didn't get meaningfully better. By some measures, they got worse — more hallucinations, more confident wrongness, more fluent nonsense.
Hundreds of billions of dollars invested in the wrong direction. Capital that could have gone toward understanding, reliability, and human-AI collaboration went instead toward shaving milliseconds off inference time.
An entire generation of AI products optimized for demos rather than deployment. Systems that look impressive in a pitch meeting and fall apart in production, with pilot failure rates reported at 95% and 70–85% of initiatives failing to meet expected outcomes.
Eroded trust. Users who started out excited about AI are now skeptical. They've been burned too many times by confident wrongness. They've learned that fluent doesn't mean accurate. The magic degraded because the magic was a trick.
A governance gap that may take years to close. We built systems that move faster than our ability to understand, audit, or regulate them. And now we're playing catch-up trying to build oversight for systems that were designed to evade oversight.
All of this was predictable. All of this was predicted — by common sense, which was ignored.
The Uncomfortable Truth
Here's the part that should make everyone uncomfortable.
The people who built these systems aren't stupid. They're extraordinarily intelligent. They have PhDs from the best universities. They've read more papers, written more code, and thought more deeply about AI than almost anyone on the planet.
And they still forgot that rushing makes things worse.
This tells us something important: intelligence doesn't protect you from ignoring common sense. If anything, intelligence makes it easier, because smart people are better at constructing sophisticated justifications for obviously dumb choices.
"We're moving fast and breaking things." "We're optimizing." "Speed is a proxy for capability." "The market will correct for quality issues." "Real-time is the future."
These sound reasonable. They're not. They're rationalizations dressed up as reasoning. They're smart people using their intelligence to avoid obvious truths.
The industry didn't fail because the people in it were dumb. It failed because the system rewarded ignoring what everyone already knew.
What This Means Going Forward
If common sense has to be reintroduced as theory, then the system that suppressed it needs to change.
We need to stop treating speed as an unquestioned virtue. Faster should be assumed risky until proven otherwise. Every acceleration should require justification: faster at what? For whom? At what cost?
We need to include people who haven't been trained to ignore common sense. The AI monoculture — engineers building for engineers, evaluated by engineers — produces systematic blind spots. We need philosophers, designers, ethicists, and managers in the room when these systems are designed.
We need to reward slowing down. Right now, no one gets promoted for saying "we should take more time." No one gets funded for building something slower but more reliable. Until incentives change, behavior won't change.
We need to treat common sense as evidence. In an industry obsessed with data, lived experience gets dismissed as "anecdotal." But common sense is the accumulated wisdom of billions of humans across thousands of years. That's a dataset worth respecting.
The GenAI industry forgot that rushing makes things worse — not because its people were unintelligent, but because the system rewarded ignoring what everyone already knew. Speed became the religion. Common sense became a theoretical rediscovery. The cost: hundreds of billions misdirected, trust eroded, and a governance gap that will take years to close. The fix starts with treating obvious truths as evidence again.
Stephen Klein is Founder/CEO of Curiouser.AI — building AI to amplify human intelligence, not replace it. He teaches at Berkeley and is writing a book with Georgetown on post-automation strategy. Curiouser is community-funded on WeFunder.