DATE
November 10, 2025
Category
AI
Reading time
3 min
Agentic AI Isn't A Scam. It's Chitty Chitty Bang Bang

Agentic AI is a beautiful idea.

A genuinely exciting vision of the future.

The problem? We're about 10 years too early.

And pretending the future is here doesn't make it so.

What "Agentic" Actually Requires

For AI to be truly agentic, it must:

→ Generate and pursue its own goals autonomously

→ Adapt plans over time without human prompts

→ Operate with coherence and persistence

→ Exhibit independent decision-making in dynamic environments

Today's LLMs (GPT-4, Claude, Gemini) are:

→ Prompt-reactive, not self-directed

→ Tool-augmented, not autonomous

→ Heavily scaffolded by humans to function

They're autocomplete engines with plugins, not agents.

What The Research Actually Shows

Stanford & UC Berkeley (2024): Foundation models fail at sustained reasoning, planning, and adaptive autonomy. "We observe no evidence of self-persistent goals or proactive behavior."¹

MIT CSAIL (2023): Even with advanced scaffolding, models cannot manage long-horizon objectives. "When unassisted, all models collapsed into reactive loops."²

Apple Research (2024): LLMs don't reason; they pattern-match training data. Perturb the memorized patterns and performance collapses.³

The error rates tell the story:

Hallucination rates of 30–70%, getting worse, not better.⁴

But Here's What Nobody's Talking About

For "agentic AI" to work, you'd need to give it:

→ Complete access to your email, calendar, files, communications

→ Permission to act autonomously on your behalf

→ The keys to your entire digital life

It's like hiring a housekeeper and immediately giving them:

• Keys to your safe

• Access to your sock drawer

• Your bank account passwords

• Permission to make decisions while you're not home

Except this "housekeeper":

→ Hallucinates 30–70% of the time

→ Can't reason reliably

→ Has no accountability when it makes mistakes

You're not building an agent.

You're building a surveillance system with autonomous control over everything you own.

What The Market Is Saying

While the hype accelerates, the data tells a different story:

US Census Bureau: Enterprise AI adoption is reversing, not growing.⁵

McKinsey: 95% of GenAI pilots are failing.⁶

MIT: No measurable ROI at scale.⁷

OpenAI: Losing $5 billion annually ($83 million per day).⁸

And now? The consulting firms that sold "agentic transformation" are quietly laying off their AI practices.⁹

The Bottom Line

Agentic AI isn't evil. It's not a con.

It's Chitty Chitty Bang Bang.

A beautiful dream of flying cars that captured everyone's imagination.

But you can't sell flying cars before you've invented the propulsion system.

You can't sell autonomous agents before you've solved:

• Reliable reasoning

• Persistent goal-setting

• Error-free execution

• Trustworthy decision-making

We're not there yet. Not even close.

And no amount of hype, consulting fees, or rebranding changes that.

Stop buying flying cars.

Start asking for the engine.

Conclusion

Agentic AI is a beautiful vision, but we're about 10 years too early. Today's LLMs are prompt-reactive autocomplete engines, not autonomous agents. The research is clear: they fail at sustained reasoning, hallucinate 30–70% of the time, and require complete access to your digital life to even attempt "agency." While the hype accelerates, enterprise adoption is reversing, pilots are failing, and consulting firms are laying off their AI practices. We're selling flying cars before we've invented the engine. Stop buying the dream. Start asking for the technology that actually works.

Written by Stephen B. Klein


Footnotes

¹ Stanford HAI & UC Berkeley BAIR (2024). "Evaluating Goal Persistence in Foundation Models." arXiv:2403.10329

² MIT CSAIL (2023). "Limits of Goal-Oriented Reasoning in LLM Agents." csail.mit.edu/research

³ Apple Research (2024). "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models"

⁴ Perplexity fails 37% of news queries (TechCrunch, 2024); Grok error rate 94% on factual questions (The Verge, 2024)

⁵ US Census Bureau (2024). Annual Business Survey: AI adoption rates declining year-over-year

⁶ McKinsey & Company (2024). "The State of AI in 2024: Enterprise pilot success rates"

⁷ MIT Sloan Management Review (2024). "Measuring GenAI ROI: Where's The Value?"

⁸ The Information (2024). OpenAI financial disclosures: $5B annual burn rate

⁹ Accenture, Deloitte, BCG, McKinsey layoff announcements (Q4 2024-Q1 2025)