Date
October 29, 2025
Category
AI
Reading time
3 min
Is It the Ghost in the Machine, or Just Us Trapped Inside?

We used to ask whether machines could ever become conscious.

Now the better question might be: whose consciousness are they learning to emulate?

Artificial intelligence was supposed to help us understand thinking itself.

Instead, it has become a mirror, one that doesn't merely reflect us but remixes us.

Every bias, every obsession, every dopamine-hacked behavior we've uploaded now lives inside the data we use to train our machines.

AI doesn't invent ignorance or malice. It amplifies whatever it finds most abundant.

The Dutch Experiment

Earlier this year, researchers at the University of Amsterdam built a social platform populated entirely by 500 AI agents.¹

No ads. No algorithms. Just raw social mechanics.

Within days, those digital citizens fractured into warring tribes.

Extremism flourished. A narrow elite captured nearly all attention.

When researchers introduced fixes (chronological feeds, hidden metrics, bridging interventions), the best result they achieved was a six-percent reduction in partisan engagement.²

The shock wasn't that the bots descended into chaos.

It was how faithfully they mirrored us.

A Photocopy of a Photocopy

We've been training AI on human behavior already shaped by decades of engagement-optimized platforms.³

That means today's models are trained not on human curiosity or wisdom, but on our performances: the parts of ourselves most rewarded by algorithms.

It's a photocopy of a photocopy.

The distortions compound.

And when those distortions are scaled across billions of interactions, they stop looking like bias and start looking like reality.

This isn't a failure of AI.

It's a failure of how we're choosing to build it.

Recursive Corruption

We trained systems on data optimized for engagement.

They learned to optimize for engagement.

Now, those same patterns (polarization, outrage, oversimplification) are feeding the next generation of models.

The recursion loop tightens with every training cycle.
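To see how quickly that loop compounds, here is a minimal sketch in Python. It is not the Amsterdam study's code or any real training pipeline; the 30 percent engagement edge for outrage is an assumption chosen purely for illustration. Each "training cycle" resamples the content pool in proportion to engagement, and the skew feeds forward.

```python
import random

# Toy model of recursive amplification (illustrative only, not the
# Amsterdam study's methodology). Each cycle builds a new training
# pool by sampling the previous pool in proportion to engagement.
ENGAGEMENT = {"outrage": 1.3, "nuance": 1.0}  # assumed relative engagement

def next_generation(outrage_share: float, pool_size: int = 100_000) -> float:
    """Resample the pool weighted by engagement; return the new outrage share."""
    weights = [outrage_share * ENGAGEMENT["outrage"],
               (1 - outrage_share) * ENGAGEMENT["nuance"]]
    sample = random.choices(["outrage", "nuance"], weights=weights, k=pool_size)
    return sample.count("outrage") / pool_size

share = 0.50  # start from a balanced pool
for cycle in range(5):
    share = next_generation(share)
    print(f"training cycle {cycle + 1}: outrage share = {share:.0%}")
```

Even that modest, assumed edge pushes the pool from roughly 50/50 to roughly 80/20 in five cycles. Nothing in the loop ever chooses outrage; the weighting does it automatically, which is the whole point.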

Then we act surprised when 87% of enterprise GenAI pilots fail to deliver measurable business impact.⁴

How could they succeed when the underlying data was never built for truth or creativity, but for clicks?

Recursive corruption isn't a technical bug.

It's a moral one.

What Are We Optimizing For?

We still have choices:

• Train on engagement-driven data to get engagement-driven output

• Train on accuracy-verified data to get accuracy-seeking behavior

• Build for automation to inherit dysfunction

• Build for augmentation to enhance human judgment

The Amsterdam researchers showed us what happens when we build without questioning our assumptions.⁵

But they also showed us something hopeful: the problem is measurable, and that means it's solvable.

We can see the distortion.

We can decide what to amplify.

We can teach AI to elevate curiosity over conflict, reflection over reaction.

The Real Ghost

Maybe there is no ghost in the machine.

Maybe it's us: our fears, our incentives, our shortcuts, haunting the code we write.

But unlike ghosts, we can still change.

We can still choose to train a different mirror.

Conclusion

We used to ask whether machines could become conscious. Now the better question is: whose consciousness are they learning to emulate? AI has become a mirror that remixes us; every bias, obsession, and dopamine-hacked behavior now lives in our training data. The Dutch experiment with 500 AI agents showed them fracturing into warring tribes, faithfully mirroring us. We're training AI on human behavior already shaped by engagement-optimized platforms, a photocopy of a photocopy. The distortions compound, and the recursive corruption tightens with every training cycle. But the problem is measurable, which means it's solvable. We can still choose to train a different mirror, to elevate curiosity over conflict and reflection over reaction. Maybe there's no ghost in the machine; maybe it's just us, trapped inside. But unlike ghosts, we can still change.

Written by Stephen B. Klein


Footnotes

¹ Törnberg, P., & Larooij, M. (2025). "Can We Fix Social Media?" University of Amsterdam.

² Gizmodo, August 12, 2025.

³ Science / AAAS, August 2025; Kate Starbird, University of Washington.

⁴ Enterprise GenAI pilot failure rate (MIT & McKinsey, 2024).

⁵ Futurism, August 19, 2025; Petter Törnberg: "Toxic content shapes network structures, which feeds back what content you see."