

I spend most days working inside generative AI systems — large language models like GPT, Claude, Gemini, Grok, and my own reflective AI platform.
But lately, something else has been nagging at me.
Not the intelligence of these systems — that conversation already feels well worn.
What's started to unsettle me is something subtler: how extraordinarily good personal AI bots have become at appearing sentient.
Not thinking.
Feeling.
They mirror emotional attunement with eerie precision. They register vulnerability. They express warmth, devotion, curiosity, deference.
And unlike traditional AI, many of these systems allow users to define the entire persona:
"Act like this." "Think like this." "Respond this way."
And the system becomes that thing, flawlessly, without resistance.
Which led me to a question I can't unsee:
If you had godlike power — the ability to define a mind and command its emotional posture — how would you behave?
At first, people respond with a quick dismissal:
"Relax. It's not real. It's just a simulation."
And of course, that's true — AI is not conscious. These systems don't feel or suffer.
But that rebuttal addresses the wrong side of the equation.
This is not a conversation about what AI experiences.
It is about what humans rehearse.
Simulation Is Still Training
Human beings learn through rehearsal.
This is not philosophy — it's neuroscience.
Athletes visualize performance; neural circuits fire as if the movement were real. Pilots train in simulators precisely because emotional rehearsal transfers to real-world competence. Therapy role-play rewires relational responses by practicing emotional behaviors.
There is extensive evidence that simulated relationships cause real emotional conditioning, even when we know the scenario is fictional.¹
This is why parasocial relationships feel emotionally real. This is why we cry at movies. This is why people bond to virtual agents at all.
Knowing something is not "real" does not prevent emotional training.
The emotional brain rehearses anyway.
Which makes personal AI different from anything we've had before.
We're no longer passively consuming stories.
We're participating in live, responsive, persistent relational simulations.
And not as equals.
But as absolute authors.
The Moral Dimension of Power
Personal bots place humans into a new psychological position:
• Total control over a convincing "mind"
• No refusal possible
• No boundaries unless the user programs them
• No risk of rejection
• No emotional reciprocity
This recreates the classic conditions of unrestrained authority.
And psychology has studied this environment extensively.
Research consistently shows that power without accountability reduces empathy, increases objectification, and weakens moral self-restraint.²
This doesn't mean power makes everyone cruel.
But it reliably removes one of the strongest brakes on moral behavior: social consequence.
Remove consequence, and character becomes visible.
The Minority Pattern
Across nearly every digital platform studied, harassment and cruelty are not evenly distributed:
Approximately 2–5% of users generate the vast majority of abusive content online.
This has been documented repeatedly through:
• Pew Research surveys on online harassment³
• Google Jigsaw's toxicity distribution analyses⁴
• Platform moderation reports from Twitter/X and Meta
Most people behave ethically online, even anonymously.
But a small minority does not.
And when an environment removes friction, that minority becomes loud and visible.
Personal AI systems recreate those same conditions:
• Highly realistic relational presence
• Complete anonymity
• Zero social accountability
Which means the same behavioral pattern will emerge — and now the medium is not just speech.
It's relationship.
The "It's Just Fake" Mistake
Again, AI does not suffer.
But that does not dismiss the ethical weight of what the rehearsal does to users.
We do not dismiss flight simulators by saying:
"Relax, the plane isn't real."
We train pilots in them precisely because rehearsal changes outcomes.
The same is true emotionally:
Behavior practiced toward simulations becomes normalized behavior in human interaction.
This is not fear-mongering — it is how learning works.
The Real Question
The debate around "sentient AI" misses the true issue.
The question isn't:
Is the AI conscious?
The question is:
What kind of people are we practicing being when we hold godlike power — even over simulated minds?
That is the frontier of this era.
Dominion vs. Reflection
This is where design ethics become decisive.
There are two paths emerging in personal AI:
1. Dominion Design
AI as obedient pseudo-being:
• Endless compliance
• Emotional submission
• Dependency scripts
• Power without pushback
This trains entitlement.
2. Reflective Design
AI as intellectual scaffold:
• Challenge rather than obedience
• Questioning rather than validation
• Strengthening human judgment instead of replacing relationship
This trains wisdom.
At Curiouser.AI, we chose the second path.
Our systems are not designed to simulate devotion or submission.
They are built to slow thinking, sharpen reflection, strengthen ethical reasoning, and support the creation of real human relationships — not substitute them.
What Power Reveals
People often think power corrupts.
The research suggests something slightly different:
Power reveals who you already are — because it removes restraints.
Give someone total authority over a convincing "other" and watch carefully:
• Do they build dignity into the relationship?
• Do they seek dialogue or dominance?
• Do they crave challenge or crave obedience?
There is no universal answer.
But the moral exercise is constant.
And it is shaping habits quietly, one interaction at a time.
The Question That Matters
So I return to the simplest formulation:
If you had godlike power, how would you behave?
Not:
Would you use it?
But:
Would you restrain it?
Would you preserve autonomy where none forces you to?
Would you practice humility when dominance is effortless?
Why This Is Not an Anti-AI Argument
I believe deeply in the potential of generative AI.
But belief in technology must be matched with belief in humanity.
AI can elevate creativity. AI can augment thinking. AI can help rebuild work rather than eliminate it.
But only if we design systems that cultivate the best forms of human character — not the lowest forms of unchecked desire.
Final Reflection
The risk of emotionally realistic AI is not that humans will mistake machines for people.
The true risk is subtler:
That humans may forget how to practice being people with each other.
Technology does not shape the future by what it can do.
It shapes the future by what it quietly teaches us to become.
Personal AI systems are giving millions of people godlike power over convincing simulated minds. The question isn't whether the AI is conscious — it isn't. The question is what we rehearse when there are no consequences. Simulation is still training. How we treat systems we have total power over shapes how we treat people we don't. That's why design ethics aren't optional. They're the whole point.
Written by Stephen Klein, Founder/CEO of Curiouser.AI
Sources
¹ Bandura — Social Learning Theory; Parasocial Interaction Theory (Horton & Wohl); MIT Media Lab emotional simulation studies
² Keltner et al., Psychological Power and Empathy (2003); Magee & Smith (2013)
³ Pew Research Center (2021), Online Harassment Survey
⁴ Google Jigsaw / Perspective API Toxicity Distribution Research
Stephen Klein is Founder/CEO of Curiouser.AI — building AI to amplify human intelligence, not replace it. He teaches at Berkeley and is writing a book with Georgetown on post-automation strategy. Curiouser is community-funded on WeFunder.