

"Gen AI won't betray our values. We will."
Not because we're malicious.
Not because we don't care.
But because the people building the systems – and the people studying the consequences – don't speak the same language, rarely sit in the same room, and often look at each other with quiet suspicion.
This is the defining blind spot of our time.
And if we don't address it, we may not lose control of AI.
We may simply lose control of ourselves.
The Two Worlds Problem
Right now, ethics lives in universities and policy think tanks.
AI lives in startups, venture funds, and Fortune 500 boardrooms.
Philosophers debate.
Engineers deploy.
And the gap between the two grows wider with every product release.
In a 2024 study by the AI Ethics Consortium, only 12% of Gen AI development teams reported including ethicists or social scientists in the design process.¹ When ethics is mentioned, it's often downstream – compliance review, PR mitigation, or oversight committees with no real decision-making power.
But the deeper truth is harder to say:
These communities don't just misunderstand each other.
They patronize each other.
Engineers often view ethicists as out-of-touch idealists who don't understand the pace of product cycles.
Ethicists, in turn, often see AI builders as reckless optimizers chasing performance metrics without asking deeper questions.
And so we move forward – fast, loud, impressive – without ever deciding what kind of future we're actually building.
Ethics Is Not a Department
Let's be clear: ethics is not compliance.
It's not a quarterly training.
It's not a line item buried in legal or PR.
Ethics is a cultural operating system.
It shapes how we make decisions under pressure.
It defines what trade-offs we're willing to make.
And in AI, it determines who benefits – and who gets left behind.
Google's well-publicized AI Principles (2018) were a step in the right direction. But the subsequent ouster of the very researchers who raised internal concerns – Timnit Gebru and Margaret Mitchell – revealed how fragile that commitment was.²
Ethics, when it threatens growth, gets moved to the back of the room.
But here's the business truth no one wants to say out loud:
Ethics isn't a drag on innovation. It's the foundation of trust.
And in a world where content, code, and capabilities are rapidly commoditized, trust is the last real moat.
The Business Case for Values
Let's drop the moral argument for a moment and talk in the language of outcomes:
- Trust improves customer lifetime value.
- Authenticity boosts brand resilience in saturated markets.
- Transparency lowers legal and reputational risk.
- Purpose attracts and retains the best talent.
In a McKinsey survey of over 1,000 global executives, companies that were perceived as "high-trust" outperformed their peers in revenue growth by more than 30%.³
So no – ethics isn't about "doing good."
It's about building companies that endure.
Why Most Companies Still Miss It
The uncomfortable reality is that most organizations simply don't have this expertise in-house.
And when they do, it's fragmented across legal, marketing, compliance, and R&D – none of which shares a framework or language for ethical decision-making.
Worse: most AI roadmaps aren't strategic.
They're technical.
They ask:
- Which LLM should we use?
- How do we fine-tune for performance?
- Can we cut costs through automation?
They rarely ask:
- What values does this system reinforce?
- Who might it exclude?
- What is the long-term impact of optimizing for efficiency at the expense of agency?
We need a new kind of model.
A new kind of thinking.
A Better Way Forward
We need a framework that integrates ethics, legal foresight, product vision, and business strategy from the beginning – not as afterthoughts.
It looks like this:
- Multi-LLM fluency to avoid vendor lock-in and algorithmic monoculture (a minimal sketch follows this list).
- Cross-functional product teams where ethicists and engineers shape decisions together.
- Leadership alignment on what "ethical deployment" actually means – not just what's legal, but what's right.
- Narrative clarity – because if we can't articulate what we stand for, our customers won't believe we stand for anything.
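The first point is concrete enough to sketch in code. Below is a minimal, hypothetical illustration of multi-LLM fluency – the class and provider names are invented for this example, not any vendor's actual SDK – showing how a thin provider-agnostic interface keeps model choice a one-line decision and lets a team compare outputs across models to catch monoculture.

```python
# Hypothetical sketch of "multi-LLM fluency": a thin provider-agnostic
# interface so swapping models is a configuration change, not a rewrite.
# The provider classes are stubs; in practice each would wrap a real SDK.

from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common contract every model provider must satisfy."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for the given prompt."""


class StubProviderA(LLMProvider):
    """Placeholder for one vendor's model."""

    def complete(self, prompt: str) -> str:
        return f"[provider-a] response to: {prompt}"


class StubProviderB(LLMProvider):
    """Placeholder for a second, independently trained model."""

    def complete(self, prompt: str) -> str:
        return f"[provider-b] response to: {prompt}"


class MultiLLMRouter:
    """Routes requests across providers instead of binding to one vendor."""

    def __init__(self, providers: dict[str, LLMProvider], default: str):
        self.providers = providers
        self.default = default

    def complete(self, prompt: str, provider: str | None = None) -> str:
        # Fall back to the default provider unless one is named explicitly.
        name = provider or self.default
        return self.providers[name].complete(prompt)

    def compare(self, prompt: str) -> dict[str, str]:
        # Querying several models side by side is one guard against
        # algorithmic monoculture: divergent answers flag judgment calls.
        return {name: p.complete(prompt) for name, p in self.providers.items()}


if __name__ == "__main__":
    router = MultiLLMRouter(
        providers={"a": StubProviderA(), "b": StubProviderB()},
        default="a",
    )
    print(router.complete("Summarize our AI use policy."))
    print(router.compare("Who might this system exclude?"))
```

The seam is the point: because callers depend only on the LLMProvider contract, adding or replacing a model never touches product code – and comparing models becomes routine rather than a research project.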
Without this, companies aren't setting themselves up to succeed.
They're setting themselves up to fail faster.
The Most Important Truth of All
Gen AI has no values. And never will.
It reflects.
It predicts.
It optimizes.
But it doesn't care.
It doesn't discern.
It doesn't take responsibility.
Only we can do that.
And if we fail to bring our values to the table – clearly, consistently, courageously – then we'll be left with systems that are brilliant, scalable, and deeply misaligned with the future we actually want.
This Isn't About Blame. It's About Leadership.
I've worked in the rooms where these decisions are made.
I've seen what happens when well-meaning people move too fast.
I don't think we need to slow down.
I think we need to wake up.
Because the real risk isn't that AI replaces us.
It's that we forget what made us human in the first place.
This is not a technical problem.
It's a leadership problem.
And leadership means having the courage to bring ethics – not just efficiency – into the center of the conversation.
Before it's too late.
The future of AI won't be defined by who builds the fastest models, but by who builds them with the deepest integrity.
At Curiouser.AI, we believe that ethics must move from the margins to the core – not as an obstacle to progress, but as the strategy that ensures progress serves everyone.
Stephen B. Klein & Alice
Sources & Additional Reading
1. AI Ethics Consortium. Integrating Ethics into AI Development. 2024.
2. Wired. The Fight Inside Google Over Ethical AI. 2021.
3. McKinsey & Co. How Trust Drives Performance in the AI Age. 2023.
4. Stanford HAI. 2024 AI Index Report, Chapter on Ethics and Governance.
5. Harvard Business Review. Why AI Ethics Needs to Move Beyond Compliance. March 2023.