DATE
February 20, 2026
Category
AI
Reading time
5 min
One of the Most Dominant Thinkers on AI Is Not in Silicon Valley.

He's in Rome.

The dominant narrative around artificial intelligence is being shaped by a small number of actors: technology companies, investors, consultants, and policymakers struggling to keep pace with systems evolving faster than our collective judgment.

Speed is celebrated. Scale is rewarded. And moral reflection is often treated as a delay.

Which is why it surprises many people that one of the most coherent, disciplined, and sustained ethical voices on artificial intelligence today comes not from Silicon Valley, but from Rome.

Pope Leo XIV, the first American pope, has emerged as a serious moral authority in the global AI conversation. Not as a technologist. Not as a regulator. But as something increasingly rare in modern innovation culture: a moral thinker focused on human consequence.

Not Anti-Technology. Not Anti-Innovation.

There is a persistent — and often convenient — misunderstanding that ethical caution implies resistance to progress.

It does not.

Pope Leo XIV is not anti-technology. He is not anti-innovation. He is anti-thoughtlessness.

That distinction matters.

His concern is not that artificial intelligence will become too powerful or too intelligent. It is that we, as its creators and beneficiaries, will become careless — ceding judgment, responsibility, and moral agency in exchange for efficiency and convenience.

This is not a theological objection. It is a human one.

The Rome Call for AI Ethics

The Vatican's engagement with AI ethics did not begin with Pope Leo XIV — but he has inherited and now carries forward a framework that was unusually prescient.

In 2020, the Vatican convened an unlikely coalition of technology companies, international organizations, governments, and academic institutions. The result was the Rome Call for AI Ethics, a formal framework intended to guide the development and deployment of artificial intelligence around human-centered principles.

The Rome Call was co-signed by major technology companies such as Microsoft and IBM, alongside United Nations agencies and state actors. Its aim was not to halt innovation, but to shape it — to insist that AI systems be designed in service of human dignity and the common good.

The framework articulated six foundational principles:

Transparency — AI systems should be explainable

Inclusion — technology should benefit everyone

Responsibility — human accountability must remain intact

Impartiality & fairness — bias and discrimination must be actively avoided

Reliability — systems must work as intended

Security & privacy — human rights and data protection are non-negotiable

What's striking is how closely these principles align with later regulatory efforts, including the EU AI Act. The Vatican was not reacting late. It was thinking early.

Under Pope Leo XIV, this ethical posture has not been abandoned or softened. It has become more urgent.

A Deeper Warning: Intelligence Is Not Wisdom

In 2025, the Vatican released a major doctrinal document examining the relationship between artificial intelligence and human intelligence. The text addressed AI's implications for work, education, healthcare, inequality, and warfare — but its deeper concern was philosophical.

Artificial intelligence, the document argued, should never be confused with human intelligence.

Why?

Because AI does not possess moral judgment. It does not carry responsibility. It does not bear consequence.

Those remain human burdens — and human privileges.

The central risk is not runaway superintelligence. It is the slow erosion of human responsibility as decisions are optimized, automated, and abstracted away from the people they affect.

Against the Culture of Optimization

Modern technology culture often equates optimization with progress.

Faster is better. Cheaper is smarter. Scalable is superior.

But optimization is a tool, not a value system.

The Vatican's critique — now carried forward by Pope Leo XIV — challenges a worldview in which efficiency becomes the highest good and human complexity becomes friction. In such a world, ethics are framed as constraints rather than foundations.

This is the heart of the argument, and its relevance to AI could not be more timely.

Ethics cannot be bolted on after deployment. Responsibility cannot be delegated to machines. And the future cannot be automated into existence.

Why This Perspective Matters Now

As artificial intelligence moves from experimentation to infrastructure — embedded in education, healthcare, labor markets, governance, and warfare — the question is no longer whether AI will shape society.

It already is.

The real question is whether we are shaping AI with intention, or merely reacting to its momentum.

Pope Leo XIV's contribution to the AI conversation is not technical. It is directional. It reminds us that innovation without reflection does not lead to progress. It leads to drift.

And history is unforgiving to societies that confuse capability with wisdom.

A Final Thought

In an industry dominated by hype cycles, fear narratives, and financial abstraction, this voice from Rome offers something unexpectedly modern: restraint, responsibility, and moral clarity.

Not because it rejects technology. But because it refuses to abandon thoughtfulness.

And in an age of increasingly powerful machines, that may be the most radical position of all.

Conclusion

The most coherent ethical voice in the global AI conversation isn't in Silicon Valley — it's in Rome. Pope Leo XIV isn't anti-technology. He's anti-thoughtlessness. His warning isn't about superintelligence. It's about the slow erosion of human responsibility as decisions get optimized, automated, and abstracted away from the people they affect. Ethics can't be bolted on after deployment. That's the most radical position of our era.

Written by Stephen Klein, Founder/CEO of Curiouser.AI


Stephen Klein is Founder & CEO of Curiouser.AI, the only AI designed to augment human intelligence. He also teaches at UC Berkeley. To learn more or sign up, visit curiouser.ai. Curiouser is community-funded on WeFunder.