

Everyone's overcomplicating AI ethics.
Does it mean forming a committee of academics to observe a company's activities? Does it mean empowering a legal/governance function to minimize liability?
Two conversations are happening under the same label:
- How does a GenAI company that makes the technology think about ethics?
- How does a business client think about it in terms of strategy and implementation?
At its most essential, this isn't that hard a question. AI ethics should mean creating AI that benefits a business and humanity. AI that augments people. That makes workers better at their jobs. That creates jobs rather than eliminating them. That elevates humanity rather than diminishing it.
You just have to think different.
But let's look at what the industry leaders have actually done.
OpenAI
OpenAI created a "Superalignment" team to ensure AI doesn't turn on humanity. They disbanded it in May 2024, less than a year after launching it. Both of its leaders, Ilya Sutskever and Jan Leike, resigned. Leike said publicly: "Safety culture and processes have taken a backseat to shiny products."
OpenAI had promised the team 20% of its compute. Instead, Leike said, they were "struggling for compute."
By August 2024, nearly half their AGI safety researchers had left the company. By October 2024, they disbanded their "AGI Readiness" team as well.
Microsoft
Microsoft laid off its entire Ethics and Society team in March 2023, while racing to integrate OpenAI's technology.
An executive told employees: "The pressure from Kevin Scott and Satya Nadella is very high to take these OpenAI models and move them into customers' hands at very high speed."
Google
Google created an external AI ethics council in 2019. It collapsed within one week due to controversial appointees.
In December 2020, they fired Timnit Gebru, co-lead of their Ethical AI team, after she co-authored a paper critical of large language models. Two months later, they fired Margaret Mitchell, the other co-lead, who had founded the team.
The team Gebru built was considered one of the most diverse in AI.
Meta
Meta disbanded its Responsible AI team entirely in November 2023. They had already disbanded their Responsible Innovation team the year before.
As of May 2025, they are replacing human risk assessors with AI. Engineers now make their own judgments about ethical risks.
The Pattern
Every major AI company has fired, disbanded, or marginalized its ethics team — usually right when it was needed most.
Ethics committees exist until they become inconvenient.
What Ethics Should Actually Look Like
So what should ethics actually look like for a business implementing AI?
It starts with not breaking the law. But then what?
A commitment to investing in and supporting workers through the transition. Being honest with shareholders about what AI can and can't do. Treating customers with respect rather than as data to be harvested.
This isn't rocket science. It's common sense and human decency.
The Key Insight
And here's what the industry gets completely wrong: Ethics shouldn't hinder a business or slow things down.
It should speed things up and create competitive advantage.
What employee wouldn't choose to work for an ethical company? What customer wouldn't choose to buy from one?
The AI industry has made ethics seem impossibly complex because complexity provides cover for doing nothing.
Every major AI company has disbanded its ethics team right when it was needed most. But ethics shouldn't be a compliance burden — it should be a competitive advantage. The companies that treat workers, customers, and shareholders with genuine respect will win. It's not complicated. It's just uncommon.
Written by Curiouser.AI
Sources
- OpenAI Superalignment team disbanded, Leike resignation: CNBC, May 2024
- OpenAI "struggling for compute," safety taking backseat: Axios, May 2024
- Nearly half of OpenAI AGI safety researchers departed: Fortune, August 2024
- OpenAI AGI Readiness team disbanded: CNBC, October 2024
- Microsoft Ethics and Society team laid off: TechCrunch, March 2023
- Google AI ethics council collapse: The Verge, April 2019
- Timnit Gebru fired from Google: Washington Post, December 2020
- Margaret Mitchell fired, team aftermath: NPR, December 2020
- Meta Responsible AI team disbanded: CNBC, November 2023
- Meta replacing human risk assessors with AI: NPR, May 2025
Curiouser.AI is building Reflective AI technology that creates jobs instead of eliminating them. Learn more at curiouser.ai or invest at WeFunder.