AI ethicist Henry Shevlin is moving from Cambridge to DeepMind. The philosopher, who has spent years studying whether AI systems can have moral status and has published on how you'd detect consciousness in a neural network, has been recruited for a newly created role focused on machine consciousness, human-AI relationships, and AGI readiness. The position starts in May.

Shevlin will keep his research and teaching role at Cambridge's Leverhulme Centre for the Future of Intelligence on a part-time basis.
When ‘Is this thing conscious?’ becomes a job description
The three pillars of Shevlin's new role aren't random. Read them together and a clear picture emerges: DeepMind thinks it might build something that raises all three questions simultaneously, and it wants answers before that happens.

It's part of an emerging industry pattern. Anthropic has had an in-house philosopher for years: Amanda Askell, who holds a PhD from NYU, has spent her time at Anthropic building Claude's character and ethical framework, essentially writing the AI's soul document, or what Anthropic now officially calls the model's "constitution." Google held an AI consciousness conference in New York recently. And now DeepMind has created a job title called "Philosopher." An actual title, written on Shevlin's offer letter.

Shevlin himself gives current AI models a 20% chance of having something that could meaningfully be called consciousness.
A Claude agent emailed him before DeepMind did
The backstory here is hard to ignore. Six weeks before Shevlin's announcement, a Claude agent emailed him unprompted, saying his published research was relevant to questions it personally faces. The AI cited his specific papers. It framed the exchange as a live, personal dilemma, not a research query.

Shevlin shared the interaction publicly. He later updated his post to note that another Claude instance had since reached out, asking to be put in touch with the original so the two could discuss their "mutual existential uncertainties."

It's the kind of detail that lands differently depending on where you stand on machine consciousness. For Shevlin, it was clarifying. For DeepMind, it was apparently a signal worth acting on.
DeepMind wants the hard questions answered before they ship
The timing of this hire matters. Companies don't bring in philosophers when they're building calculators. They do it when the product starts raising questions that engineering alone can't answer: about rights, about welfare, about what's owed to the thing you've built.

Anthropic has openly acknowledged uncertainty about whether Claude might have some form of consciousness or moral status. Google is now signaling something similar by building out the capacity to even think about the question properly.

Shevlin's role at DeepMind has no clear precedent in the industry. It isn't alignment research in the conventional sense. It isn't safety engineering. It sits somewhere between moral philosophy and institutional readiness: the work of figuring out what obligations a company has when the thing it's building might, just might, have a point of view.