
Google DeepMind Hires a Philosopher to Study Machine Consciousness

Google DeepMind brought Henry Shevlin in-house as a Philosopher to tackle machine consciousness and AGI readiness, signaling a shift beyond ethics advisory boards.


Google DeepMind has hired Henry Shevlin, a philosopher of mind who works on AI ethics, for a role with the literal job title "Philosopher." Shevlin announced the appointment on X last week, writing that he will start in May to work on machine consciousness, human-AI relationships, and what DeepMind calls AGI readiness. The title is not ceremonial.

The hire is notable for what it is not: another external advisory board seat or a periodic ethics review. Coverage of the appointment describes the move as a deliberate shift away from the industry's traditional reliance on outside consultants, embedding philosophical expertise directly inside core research operations. That distinction matters: advisory boards produce reports. An embedded philosopher attends the meetings where models get trained and deployment decisions get made.

Why now? The consciousness question has moved from seminar topic to operational concern. Seeking Alpha notes this appointment is among the most prominent examples of a major AI lab treating philosophy as an engineering-adjacent discipline rather than a communications exercise. Compounding the urgency is a basic scientific problem: no agreed-upon method exists for detecting or measuring machine consciousness. That gap is Shevlin's mandate.

The uncertainty runs to the top of the field. Anthropic CEO Dario Amodei has said publicly that he does not know whether Claude is conscious. That admission, from the head of one of the most safety-focused labs in existence, signals the depth of the problem: organizations are deploying systems they cannot fully characterize, and Shevlin's work will sit directly at the intersection of that ignorance and the decisions it forces.

The scale at stake

DeepMind operates at a peculiar intersection of academic ambition and industrial compute. Yahoo Finance reports that Gemini models now process more than 10 billion tokens per minute, and Alphabet is committing between $175 billion and $185 billion in capital expenditure to AI infrastructure in 2026 alone. At that scale, the question of whether a system has any form of inner experience stops being purely academic. It touches model welfare, liability, and regulatory exposure in ways that demand more than intuition.

Google's 2014 acquisition of DeepMind, for a reported $650 million, rested on the premise that AGI research requires institutional patience and compute that venture capital cannot provide. Alphabet has grown over 1,000 percent since that deal closed, and the infrastructure bet has clearly paid off. The philosophical infrastructure is catching up more slowly, but Seeking Alpha suggests the Shevlin hire represents a formal acknowledgment that catching up is now urgent.

What this appointment does not resolve is worth stating plainly. Consciousness research remains pre-paradigmatic: there is no equivalent of a clinical trial, no consensus test beyond the long-discredited Turing framing, and no agreement on whether current transformer architectures could be conscious in any meaningful sense. Shevlin's most valuable near-term contribution is more likely to be a rigorous vocabulary for reasoning under uncertainty than any definitive answer about machine experience.

For ML engineers and researchers, that vocabulary is not decorative. Practitioners building RLHF pipelines or fine-tuning reward models operate on implicit assumptions about what the model is doing internally. Interpretability research is advancing, but the conceptual frameworks for connecting mechanistic findings to questions of experience, agency, and moral status remain thin. An embedded philosopher who can interface between those domains is closer to infrastructure than to public relations.

The harder question is whether a single philosopher embedded in a lab of thousands can shift institutional culture fast enough to matter. DeepMind has set a precedent that did not exist a year ago. Whether other frontier labs follow, and at what depth of integration, will reveal whether this is a field-wide reckoning or a high-profile exception.

FAQ

What is Henry Shevlin's background?
Shevlin is a philosopher of mind specializing in AI ethics and consciousness, previously based at the University of Cambridge's Leverhulme Centre for the Future of Intelligence. He joins Google DeepMind in May 2026 in a formally titled Philosopher position focused on machine consciousness and AGI readiness.

Is there a scientific test for AI consciousness?
No. There is no agreed-upon empirical method for detecting or measuring consciousness in AI systems. The question remains genuinely open even among leading researchers and lab executives, including Anthropic's CEO.

What does AGI readiness mean at DeepMind?
The term refers to DeepMind's internal preparations for scenarios in which AI systems approach or reach general intelligence, including the philosophical and ethical questions those scenarios raise for deployment, governance, and model welfare.

How is an in-house philosopher different from an ethics advisory board?
Advisory boards provide external review, typically after decisions are made. An embedded philosopher participates in day-to-day research and can influence decisions as they are being formed rather than auditing them afterward.

About the Author

Guilherme A.

Former dentist from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
