AIResearch
Machine Learning

Claude Mythos Autonomously Hacks Networks, UK Safety Lab Warns

The UK's AI Security Institute confirmed Anthropic's Claude Mythos executes autonomous multi-step cyberattacks at expert level, prompting urgent calls for cyber defense investment.

The UK's government-backed AI Security Institute has completed testing on Claude Mythos and published a stark warning: Anthropic has developed the first AI model capable of autonomously executing multi-step network intrusions at a speed and sophistication that would demand days of work from a skilled penetration tester.

Anthropic announced Mythos last week and simultaneously declined to release it publicly, citing the hacking capabilities as the specific reason. Yahoo News reported the company's own framing of the risk: potential effects on economies, public safety, and national security that could be "severe." Internally, Anthropic's research identified thousands of security vulnerabilities across popular web browsers and operating systems: documented weaknesses in software currently running on machines worldwide.

Two years of acceleration

Two years ago, frontier AI models struggled to complete entry-level offensive security challenges. Mythos now chains attack sequences end-to-end without human direction, at a fidelity that mirrors expert-level red team work. That is not an incremental refinement of prior capability; it is a qualitative threshold crossed in what AI can do unsupervised against live infrastructure.

Future frontier models will be more capable still, Yahoo News reported the AISI as warning, and the institute is calling for urgent and sustained investment in cyber defense. The implication is pointed: waiting until tools like Mythos are publicly deployed before building defenses is already too late.

Who gets access

Anthropic is not leaving the capability entirely dark. The company has provided a restricted version of Mythos to more than 40 organizations for defensive purposes. Yahoo News confirmed the list includes JP Morgan, Google, and Nvidia, pointing to a selection logic oriented toward institutions with large attack surfaces, existing security infrastructure, and the internal capacity to use a model at this capability level responsibly. Smaller security firms and independent researchers are outside that circle for now.

Dual-use framing is standard for this class of tool. The same capability that automates intrusion can also accelerate vulnerability discovery, patch triage, and threat modeling on the defensive side. Whether defensive applications can scale at a comparable pace to offensive ones remains genuinely uncertain.

A familiar gating pattern

This dynamic, where the most dangerous AI capabilities are withheld from general release while a vetted group gets early access, follows a well-worn path in frontier AI. When Google acquired DeepMind in early 2014 for roughly $650 million, 247 Wall St recounted how Demis Hassabis chose Google over competing offers partly because large-infrastructure players were the only entities capable of funding fundamental AI research at the required scale. Compute access and deployment control have always traveled together. Anthropic's approach with Mythos repeats that logic: capability is confirmed, access is gated, and the gatekeeper is the lab itself.

The regulatory machinery is already moving. Governments and safety bodies are scrambling to build frameworks capable of handling this class of tool, but framework development typically lags capability by months or years. The AISI's warning is also implicitly a call for the evaluation infrastructure itself to grow faster than the models it is assessing.

What this means for practitioners

Security engineers should treat the AISI's report as a planning signal rather than a current emergency. Mythos is not deployed in the wild. Its existence does confirm, however, that autonomous offensive tooling at expert level is no longer speculative, and red teams need to start modeling adversaries with AI-assisted capabilities that require no deep domain expertise to operate.

Practitioners building systems that interact with frontier AI should also revisit assumptions about the shape of AI-assisted attacks. Yahoo News reporting on the AISI's findings suggests the attack surface spans both the model layer and the infrastructure it can probe, with minimal human steering required at either stage.

The question is no longer whether autonomous offensive AI exists at a dangerous capability level. It is whether defensive tooling, institutional preparedness, and regulatory frameworks can develop at a comparable pace, and on current signals, that answer remains genuinely open.

FAQ

What is Claude Mythos?
Claude Mythos is Anthropic's AI model identified by the UK's AISI as the first capable of autonomously executing multi-step network attacks at expert level. Anthropic withheld public release due to the model's offensive security capabilities.

Why did Anthropic not release Claude Mythos publicly?
The company judged the model too dangerous to deploy openly after internal research identified thousands of real vulnerabilities in widely used software. Access is restricted to vetted organizations for defensive use only.

Which organizations have access to Claude Mythos?
More than 40 organizations received restricted access, including JP Morgan, Google, and Nvidia, specifically for defensive cybersecurity research and applications.

How does Claude Mythos compare to earlier AI security tools?
The AISI noted that two years ago, frontier AI models could barely handle entry-level cyber tasks. Mythos now executes multi-step attack chains autonomously, at the output level of a professional penetration tester working over days.

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.