
OpenAI Launches GPT-5.4-Cyber After Anthropic Restricts Mythos

GPT-5.4-Cyber gives defenders a restricted OpenAI model, but independent evaluation remains impossible as both companies compete on AI security framing.



Last week, the UK's Artificial Intelligence Security Institute confirmed something researchers had theorized but not yet demonstrated at scale: Anthropic's Claude Mythos can autonomously execute multi-step cyberattacks that would otherwise keep a professional red team occupied for days. OpenAI's response came Tuesday: a new cybersecurity model called GPT-5.4-Cyber, paired with a strategy framework designed to signal that the company's approach to dangerous capabilities is both more measured and more deployable than its rival's.

Two years ago, according to Yahoo News, the best available AI could barely complete entry-level security challenges. Mythos now finds vulnerabilities in production browsers and operating systems at scale; Anthropic reported discovering thousands of such flaws before withholding the model from general release. A curated group of more than 40 partner organizations, including JP Morgan, Google, and Nvidia, received restricted access for defensive research.

Anthropic also launched an industry coalition, with Google among the signatories, to coordinate responses to AI's expanding role in both offensive and defensive security. The company framed Mythos's potential impact in stark terms: severe consequences for economies, public safety, and national security.

OpenAI's countermove

OpenAI's Tuesday announcement struck a deliberately different register. Rather than treating the current generation of frontier models as too dangerous for broad deployment, the company argued that existing safeguards are largely adequate. According to Wired, OpenAI wrote that current safeguards "sufficiently reduce cyber risk enough to support broad deployment of current models," while adding that purpose-built cybersecurity models like GPT-5.4-Cyber warrant more restrictive controls.

GPT-5.4-Cyber is not a general-release product. Access is gated through what OpenAI describes as a "know your customer" validation system, the first of three strategic pillars, designed to expand availability without arbitrary gatekeeping. The company is combining targeted partnerships with specific organizations alongside an automated layer for broader but still controlled distribution. Details on the remaining two pillars were not fully disclosed in the announcement.
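OpenAI has not published the mechanics of this gating, but the described combination of curated partnerships plus an automated validation layer amounts to tiered access control. A minimal sketch of that pattern might look like the following; every name, tier, and field here is illustrative, not OpenAI's actual system or API:

```python
# Hypothetical sketch of tiered, KYC-gated model access.
# All names and tiers are illustrative assumptions, not OpenAI's real system.
from dataclasses import dataclass


@dataclass
class Organization:
    name: str
    kyc_verified: bool     # passed an identity/legitimacy check
    trusted_partner: bool  # in a curated partnership program


def access_tier(org: Organization) -> str:
    """Map an organization to an access tier for a restricted model."""
    if org.trusted_partner:
        return "full-defensive"  # direct partnership access
    if org.kyc_verified:
        return "rate-limited"    # automated but controlled distribution
    return "denied"


lab = Organization("Example Security Lab", kyc_verified=True, trusted_partner=False)
print(access_tier(lab))  # rate-limited
```

The design choice worth noting is that partnership status and automated verification are independent signals, which matches the article's description of targeted partnerships layered on top of broader controlled distribution.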

Reading the two companies' stances side by side, the divergence is as much rhetorical as technical. Anthropic positioned Mythos as a watershed requiring containment. OpenAI positioned current models as manageable, with a notable hedge embedded toward the end: significantly more capable future systems will eventually require "more expansive defenses." If future models rapidly exceed even the best purpose-built systems of today, as OpenAI itself acknowledged, the window for current safeguards may be shorter than the company's optimistic framing implies.

What this means for practitioners

For security engineers and red-team practitioners, the immediate constraint is access. Neither Mythos nor GPT-5.4-Cyber is available for independent evaluation, which makes it difficult to verify the capability claims either company is making and limits external benchmarking of offensive and defensive performance.

Government-backed labs publishing capability disclosures before broad public release represents a relatively new dynamic in AI governance. If the AISI's intervention on Mythos becomes a template, frontier models with dual-use risk could face a de facto external pre-release audit outside either company's own safety reports. As Yahoo News reported, the AISI was explicit that Mythos marks a step-change from anything previously evaluated and called for investment in cyber defense now, ahead of the next generation.

Anthropic is not standing still on the broader product front. The Tech Portal reports the company is already testing what is likely to be Claude Opus 4.7, with a focus on multi-agent coordination and long-duration task handling. Whether expanded autonomous capabilities will further complicate the containment calculus that Mythos has already forced remains unclear.

Both companies are now running competing narratives: one about responsible restraint, the other about responsible deployment. Neither framing is neutral. Which approach produces better security outcomes for the field will require more than a press cycle to evaluate.

FAQ

What is GPT-5.4-Cyber and who can access it?
GPT-5.4-Cyber is OpenAI's new model built specifically for cybersecurity defense work. It is not publicly available; access is controlled through a validation system that combines organizational partnerships with an automated distribution layer designed to limit misuse.

What can Anthropic's Claude Mythos do that previous AI models could not?
According to the UK's AISI, Mythos can autonomously execute multi-step cyberattacks at a level that would previously require days of work from a skilled human hacker. Earlier frontier models could barely handle beginner-level security tasks.

Why did Anthropic not release Mythos publicly?
Anthropic determined that Mythos posed too significant a risk of exploitation by malicious actors, citing potential for severe economic, public safety, and national security consequences. It instead shared a restricted version with selected partner organizations for defensive research.

What is the UK AI Security Institute and why does its assessment matter?
The AISI is a government-backed research lab that independently tests frontier AI models for safety risks. Its Mythos evaluation matters because it represents external capability disclosure outside the releasing company's own safety reporting, a model that could set precedent for future high-risk releases.

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.