AIResearch
Machine Learning

OpenAI's GPT-5.5 Ships with High-Risk Rating and Restricted Cyber Tier

OpenAI's GPT-5.5 targets code, research and computer use, adding a restricted GPT-5.5-Cyber tier for vetted defenders amid growing AI security concerns.


OpenAI shipped GPT-5.5 on April 23, less than two months after GPT-5.4 entered production. The release cadence alone signals something: the company is now cycling major model updates faster than most research labs can complete a single evaluation cycle.

Greg Brockman, OpenAI's president, described the model's defining advance during a press briefing: reduced dependence on precise instruction. The model can look at an unclear problem and determine what needs to happen next without being walked through each step. For teams building agentic pipelines, that framing matters more than benchmark scores, since it suggests the model degrades less severely when prompts are underspecified.

CNBC reported GPT-5.5 is optimized for data analysis, code writing and debugging, software operation, online research and document creation. OpenAI positions it as a foundation for computer-use agents, extending the model beyond text generation into direct interaction with software interfaces.

Risk rating and rollout

OpenAI's internal safety classification placed GPT-5.5 in its "High" cybersecurity risk tier, one level below "Critical." The Critical designation applies to systems that would introduce unprecedented pathways to severe harm; High-tier systems amplify existing attack vectors without opening qualitatively new ones. Mia Glaese, OpenAI's VP of research, confirmed that third-party red-teaming for both cyber and biological risks preceded the release, with safeguard iterations running throughout model development.

Access launched for paid subscribers. LLM Stats tracked the full release family: GPT-5.5, GPT-5.5 Pro, and a lightweight GPT-5.5 Instant that arrived May 5. The three-tier structure mirrors the deployment strategy Google and Anthropic have normalized, trading compute cost against capability within the same model generation.

The cyber variant

Two weeks after the main launch, OpenAI introduced GPT-5.5-Cyber, a restricted build available through its Trusted Access for Cyber (TAC) program. SiliconAngle reported that TAC, launched in February, gives vetted security researchers expanded permissions unavailable in the standard public model.

The capability differences across tiers are substantial. A standard ChatGPT user asking the model to exploit a vulnerable system gets either a refusal or remediation suggestions. TAC participants receive technical descriptions of how an attack would unfold, including sample code, though the model does not verify that the code actually executes. GPT-5.5-Cyber goes further: it can generate an exploitation plan and then validate it by running a simulated attack against the target system, making it directly applicable to automated red-teaming exercises.

International Business Times noted that the UK AI Security Institute ran a benchmark in which GPT-5.5 completed a simulated 32-step corporate cyberattack in two of ten attempts. Anthropic's Mythos Preview cleared the same exercise in three of ten runs. Neither result is alarming in isolation, but both indicate that frontier artificial intelligence systems are crossing thresholds that earlier models required specialized tooling to approach.

The competitive picture

Anthropic's response to similar capability findings was to restrict Mythos Preview's rollout. OpenAI chose controlled expansion instead, keeping safeguards against credential theft and live malware deployment in place while opening access to defenders through the TAC tier. Whether that access-control model holds under sustained adversarial pressure from nation-state actors or criminal groups is a question the field has not yet answered.

Practitioners choosing which frontier provider to build on face a structural instability that goes beyond any single release. OpenAI, Google, and Anthropic are all compressing the interval between major updates. Any review that treated GPT-5.4 as the frontier for code generation is already out of date, and teams building production systems on these APIs face capability jumps that invalidate prior evaluations faster than most organizations can run internal benchmarks.

GPT-5.5-Cyber also logged internal utility before its public debut. According to SiliconAngle, the model contributed to software development inside OpenAI and meaningfully accelerated some server cluster workloads, suggesting deployment in production settings where correctness matters, not only in controlled evaluations.

Where this goes

Separating GPT-5.5-Cyber into a restricted tier signals that OpenAI anticipates the dual-use problem intensifying as model capability increases. Its bet is that tiered access can route offensive research tools to defenders faster than adversaries can obtain equivalent capability from the public model or from open-weight alternatives.

Whether that bet pays off depends on evidence not yet available: whether GPT-5.5-Cyber's attack-validation capability translates into measurable improvements in real-world defender response times. If it does, controlled expansion earns its justification. If it does not, this is another case where capability outran the infrastructure needed to deploy it responsibly.

FAQ

What is GPT-5.5 and what can it do?
GPT-5.5 is OpenAI's latest large language model, released April 23, 2026. It is optimized for code writing and debugging, data analysis, online research, software operation and document creation, with an emphasis on acting on ambiguous instructions without step-by-step guidance.

What is GPT-5.5-Cyber and who can access it?
GPT-5.5-Cyber is a restricted variant released May 8, 2026, available only to vetted members of OpenAI's Trusted Access for Cyber (TAC) program. It can validate exploitation plans by running simulated attacks, a capability withheld from standard users.

How does GPT-5.5 compare to Anthropic's Mythos Preview on cybersecurity benchmarks?
The UK AI Security Institute found GPT-5.5 completed a 32-step simulated corporate cyberattack in 2 of 10 test runs. Anthropic's Mythos Preview completed the same benchmark in 3 of 10 runs, a marginal difference that nonetheless reflects the rapid advance of frontier AI on offensive security tasks.

Is GPT-5.5 available to free ChatGPT users?
No. The initial rollout of GPT-5.5 is limited to OpenAI's paid subscribers, with a Pro tier and a lightweight Instant variant also available to higher-tier accounts.

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
