TL;DR
OpenAI's GPT-5.5 arrives roughly six weeks after GPT-5.4 with stronger coding and autonomous task handling, but carries an explicit "High" cybersecurity risk classification.
Less than two months after GPT-5.4 shipped, OpenAI on April 23 released GPT-5.5, a model the company says requires less user guidance to handle complex, open-ended tasks. The cadence alone signals something: the artificial intelligence arms race has compressed release cycles to the point where major model updates are arriving roughly every six weeks.
Greg Brockman, OpenAI's president, framed the launch around autonomy. At a briefing with reporters, he described a model that can examine an ambiguous problem and determine on its own what needs to happen next. That framing, more than any benchmark sheet, captures what OpenAI is actually selling right now.
GPT-5.5's stated strengths cluster around professional knowledge work: analyzing datasets, writing and debugging code, operating software interfaces, web research, and generating structured documents. Both GPT-5.5 and a companion Pro tier launched simultaneously on April 23, according to llm-stats.com, which tracks release dates across providers.
The safety picture
OpenAI's internal risk classification system deserves close reading here. According to CNBC, GPT-5.5 does not cross the company's "Critical" cybersecurity threshold, which covers capabilities that could open entirely new pathways to severe harm. It does, however, meet the criteria for "High" risk, the tier below, defined as the ability to amplify existing attack pathways. That distinction is not cosmetic: a "Critical" designation would likely have attracted regulatory attention under frameworks such as the Artificial Intelligence Act, while "High" gives OpenAI room to deploy broadly while still acknowledging real risk on the record.
Mia Glaese, OpenAI's vice president of research, said the model went through extensive third-party red teaming on cyber and biosecurity threats and that the company had been iterating on safeguards for months as successive models grew more capable in that domain.
The context for the High rating is Anthropic's Mythos. PBS NewsHour reported that Anthropic's model is capable enough at identifying exploitable software flaws that the company restricted its rollout to roughly 40 partner organizations for adversarial testing only. Mythos autonomously performs the kind of work a skilled security researcher does across a full workday, precisely the capability that has put both tech executives and government officials on edge in recent weeks.
The competitive frame
OpenAI is clearly responding to that pressure. CNBC noted the company is racing against Google and Anthropic, whose Mythos Preview has drawn significant Wall Street interest. Releasing GPT-5.5 within weeks of the Mythos announcement keeps OpenAI in the "most capable" conversation at a moment when losing that narrative has real commercial consequences.
Meanwhile, the broader release landscape shows the proprietary tier is not the only one moving fast. llm-stats.com data shows that on the same April 23 date, DeepSeek shipped two new variants, and Alibaba's Qwen team had released a 27-billion-parameter model two days prior. The gap between closed and open-weight systems continues to narrow.
Brockman's autonomy framing is the central capability question for anyone building on top of these models. The practical threshold for agentic AI is whether a model can take an underspecified goal and produce useful output without a human engineering a detailed prompt chain. GPT-5.5's ability to operate software and run open-ended research pipelines moves toward that threshold, though how far it actually moves cannot be answered from a press briefing alone.
The High risk classification should not be read as automatic reassurance. OpenAI's risk tiers are self-reported, and the company has commercial incentives to stay below the Critical label. What independent review processes exist to validate these internal assessments remains largely opaque to the public. The red-teaming disclosure and the explicit acknowledgment of amplified attack pathways are more transparency than is typical in this industry, but it is still the vendor grading its own homework.
GPT-5.5 is rolling out to paid subscribers now. The verdict on the autonomy claims will come from practitioners deploying it on real, messy tasks, not from the launch briefing. Watch what engineers report back.
---
FAQ
What is GPT-5.5 and what makes it different from GPT-5.4?
GPT-5.5 is OpenAI's model released April 23, 2026, roughly six weeks after GPT-5.4. The primary claimed advance is greater autonomy on underspecified tasks, alongside stronger coding, computer use, and research capabilities. A Pro variant launched at the same time.
What does OpenAI's High cybersecurity risk rating actually mean?
OpenAI uses an internal tiered system. High means the model can amplify existing attack pathways but does not create entirely new ones. The more restrictive Critical tier was not triggered. Both tiers require safeguard testing before deployment.
Is GPT-5.5 available to free users?
No. OpenAI is rolling it out to paid subscribers. No timeline for free-tier access has been announced.
How does GPT-5.5 compare to Anthropic's Mythos?
Mythos has not been publicly released. Anthropic limited access to 40 organizations for vulnerability testing due to the model's advanced ability to find exploitable software flaws. GPT-5.5 is a general deployment with a High rather than Critical risk classification.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
Connect on LinkedIn