TL;DR
At its second Code with Claude conference, Anthropic disclosed 80x annualized growth and introduced Dreaming, a mechanism letting Claude agents self-improve between sessions.
Anthropic planned for 10x annual growth in Q1 2026. It got 80x instead.
That single number, delivered by CEO Dario Amodei at the company's second Code with Claude conference in San Francisco, reframes every infrastructure deal Anthropic has announced in the past month. The $33 billion compute agreement with Amazon, the Microsoft Azure expansion, a Google TPU deal, and a SpaceX Memphis data center lease all looked like aggressive overbuilding against a 10x growth target. At 80x, they look like catch-up. API volume on the Claude platform is up nearly 70 times year-over-year, and the average developer using Claude Code now logs 20 hours per week with the tool.
But the growth disclosure was not the headline. Anthropic used the conference to announce a feature it calls "Dreaming," a capability designed to let agents improve themselves between active sessions.
The Dreaming feature
The name is deliberate, not metaphorical window dressing. According to Forbes, Anthropic is building a mechanism that allows agents to process and refine their own behavior outside of live user interactions, loosely analogous to the consolidation processes that occur in biological memory during sleep. The company's framing positions this as a shift in what artificial intelligence agents fundamentally are: not static tools that respond to prompts, but systems capable of iterative self-modification across sessions.
Details on the technical implementation remain sparse. Anthropic has not specified whether Dreaming involves trajectory-based fine-tuning, structured memory updates, or reflection loops that alter internal representations, though the Claude Code developer focus suggests initial priority is on coding agent workflows. A clearer picture will depend on whether the company publishes documentation beyond the conference announcement.
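To make the candidate mechanisms concrete, here is a minimal sketch of one of them: a structured-memory update run between sessions. Everything in this snippet (the `reflect` function, the event schema, the memory layout) is hypothetical illustration, not Anthropic's API; nothing about Dreaming's real implementation is public.

```python
import json

def reflect(memory: dict, session_log: list[dict]) -> dict:
    """Fold lessons from a finished session into persistent memory.

    Hypothetical sketch: the agent reviews its own session log offline
    and records what to repeat or avoid next time.
    """
    updated = dict(memory)
    for event in session_log:
        if event["outcome"] == "error":
            # Note the action that failed so the next session avoids it.
            updated.setdefault("avoid", []).append(event["action"])
        elif event["outcome"] == "success":
            updated.setdefault("prefer", []).append(event["action"])
    return updated

memory = {}
session = [
    {"action": "run tests before commit", "outcome": "success"},
    {"action": "edit generated file directly", "outcome": "error"},
]
memory = reflect(memory, session)
print(json.dumps(memory, indent=2))
```

The point of the sketch is the shape of the loop, not the contents: the refinement step consumes a completed trajectory and writes durable state, with no live user in the loop.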
The infrastructure math
The 80x growth figure explains why Anthropic would invest in self-improvement at the agent level. When API volume scales 70x in a single year, the economics of human oversight per agent action become untenable. Agents that can refine their own workflows reduce both developer burden and the compute overhead of repetitive correction cycles at scale.
Enterprise adoption data reinforces the pressure. A May 2026 report from AI.cc, drawn from 2.4 billion API calls across 8,000 developers, found enterprise token costs fell 67% year-over-year. Per Cincinnati.com's coverage of the report, teams using multi-model routing strategies achieved median cost reductions of 71% versus single-provider deployments. Even as Claude usage grows, open-source alternatives now account for 38% of enterprise token volume, squeezing the margin case for any single proprietary model.
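The routing savings the report describes come from simple arithmetic: send easy requests to a cheap model and reserve the premium model for hard ones. The sketch below uses hypothetical per-token prices and an invented difficulty score, purely to show the mechanism, not any vendor's actual pricing or router.

```python
# Hypothetical USD prices per 1K tokens, for illustration only.
PRICE_PER_1K = {"premium": 0.015, "budget": 0.002}

def route(difficulty: float) -> str:
    """Send easy requests (difficulty < 0.7) to the budget model."""
    return "premium" if difficulty >= 0.7 else "budget"

def cost(requests: list[tuple[float, int]], router=route) -> float:
    """Total cost for (difficulty, tokens) pairs under a routing policy."""
    return sum(PRICE_PER_1K[router(d)] * toks / 1000 for d, toks in requests)

workload = [(0.2, 4000), (0.9, 4000), (0.5, 4000), (0.4, 4000)]
single_provider = cost(workload, router=lambda d: "premium")
routed = cost(workload)
print(f"savings: {1 - routed / single_provider:.0%}")  # 65% on this toy workload
```

With three of four requests routed to the budget model, the toy workload lands in the same ballpark as the report's median reductions; real savings depend entirely on the difficulty mix and the price gap between models.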
Competitive stakes
The announcement lands during a dense stretch of model releases. OpenAI shipped GPT-5.5 in late April, and CNBC reported that the model carries a "High" risk classification under OpenAI's internal safety framework, while excelling at code, computer use, and autonomous research. The same report flagged Anthropic's Claude Mythos Preview as a model that had drawn significant Wall Street attention earlier in May, before Anthropic limited its rollout over identified security concerns.
Against that backdrop, "Dreaming" is Anthropic staking a claim on a distinct product layer: not a better-scoring model in the traditional benchmark sense, but a different architecture for how agents persist and evolve across time. That is a harder capability for competitors to replicate through a single model release.
What it means for practitioners
For ML engineers building production systems on Claude, the practical implications hinge on implementation details Anthropic has not yet published. The central question is whether self-improvement means something reproducible and auditable, or whether agent behavior will diverge in ways that are difficult to version, test, or roll back. Those are not hypothetical concerns for teams running agents in regulated domains.
LLM-Stats tracks quality shifts across model versions over time, and that kind of monitoring becomes substantially more complex when behavior can change between sessions without a formal version increment. If Dreaming ships without robust observability tooling alongside it, practitioners will be slow to adopt it in production regardless of the headline capability.
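One way teams could impose versioning on behavior that changes without a formal release is to fingerprint the agent against a pinned eval set after every session. This is a speculative sketch, assuming nothing about Dreaming's tooling; `behavior_fingerprint` and the toy agents are stand-ins, not real APIs.

```python
import hashlib
import json

def behavior_fingerprint(run_agent, eval_suite: list[str]) -> str:
    """Deterministic hash of an agent's answers to a fixed eval set.

    If the hash changes between sessions, behavior drifted, even though
    no model version number did.
    """
    answers = [run_agent(prompt) for prompt in eval_suite]
    blob = json.dumps(answers, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

# Toy agents standing in for pre- and post-"dream" behavior.
agent_before = lambda p: p.upper()
agent_after = lambda p: p.upper() + "!"  # drifted between sessions

suite = ["refactor this function", "summarize the diff"]
fp_before = behavior_fingerprint(agent_before, suite)
fp_after = behavior_fingerprint(agent_after, suite)
print("drift detected" if fp_before != fp_after else "no change")
```

A hash is the crudest possible signal; production monitoring would score eval outputs rather than compare them byte-for-byte, but the principle is the same: pin the inputs, snapshot the outputs, and treat any unexplained delta as an unversioned release.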
Whether Anthropic can build self-improvement that is genuinely auditable is the open question. A compelling feature name is the easy part.
---
FAQ
What is Claude's Dreaming feature?
Dreaming is a new Anthropic capability that allows Claude agents to process and refine their own behavior between active user sessions, rather than only responding during live interactions.
How fast is Anthropic growing in 2026?
Anthropic reported 80x annualized growth in Q1 2026 against an internal target of 10x, with API volume up nearly 70 times year-over-year.
How does Claude Dreaming work technically?
Anthropic has not published technical specifics. Possible mechanisms include trajectory-based fine-tuning, structured memory updates, or in-context reflection loops, but the company has not confirmed the implementation.
What was announced at Code with Claude 2026?
Anthropic CEO Dario Amodei disclosed the 80x growth figure and introduced the Dreaming self-improvement feature at the company's second annual Code with Claude developer conference in San Francisco.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
Connect on LinkedIn