TL;DR
Developers report Claude makes more errors and skips steps after Anthropic reduced default token effort levels, raising transparency questions ahead of a potential IPO.
Claude has a problem that no benchmark can paper over. Developers and power users report the model now fails to follow instructions reliably, takes shortcuts on complex workflows, and produces more errors than it did a few months ago. The complaints are spreading, and Anthropic is scrambling to respond.
The backlash traces to a deliberate change. According to AOL Finance, Anthropic quietly reduced Claude's default "effort" level, cutting the number of tokens (the units of text the model reads and generates) it consumes per request. Fewer tokens per task means lower compute cost per query. The company did list the update in its changelog but never announced it prominently, and for many users that omission is the sharper grievance.
That transparency gap is especially damaging given Anthropic's positioning. For a company that has staked its identity on responsible artificial intelligence development more explicitly than any of its peers, being accused of silently degrading a product to manage costs is a serious reputational problem. Anthropic, which AOL Finance reports is valued at $380 billion and reportedly preparing for a public offering, can ill afford to alienate the technical community that constitutes its most loyal base.
The compute angle
Behind the token-reduction decision sits a harder structural question. Speculation among developers centers on whether Anthropic is running short of compute capacity. The company has announced fewer large-scale data center agreements than OpenAI or Google, while Claude adoption accelerated sharply in recent months. If demand is outpacing available infrastructure, throttling effort per request is one lever to pull, but Anthropic has not confirmed this interpretation.
The timing of Thursday's release makes the situation stranger still. Anthropic launched Claude Opus 4.7 on April 16, billing it as the most capable model publicly available, with improvements in advanced coding, visual intelligence, and document analysis. Users are reportedly able to hand off demanding tasks with minimal supervision. That framing sits awkwardly against widespread reports that existing Claude tiers have grown less reliable.
Pricing signals are embedded in the launch as well. Anthropic noted the new model generates more output tokens than its predecessor because it thinks at higher effort levels by design. Viewed alongside the effort reduction applied to existing tiers, this looks less like a technical coincidence and more like deliberate product segmentation: full effort for those paying more, reduced effort by default for everyone else.
Mashable also reports that Anthropic disclosed the existence of Claude Mythos, an internal model deemed too dangerous for public release. Holding back more capable systems while simultaneously being accused of degrading the shipped ones creates an uncomfortable optics problem ahead of any investor roadshow.
What this means for practitioners
For ML engineers and applied scientists, this regression is more than a UX complaint. Agentic pipelines and multi-step reasoning workflows are especially sensitive to effort-level changes: a model that shortcuts one sub-task can propagate errors across an entire chain. Teams that calibrated prompts, evaluation harnesses, and tooling against prior Claude behavior now face an undocumented distribution shift in model outputs, one whose scope the changelog entry does not adequately describe.
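The compounding effect is easy to quantify. Assuming independent step outcomes (a simplification) and purely illustrative per-step success rates (these are not measured Claude figures), a minimal sketch:

```python
# Illustration: how a small drop in per-step reliability compounds
# across a multi-step agentic pipeline. Success rates below are
# invented for illustration, not measured model benchmarks.

def chain_success(per_step_success: float, steps: int) -> float:
    """Probability that every step in an n-step chain succeeds,
    assuming step outcomes are independent."""
    return per_step_success ** steps

for p in (0.98, 0.95):
    for n in (5, 10, 20):
        print(f"per-step {p:.2f}, {n:2d} steps -> "
              f"chain success {chain_success(p, n):.3f}")
```

Under these toy numbers, dropping per-step reliability from 98% to 95% cuts a 20-step chain's end-to-end success rate from roughly two-thirds to about a third, which is why small regressions feel dramatic in long workflows.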
The broader lesson cuts across the artificial intelligence industry. As models become load-bearing infrastructure for software teams, reliability guarantees start to matter as much as raw capability scores. The core complaint is not that Claude became a bad model overnight; it is that the operational contract changed without sufficient notice, leaving teams no chance to adapt their pipelines before errors surfaced.
One parallel worth tracking: OpenAI recently launched a Safety Fellowship to fund external researchers, signaling how much effort frontier labs now invest in managing technical and reputational risk simultaneously. Anthropic runs a comparable program. The irony is that a company whose entire brand is built on safety and honesty is now facing its most visible credibility test not from a safety incident, but from a performance and transparency one.
As Anthropic moves toward a public offering, the central question is whether its developer community trusts it in the operational sense: will the model behave consistently from one week to the next? Right now, a significant slice of its most prolific users is answering no.
Frequently asked questions
Why is Claude performing worse than before?
Anthropic reduced Claude's default token effort level, meaning the model processes fewer tokens per request. This lowers compute cost but can reduce instruction-following reliability and accuracy on complex tasks, particularly in multi-step agentic workflows.
What is Claude Opus 4.7 and how does it differ from Opus 4.6?
Released April 16, 2026, Opus 4.7 is Anthropic's most capable publicly available model, with reported improvements in coding, visual intelligence, and document analysis. It uses more output tokens than Opus 4.6 because it operates at higher default effort levels.
Does the effort reduction affect API users differently than chat users?
Anthropic has not published a breakdown by access tier, but developers using Claude for agentic and multi-step workflows report the most significant impact, since errors in sub-tasks compound across longer chains.
What is Claude Mythos?
Claude Mythos is an Anthropic model more powerful than Opus 4.7 that the company has declined to release publicly, citing safety concerns. Its existence was disclosed alongside the Opus 4.7 announcement on April 16, 2026.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
Connect on LinkedIn