Anthropic has built much of its recent success on the loyalty of developers. Its Claude Code tool, launched in early 2025, has been popular with solo developers and enterprise engineering teams. The runaway success of the tool has helped send the company’s annualized recurring revenue run rate to $30 billion—more than triple its figure at the end of last year. However, the weeks-long performance decline and the lab’s slow response to user complaints, as well as several changes that users argue amount to stealth price hikes, are testing that loyalty.
The controversy could dent Anthropic’s bottom line amid an increasingly bitter race with rival OpenAI. The issues also come at a critical time, with both companies reportedly gearing up for initial public offerings later this year.
Beyond the performance issues with Claude Code, the AI lab has also suffered a series of outages as usage has surged, introduced usage caps during peak hours, and is limiting the rollout of its newest, larger, and more expensive model, Mythos, to a select group of large firms. (Anthropic has said that the model’s cautious rollout is meant to guard against security risks posed by the model’s unprecedented cyber capabilities.)
Anthropic declined to answer CNBC’s questions about the memo. Anthropic has also publicly stated it does not purposely degrade the performance of its Claude models.
In a post published to its engineering blog on Thursday, Anthropic said it had traced the problems to three distinct changes. The first, rolled out on March 4, reduced Claude Code’s default reasoning effort from “high” to “medium” to cut latency—a tradeoff the company now acknowledges was the wrong one. The second change, shipped on March 26, contained a bug that caused the model to repeatedly discard its own reasoning history mid-session, making it appear forgetful and erratic and draining users’ usage limits faster than expected. The third, introduced on April 16, added a system prompt instruction capping the model’s responses at 25 words between tool calls—a change Anthropic said measurably hurt coding quality before it was reverted four days later.
Anthropic noted that all three issues were resolved as of April 20, with the API unaffected throughout. On April 23, the company reset usage limits for all subscribers.
The company acknowledged users’ frustration with the tool, saying: “This isn’t the experience users should expect from Claude Code.” The lab has also promised greater transparency around changes to Claude Code in the future.
Despite Anthropic’s public acknowledgement, some users have taken to social media to express their frustration with the lab’s initial response to users’ concerns about Claude’s performance.
The issues with Claude Code appear to have noticeably degraded the quality of code produced by Anthropic’s tools over the past month, particularly when compared with OpenAI’s offerings.
Dave Kennedy, CEO of cybersecurity firm TrustedSec and a former U.S. Marine Corps intelligence officer, told Forbes his team had measured a 47% drop in Claude’s code quality, tracking defects, security issues, and task completion rates. The risk, Kennedy warned, is that novice developers using Claude won’t catch the flaws, “introducing serious defects” into production code.