Anthropic, the high-flying AI company, is facing a backlash from some of its most prolific users over a perceived decline in the performance of its Claude AI models.
The issues have left the company—recently valued at $380 billion and reportedly en route to an IPO—scrambling to respond to a user revolt and online speculation about its motives and its ability to serve its newest wave of customers.
Over the past few years, Anthropic has gained significant ground in the AI race, emerging as a leader in enterprise AI and building up substantial goodwill among developers and enterprise users. But if the anger around Claude’s performance issues persists, it risks eroding some of that goodwill and could cause the company to stumble at a critical moment.
To resolve some of the user issues, Cherny said, the company will test “defaulting Teams and Enterprise users to high effort, to benefit from extended thinking even if it comes at the cost of additional tokens and latency” going forward.
He also pushed back on speculation that the model had been purposely watered down, as well as on complaints from users that the change was rolled out without transparency. The changes, he said, were made in response to user feedback and were flagged to users via a pop-up within the Claude Code interface.
Most of the user complaints center on Claude Code, Anthropic’s AI-powered coding tool, which has become one of the company’s most popular and fastest-growing products.
Launched in early 2025, Claude Code operates as a command-line agent that can read, write, and execute code autonomously within a developer’s environment. Since its debut, it has been widely adopted by individual developers and large enterprise engineering teams who rely on it for complex, multistep coding tasks.
In her analysis, she found that from late February into early March, Claude moved from a “research-first” approach—reading multiple files and gathering context before making changes—to a more direct “edit-first” style. The model reads less context before acting, makes more mistakes, and requires significantly more user intervention, according to the analysis. The analysis also points to a rise in behaviors like stopping too early, avoiding responsibility, or asking unnecessary permission, which it links to a reduction in “thinking” depth over the same period.
“Claude has regressed to the point [that] it cannot be trusted to perform complex engineering,” she wrote.
In a comment responding to the analysis, Anthropic’s Cherny said the analysis was likely misreading at least part of the data, claiming that the model’s reasoning hadn’t been reduced but that Anthropic had made a change so that the model’s full “reasoning trace” is no longer visible to the user.
But Laurenzo is far from the only person having issues with the tool.