Users Report Claude Performance Decline — Anthropic Denies Intentional Degradation but Admits Setting Changes

A growing number of Claude users have been reporting noticeable degradation in the AI assistant's response quality — with complaints ranging from shorter, less detailed answers to increased refusals and less nuanced reasoning. Anthropic's staff have publicly denied any intentional model degradation, but have acknowledged that changes to default settings and system prompts may be responsible for the behavior some users are experiencing. The controversy, reported by VentureBeat, has sparked significant debate in AI developer communities.
What Users Are Reporting
Complaints on Reddit, Hacker News, X, and developer forums describe a Claude that feels "dumbed down" compared with earlier versions: shorter responses, refusals of tasks it previously handled without issue, excessive caveats, and reduced capability on complex coding and reasoning tasks. Power users who rely on Claude for professional workflows have been among the most vocal critics, with some reporting that the model performs materially worse on their own private benchmarks even as Anthropic's published benchmark results remain unchanged.
Anthropic's Response
Anthropic employees, including researchers who post publicly, have denied that the underlying model weights have been changed or that any intentional capability reduction has occurred. However, they have acknowledged that changes to system prompts, default context settings, and output formatting defaults could account for the behavior differences users are observing. This is a meaningful distinction: the model itself may be unchanged, but how it is prompted and configured by default can significantly affect perceived quality.
The System Prompt and Default Settings Problem
The gap between model capability and user-perceived performance is a recurring challenge for AI companies. When Anthropic adjusts default system prompts — for safety, liability, or product experience reasons — users who interact with Claude through claude.ai or API defaults may see different behavior without any model update having occurred. This creates a frustrating experience: the model is technically unchanged, but its outputs are meaningfully different in ways that matter to users. Transparency about these defaults has been an ongoing criticism of major AI providers.
Timing and the Broader Pattern
The complaints arrive at a sensitive moment for Anthropic, ahead of the anticipated release of Claude Opus 4.7 and as the company shifts to consumption-based enterprise pricing. If users perceive current Claude versions as declining in quality, that complicates both the upsell to Opus 4.7 and the argument that the new pricing model is justified. Nor is the pattern of perceived AI capability regression unique to Anthropic: OpenAI faced a nearly identical controversy in 2024, when users claimed GPT-4 had been "nerfed," a complaint OpenAI itself later partially confirmed.
The Bottom Line
Whether Claude has genuinely degraded or users are simply feeling the effects of changed defaults, the perception problem is real and it matters. For a company whose brand is built on being the thoughtful, high-quality AI alternative, reports of performance decline are damaging even if they are technically inaccurate. Anthropic needs to be far more transparent about when and how it changes default settings and system prompts, or it will face renewed erosion of user trust every time behavior shifts.