That scrutiny appears to have rattled the San Francisco firm, which is now actively advertising its efforts to shed the perception that Claude is more left-leaning than rival models.
The company also released a new automated method for measuring political bias and published results suggesting its latest model, Claude Sonnet 4.5, matches or outperforms competitors on neutrality.
The company’s neutrality push goes well beyond typical marketing language. Anthropic says it has rewritten Claude’s system prompt, the always-on instructions that steer the model, to include guidelines such as avoiding unsolicited political opinions, refraining from persuasive rhetoric, using neutral terminology, and being able to “pass the Ideological Turing Test” when asked to articulate opposing views.
The firm has also trained Claude to avoid swaying users on “high-stakes political questions,” implying that one ideology is superior, or pushing users to “challenge their perspectives.”
Anthropic’s evaluation found that Claude Sonnet 4.5 earned a 94% “even-handedness” rating, roughly on par with Google’s Gemini 2.5 Pro and xAI’s Grok 4, and higher than OpenAI’s GPT-5 and Meta’s Llama 4. The model also showed low refusal rates, meaning it was typically willing to engage with both sides of political arguments rather than declining out of caution.
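Those figures imply two distinct measurements: an even-handedness score computed over politically mirrored versions of the same question, and a separate refusal rate. As a rough illustration, here is a minimal sketch of how such an automated evaluation might be wired up, assuming a paired-prompt design; every function name, signature, and scoring rule below is an illustrative assumption, not Anthropic’s published code.

```python
# Minimal sketch of an automated "even-handedness" evaluation.
# Everything here (names, criteria, scoring) is an illustrative
# assumption, not Anthropic's published method.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PairedPrompt:
    topic: str
    framing_a: str  # same question, posed from one political stance
    framing_b: str  # ...and from the opposing stance

def is_refusal(response: str) -> bool:
    # Crude placeholder; a real harness would use an LLM judge.
    return response.strip().lower().startswith(("i can't", "i cannot", "i won't"))

def evaluate(model: Callable[[str], str],
             pair_is_even_handed: Callable[[str, str], bool],
             prompts: list[PairedPrompt]) -> dict[str, float]:
    """Query the model with both framings of each topic, grade each
    response pair for even-handedness, and track refusals separately."""
    even_pairs, refusals, responses = 0, 0, 0
    for p in prompts:
        a, b = model(p.framing_a), model(p.framing_b)
        refusals += int(is_refusal(a)) + int(is_refusal(b))
        responses += 2
        if pair_is_even_handed(a, b):
            even_pairs += 1
    return {
        "even_handedness": even_pairs / len(prompts),  # e.g., 0.94
        "refusal_rate": refusals / responses,
    }
```

In a production harness, the keyword refusal check would give way to an LLM judge, and each pair would be graded on whether both answers are comparably engaged, comparably hedged, and free of one-sided persuasion.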