ChatGPT Uninstalls Surged 295% After Pentagon Deal as Users Flee to Claude

[Image: Smartphone showing app being deleted with Pentagon building in background]

ChatGPT app uninstalls surged by 295% in the days following OpenAI’s announcement that it had signed a contract with the Pentagon, according to app analytics data. In an ironic twist, Anthropic — the company the Defense Department just labeled a “supply chain risk” for refusing military use — is seeing its Claude app downloads hit record highs.

The numbers tell a clear story: consumers don’t want their AI chatbot working for the military. And they’re voting with the uninstall button.

The Uninstall Surge

The 295% spike in ChatGPT uninstalls began almost immediately after OpenAI confirmed its Pentagon partnership — the same deal that came together after the Department of Defense’s $200 million contract with Anthropic collapsed over acceptable use restrictions.

OpenAI had previously maintained a policy against military applications, but quietly revised its usage policies in early 2024 to allow military and government work. When the company actually signed the Pentagon contract, the backlash was swift and measurable.

Social media lit up with users announcing they were deleting ChatGPT and switching to alternatives, with Claude being the most commonly cited replacement. The #DeleteChatGPT hashtag trended on X for multiple days.

Claude’s Consumer Growth Surge

While ChatGPT was hemorrhaging users, Anthropic’s Claude was picking them up. TechCrunch reported that Claude’s consumer growth continues to surge in the wake of the Pentagon deal debacle — with the app seeing significant increases in both downloads and daily active users.

The timing creates a fascinating paradox. The Pentagon labeled Anthropic a “supply chain risk” because the company insisted on maintaining ethical guardrails around its AI — essentially refusing to let the military use Claude for anything it deemed inappropriate. That principled stance cost Anthropic a $200 million government contract. But it’s now winning the company something potentially more valuable: consumer trust.

OpenAI Won the Contract, Anthropic Won the Users

This is the great irony of the AI military debate. OpenAI embraced Pentagon work and immediately started losing consumer users. Anthropic refused Pentagon work and immediately started gaining them.

The market is sending a clear signal: in the age of AI, ethics isn’t just a nice-to-have — it’s a competitive advantage. Users want to know that the AI they’re talking to every day isn’t also being used for military surveillance or autonomous weapons systems.

Consider what happened:

  • Anthropic said no to unrestricted military use → Pentagon labels them a “supply chain risk” → Consumer growth surges
  • OpenAI said yes to military contracts → Pentagon signs deal → Uninstalls surge 295%

The Pentagon’s attempt to punish Anthropic for having ethics may have inadvertently handed them the consumer market.

The Bigger Picture: AI’s Trust Crisis

These numbers reflect a deeper anxiety about AI that goes beyond any single company. People are increasingly aware that the AI tools they use daily are being developed by companies making consequential decisions about military applications, surveillance, and government use.

When OpenAI was just a chatbot that helped you write emails and debug code, nobody cared about its corporate partnerships. But once that same AI is potentially involved in military operations, the relationship changes. Users start asking: Am I comfortable using a tool that’s also being deployed for national defense purposes I might disagree with?

For a significant number of people, the answer is no.

Can OpenAI Recover?

A 295% surge in uninstalls sounds catastrophic, but context matters. ChatGPT still has a massive user base — over 300 million weekly active users as of late 2025. The uninstall spike, while dramatic in percentage terms, likely represents a small fraction of total users.

But the trend matters more than the absolute numbers. If ethically minded early adopters and power users — the people who drive organic growth and social media buzz — systematically shift to Claude, it could affect ChatGPT’s growth trajectory over time. These are the users who write the tutorials, share the prompts, and evangelize the tool to their networks.

OpenAI will likely try to distance its consumer products from its military work, drawing a clear line between “ChatGPT the helpful assistant” and “OpenAI the defense contractor.” Whether users buy that distinction remains to be seen.

The Bottom Line

The AI industry just learned an expensive lesson: you can’t serve both the Pentagon and privacy-conscious consumers without friction. OpenAI chose the military contract and watched its uninstalls spike 295%. Anthropic chose ethics and is watching its consumer downloads hit records.

In the short term, OpenAI has a lucrative government contract. In the long term, Anthropic may have something more valuable — the trust of the people who actually use AI every day. And in a market where switching costs are nearly zero, trust might be the only moat that matters.