Sam Altman Admits OpenAI's Pentagon Deal Was 'Opportunistic and Sloppy'

Sam Altman just did something rare in Silicon Valley: he admitted a mistake. The OpenAI CEO said Monday that the company "shouldn't have rushed" its Pentagon deal, calling the optics "opportunistic and sloppy." The admission comes after massive public backlash and a reported 295% surge in ChatGPT uninstalls.
But here's what matters more than the apology: what actually changed in the contract, why the original deal was so controversial, and whether any of it protects anyone.
What Happened
Last Friday, just hours after the Trump administration banned rival AI company Anthropic from government use (and hours before US strikes on Iran), OpenAI announced a new contract with the Department of Defense. The timing was, to put it generously, terrible.
The backlash was immediate. Users fled ChatGPT for Claude on app stores. Activists chalked "What are your red lines?" outside OpenAI's offices. Employees reportedly raised concerns internally.
The Contract Changes
Altman shared an internal memo outlining specific amendments:
- "The AI system shall not be intentionally used for domestic surveillance of US persons and nationals"
- The DOD affirmed that OpenAI's tools would not be used by intelligence agencies such as the NSA
- Any services to intelligence agencies would require a further contract modification
- The prohibition covers "deliberate tracking, surveillance, or monitoring," including through commercially acquired personal data
The Uncomfortable Context
This deal exists because Anthropic, the AI safety company founded by former OpenAI researchers, refused to proceed without similar guarantees. When Anthropic asked for contractual protections against domestic surveillance and against autonomous weapons operating without human control, negotiations broke down. Defense Secretary Pete Hegseth then designated Anthropic a "supply-chain threat."
OpenAI swooped in within hours. The company now says it shares the same "red lines" as Anthropic, and that the DOD agreed to them. That raises an obvious question: if the DOD was willing to accept these terms all along, why was Anthropic punished for asking?
What's Still Missing
The word "intentionally" in the surveillance clause is doing a lot of heavy lifting. The prohibition doesn't cover incidental data collection, algorithmic bias in targeting decisions, or the downstream use of AI-generated intelligence products. The contract also doesn't address:
- Autonomous weapons development or deployment
- Use in offensive cyber operations
- Application in foreign surveillance that could sweep up US persons incidentally
- What happens when a future administration reinterprets "intentionally"
The Bottom Line
Altman deserves some credit for the rare public admission. But the fundamental problem remains: OpenAI signed a defense contract at the exact moment its safety-focused competitor was being punished for asking too many questions. No contract amendment changes the message that episode sent to every AI company watching.
The real test isn't what the contract says. It's what happens when the Pentagon asks for something that falls into the gray area between "intentional surveillance" and "AI-assisted intelligence analysis." That's where the word "intentionally" starts to look less like a protection and more like a loophole.