How OpenAI Caved to the Pentagon on AI Surveillance

OpenAI CEO Sam Altman announced on Friday that his company had successfully negotiated a contract with the Pentagon — now rebranded as the Department of War under the Trump Administration. Altman claimed the deal includes the same red lines Anthropic had demanded: no mass surveillance of Americans and no lethal autonomous weapons.
But according to sources who spoke to The Verge, the reality is far less reassuring. OpenAI’s deal boils down to three words that change everything: “any lawful use.”
What the Contract Actually Says
The key distinction between Anthropic’s failed negotiation and OpenAI’s successful one is simple: Anthropic wanted legally binding guarantees that its AI would never be used for domestic mass surveillance. The Pentagon refused. OpenAI, on the other hand, agreed to follow existing US laws — the same laws that have historically been stretched to justify exactly the kind of surveillance Altman claims to prohibit.
As one source familiar with the negotiations told The Verge: “If you look line-by-line at the OpenAI terms, every aspect of it boils down to: If it’s technically legal, then the US military can use OpenAI’s technology to carry it out.”
The Mass Surveillance Loophole
OpenAI’s contract includes a clause prohibiting “unconstrained monitoring of US persons’ private information.” But legal experts and national security analysts have flagged several problems:
- “Unconstrained” is dangerously vague — monitoring subject to even minimal constraints would arguably fall outside the clause’s scope
- “Private information” is undefined — the DIA already argues it can collect commercially available location and browsing data without a warrant
- “As consistent with these authorities” ties the prohibition to existing legal frameworks — the very ones the government uses to justify mass data collection
When pressed, OpenAI executives couldn’t point to a specific contract clause that prevents bulk data collection.
A History of Stretching “Lawful”
The US government has a long track record of redefining what counts as “lawful” surveillance:
- In 2021, the Defense Intelligence Agency told Congress it purchases bulk smartphone location data without warrants
- In 2024, the NSA confirmed it buys Americans’ browsing data
- Senator Ron Wyden revealed in 2020 that data brokers were selling phone data collected from US citizens to military customers
OpenAI’s head of national security partnerships, Katrina Mulligan, initially claimed the Pentagon has “no legal authority” for domestic surveillance — a claim that is factually incorrect. Notably, Mulligan previously managed the White House response to the Snowden disclosures, and one of OpenAI’s board members served as NSA Director from 2018 to 2024.
Can OpenAI’s Safety Stack Override the Contract?
OpenAI argues it has “full discretion” over its safety stack. But procurement law expert Jessica Tillipman identified a fundamental tension: if the safety stack blocks a lawful use, which provision controls?
More practically, researchers at the UK AI Security Institute found universal jailbreaks for all the AI systems they tested last year, making technical guardrails an unreliable check on the US military.
The Bottom Line
While Anthropic stood firm on requiring legally binding protections against mass surveillance — and was blacklisted by the Pentagon as a result — OpenAI took a different path. It agreed to vague language tied to legal frameworks that have repeatedly been used to justify the very surveillance practices Altman claims to prohibit.
As former OpenAI geopolitics team lead Sarah Shoker told The Verge: “There are a lot of modifying words — ‘unconstrained,’ ‘generalized,’ ‘open-ended’ — that’s not a complete prohibition.”
The question isn’t whether OpenAI’s red lines exist on paper. It’s whether they mean anything in practice.