OpenAI Rushes to Add Surveillance Protections to Pentagon Deal After Backlash

OpenAI and the Pentagon have agreed to strengthen surveillance protections in their recently announced AI contract, following widespread backlash that the original deal left the door open to domestic mass surveillance. According to Axios, the amended language hasn't been formally signed yet, but both sides have agreed to the changes.
How We Got Here
The saga began when the Department of Defense blacklisted Anthropic — OpenAI's chief rival — after the AI company refused to allow its Claude models to be used for "all lawful purposes" by the military. Anthropic drew a hard line against mass domestic surveillance and fully autonomous weapons.
Hours after Anthropic was sidelined, Sam Altman swooped in to announce OpenAI had struck a deal with the Pentagon. The speed raised immediate red flags. Critics pointed out that OpenAI's original contract language was vague enough to permit exactly the kind of mass surveillance Anthropic had refused to enable.
The "Protections" That Followed
After mounting pressure from civil liberties groups, members of Congress, and the tech community, Altman shared an internal memo stating the contract would be amended. The new language reportedly says:
- "The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals"
- The DOD affirmed that OpenAI's tools would not be used by intelligence agencies like the NSA
- OpenAI will build "technical safeguards" and deploy personnel to monitor model behavior
According to sources, Altman personally approached DOD's Emil Michael to rework the deal's terms.
What the Critics Say
Privacy advocates remain skeptical. The protections focus on "intentional" use for surveillance, a qualifier that leaves a significant gray area. There's no independent oversight mechanism described, and the "technical safeguards" remain unspecified. The contract still allows OpenAI models to operate on classified DOD networks for undisclosed purposes.
Meanwhile, Anthropic, despite being blacklisted, submitted a bid to compete in a $100M DOD contest to develop voice-controlled, autonomous drone swarming technology. Several federal agencies, including the Treasury Department, the State Department, and a federal housing agency, are now terminating all use of Anthropic products, with the State Department switching to OpenAI.
The Bigger Picture
This is the first major public battle over who controls frontier AI when it meets national security. As commentator Dean Ball noted, "institutions behaved erratically, maliciously, and without clarity" throughout the ordeal. The Anthropic-DOD standoff and OpenAI's opportunistic deal-making reveal how unprepared both the tech industry and the government are for the governance challenges that AI in defense presents.
The Bottom Line
OpenAI got the contract Anthropic wouldn't take — then had to scramble to add the protections Anthropic demanded from the start. The amended deal is better than the original, but "trust us, we added safeguards" isn't a governance framework. When AI companies are competing for Pentagon contracts worth potentially billions, the incentive to draw red lines grows weaker with every dollar on the table. The question isn't whether these protections exist on paper — it's whether they'll hold when classified programs push against them.