Pentagon's Project Maven: How AI Warfare Went From Protest to 5,500 Targets in Iran

[Image: Pentagon Project Maven AI command center with targeting displays]

In 2017, the Pentagon launched Project Maven, formally the Algorithmic Warfare Cross-Functional Team, to bring AI into military operations. Led by Marine Colonel Drew Cukor, the initiative was born from a simple premise: the military was drowning in surveillance data and needed machine learning to process it. What started as a controversial pilot with Google has evolved into the most consequential AI deployment in military history, now actively powering Operation Epic Fury against Iran.

From Google's Exit to Palantir's Embrace

The original Project Maven partnership with Google sparked one of Silicon Valley's most dramatic ethical revolts. In 2018, thousands of Google employees signed a petition demanding the company exit the program, arguing that AI should not be weaponized. Google pulled out. But the Pentagon's appetite for AI-powered warfare only grew, and companies like Palantir, Amazon Web Services, and Microsoft were happy to fill the void.

Seven years later, Project Maven has evolved into the Maven Smart System, built primarily by Palantir Technologies and integrated with Anthropic's Claude AI via Amazon Web Services. The system processes classified intelligence on secure networks, generating targeting data at speeds that human analysts simply cannot match.

5,500+ Targets in Iran: AI Warfare at Scale

The numbers from Operation Epic Fury tell a staggering story. Over 5,500 targets have been hit inside Iran, with approximately 50,000 US service members deployed in and around the Middle East. In the first 24 hours alone, the volume of firepower more than doubled that of the initial US assault on Iraq in 2003, an expansion made possible by AI.

Pentagon AI Chief Cameron Stanley described the system's impact in stark terms: "We've gone from identifying the target to now coming up with a course of action, to now actioning that target, all from one system. This is revolutionary." He added a phrase that captures the Pentagon's philosophy perfectly: "No fair fights."

2,000 Analysts Replaced by 20

Perhaps the most telling metric is what the Maven Smart System has done to human requirements. Operations that previously required 2,000 intelligence officers now need just 20. The system consolidates eight or nine formerly separate systems into a single platform, orchestrating data collection, analysis, targeting, and action execution in one unified interface.
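To make that consolidation concrete, here is a purely illustrative sketch. The Maven Smart System's actual design is classified, and none of these class or function names come from it; the point is only to show what replacing eight or nine standalone systems with "one unified interface" means architecturally: each stage that was once its own system becomes a step in a single pipeline, with a human analyst approving the output at the end.

```python
from dataclasses import dataclass

# Hypothetical sketch only; no names here reflect the real system.

@dataclass
class Candidate:
    source: str        # which sensor or feed produced the detection
    coordinates: tuple  # (latitude, longitude) estimate
    confidence: float   # model confidence in the classification
    priority: float = 0.0

def collect(feeds):
    """Stage 1: pull detections from every intelligence feed."""
    return [c for feed in feeds for c in feed]

def analyze(candidates, threshold=0.8):
    """Stage 2: keep only detections the models are confident about."""
    return [c for c in candidates if c.confidence >= threshold]

def prioritize(candidates):
    """Stage 3: rank candidates; here, naively by confidence."""
    for c in candidates:
        c.priority = c.confidence
    return sorted(candidates, key=lambda c: c.priority, reverse=True)

def recommend(candidates, analyst_approves):
    """Stage 4: propose courses of action; a human decides."""
    return [c for c in candidates if analyst_approves(c)]

def unified_pipeline(feeds, analyst_approves):
    """One platform, one pass: stages that were once separate systems
    (stitched together by hundreds of analysts) become a single chain."""
    return recommend(prioritize(analyze(collect(feeds))), analyst_approves)
```

The staffing collapse the Pentagon describes falls out of the structure: when the hand-offs between systems disappear, so do most of the people whose job was performing them.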

The Maven Smart System suggested hundreds of targets during planning, found precise location coordinates, and prioritized those targets, all at machine speed. The result: Iranian drone attacks have decreased 83% since the operation began, and ballistic missile attacks have dropped 90%.

Anthropic's Uncomfortable Position

The involvement of Anthropic's Claude AI in the targeting system has created a particularly uncomfortable dynamic. The company, which markets itself as focused on AI safety, has seen its technology become central to military targeting operations. The situation escalated when the Trump administration declared Anthropic a "supply chain risk" after a contract dispute over usage terms. Anthropic subsequently sued the Defense Department, arguing it should be able to set limits on how its technology is used.

The core disagreement: Anthropic refused to allow its tech to enable mass domestic surveillance or fully autonomous weapons, while the government insisted it should be able to use the technology for all lawful purposes.

The Bottom Line

Project Maven represents the full arc of Silicon Valley's relationship with military AI. From Google's principled exit to Palantir's enthusiastic embrace, from safety-focused Anthropic to autonomous targeting systems, the story tracks how ethical objections gave way to operational necessity — and enormous defense contracts. The book "Project Maven" by Katrina Manson, arriving March 24, promises to reveal even more about how the Pentagon got Silicon Valley hooked on AI warfare. Whether this represents progress or a line that should never have been crossed depends entirely on which side of the targeting algorithm you're standing on.