AI-Generated Propaganda Videos Are Flooding Social Media With Pro-Iranian Disinformation

A new study has found that most AI-generated videos circulating on social media about the Iran war push pro-Iranian views, often exaggerating the country's military capabilities and sophistication. According to a New York Times investigation, AI-generated propaganda has become a significant weapon in the information war surrounding the conflict, and social media platforms are struggling to keep up.
AI as a Propaganda Machine
The study documents a flood of AI-generated videos on platforms including TikTok, X (formerly Twitter), YouTube, and Instagram that portray Iran's military as more powerful and technologically advanced than it actually is. These fabricated videos show realistic-looking missile launches, drone swarms, and military operations that never happened, designed to boost Iranian morale and undermine US public support for the conflict.
Unlike traditional propaganda, which requires production studios and editing teams, AI-generated content can be produced at scale by anyone with access to consumer-grade video generation tools. A single operator can produce dozens of convincing military propaganda videos per day, making it nearly impossible for platforms to moderate effectively.
Why Pro-Iranian Content Dominates
The study found an asymmetry in AI-generated content: pro-Iranian videos significantly outnumber pro-US content. This may reflect Iran's greater incentive to use information warfare — when you're losing the kinetic war (Iranian drone attacks down 83%, ballistic missiles down 90%), winning the narrative becomes even more critical.
Iran has a well-documented history of sophisticated information operations. Adding AI video generation to that playbook was inevitable. The tools that make AI video creation accessible to everyone also make it accessible to state-sponsored propaganda operations.
Platforms Can't Keep Up
Social media companies have struggled with disinformation for years, but AI-generated video content represents a qualitative leap in difficulty. Traditional detection methods — checking metadata, reverse image searches, looking for editing artifacts — are increasingly ineffective against AI-generated content that leaves few traditional forensic traces.
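To see why reverse image search in particular breaks down, note that it typically rests on perceptual hashing: a compact fingerprint that survives re-encoding or compression, but that can only match content already present in an index. The toy sketch below (a simplified average-hash over raw pixel values, an illustrative assumption rather than any platform's actual algorithm) shows how a lightly re-encoded copy of a known frame still matches, while a wholly novel AI-generated frame produces no match at all:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is
    brighter than the frame's mean brightness. Production systems
    (e.g. DCT-based pHash) are more robust, but the idea is the same."""
    avg = sum(pixels) / len(pixels)
    return tuple(1 if p > avg else 0 for p in pixels)

def hamming(a, b):
    """Number of differing bits between two hashes; low means 'match'."""
    return sum(x != y for x, y in zip(a, b))

# A "known" frame already in the platform's index, and a lightly
# re-encoded copy (small uniform brightness shift):
original = [10, 200, 30, 180, 90, 160, 40, 220, 15]
reencoded = [p + 3 for p in original]
# A novel AI-generated frame shares no history with the index:
novel = [120, 20, 210, 60, 15, 190, 80, 35, 170]

h_orig = average_hash(original)
print(hamming(h_orig, average_hash(reencoded)))  # 0: re-upload detected
print(hamming(h_orig, average_hash(novel)))      # 6: nothing to match against
```

The re-encoded copy hashes identically because the uniform brightness shift moves every pixel and the mean together, so the bit pattern is unchanged. A freshly generated fake has no near-duplicate in any index, which is exactly the gap the article describes.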
The irony is sharp: the same AI technology that Silicon Valley companies are racing to commercialize is being weaponized to manipulate public opinion about a war that their AI tools are helping to fight. Project Maven uses AI to identify targets; AI deepfakes are used to shape public perception of those strikes.
The Bigger Picture
The Iran conflict is the first major war where AI-generated propaganda plays a significant role. It won't be the last. Every future conflict will feature AI-generated content designed to manipulate public opinion, and the tools to create it will only get better and more accessible.
The question isn't whether AI propaganda can be stopped — it can't, not entirely. The question is whether democratic societies can develop the media literacy and institutional responses to survive in a world where seeing is no longer believing.
The Bottom Line
AI-generated propaganda about the Iran war demonstrates that deepfakes are no longer a theoretical threat — they're an active weapon being deployed at scale in a real conflict. The study's finding that most AI content pushes pro-Iranian views should concern everyone who cares about information integrity. When AI can generate convincing military propaganda faster than platforms can remove it, the information war is already lost. The real question is what this means for the next conflict, and the one after that.