X Suspends Creators Who Post AI War Videos Without Disclosure


X Introduces Its First AI Disclosure Rule for Creators

X (formerly Twitter) has announced a significant policy change targeting creators in its revenue sharing program. Effective immediately, any creator who posts an AI-generated video depicting armed conflict without disclosing that it was made with AI will be suspended from the revenue sharing program for 90 days. Repeat offenders face permanent removal.

Head of product Nikita Bier announced the change on March 3, framing it as a necessary measure "during times of war." This marks the platform's first rule requiring any kind of AI content disclosure from its users.

What the New Policy Actually Covers

The policy is notably narrow in scope. It applies only to creators enrolled in X's revenue sharing program and only to AI-generated videos of armed conflicts. Regular users who are not monetizing their content are not affected. AI-generated images, text, or videos on topics other than armed conflict are also not covered.

Violations will be detected through two mechanisms: Community Notes, X's crowd-sourced fact-checking system, and automated detection of metadata from generative AI tools embedded in video files.

Why Only Armed Conflict Videos?

The quality of AI video generation has progressed rapidly in recent months. Generated footage has become almost indistinguishable from real video for most viewers, making AI-generated war footage particularly dangerous for spreading misinformation.

The timing coincides with the ongoing conflict between the United States, Israel, and Iran, though no formal declaration of war has been made. The potential for AI-generated combat footage to inflame tensions or spread false narratives about the conflict appears to be the driving concern behind this policy.

What X Already Does About AI Content

X already watermarks images and videos generated by its own Grok chatbot, but has not previously required users to disclose AI-generated content from other tools. The platform is separately testing a broader AI labeling toggle that would let users voluntarily mark any post as containing synthetic content, though no timeline has been shared for that feature.

The Enforcement Problem

Relying on Community Notes for enforcement raises questions about effectiveness. Community Notes operates on a consensus-based system that can take hours or days to surface corrections. By the time a misleading AI-generated war video gets flagged, it may have already been viewed millions of times.

The metadata detection approach is also limited. Many AI video generation tools allow users to strip metadata before uploading, and screen recordings of AI-generated content would not carry any AI metadata at all.
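To illustrate why this approach is so easy to defeat, here is a minimal sketch of how provenance-metadata detection might work. This is not X's actual pipeline; the marker list and function name are illustrative assumptions. It scans a file's raw bytes for labels that generative-AI tools commonly embed, such as a C2PA content-credentials manifest or the IPTC `trainedAlgorithmicMedia` digital-source-type value:

```python
from pathlib import Path

# Byte patterns associated with AI provenance metadata.
# Illustrative and far from exhaustive.
AI_MARKERS = [
    b"c2pa",                    # C2PA content-credentials manifest label
    b"trainedAlgorithmicMedia", # IPTC digitalSourceType value for generative AI output
]

def has_ai_metadata(path: str) -> bool:
    """Return True if any known AI-provenance marker appears in the file's bytes."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in AI_MARKERS)
```

Anything that rewrites the file's bytes, such as re-encoding, stripping metadata before upload, or simply screen-recording the output, removes these markers entirely, which is exactly the limitation described above.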

The Bottom Line

X's new policy is a step in the right direction, but its narrow scope raises more questions than it answers. Why only armed conflict? Why only paid creators? If AI-generated war footage is dangerous enough to warrant revenue suspension, should it not also be restricted for non-monetized accounts spreading the same content? The policy feels like a calculated minimum rather than a comprehensive approach to AI content moderation.