Wikipedia Bans AI-Generated Articles on English Site With Just Two Exceptions

English Wikipedia has officially banned the use of AI-generated text for writing or rewriting articles, marking one of the most significant policy decisions in the encyclopedia's 25-year history. The ban passed with near-unanimous support — 44 votes in favor and just two opposed.
What the Ban Covers
The new policy prohibits editors from using large language models (LLMs) like ChatGPT, Claude, or Gemini to generate or rewrite article content on English Wikipedia. The rule is clear: AI cannot be the author of encyclopedic content.
However, the policy includes two narrow exceptions:
- Basic copyediting: Editors can run their own writing through an LLM for grammar and style suggestions, but must carefully verify the output doesn't change meaning or introduce unsourced claims
- First-pass translation: LLMs can produce an initial translation of articles from other language editions, but editors must verify accuracy against cited sources
In both cases, the human editor remains fully responsible for the final content. The AI is a tool, not an author.
Why Wikipedia Drew the Line
The ban addresses what Wikipedia's community sees as an existential threat to the encyclopedia's reliability. The core problems with AI-generated content on Wikipedia include:
- Hallucination contamination: LLMs frequently generate plausible-sounding but factually incorrect information. On Wikipedia, this is particularly dangerous because the encyclopedia is a primary training source for those same AI models — creating a feedback loop of inaccuracy
- Source fabrication: AI models often invent citations that don't exist, undermining Wikipedia's fundamental requirement for verifiable sourcing
- Violation of core content policies: Wikipedia requires a neutral point of view, verifiability, and no original research — requirements that AI-generated text routinely violates
- Detection challenges: Current AI detection tools are unreliable, making enforcement through technical means alone impossible
The Compounding Risk Problem
Wikipedia's editors identified a particularly insidious cycle: AI-generated misinformation enters Wikipedia, gets scraped by AI companies for training data, and re-enters future model outputs. Each cycle amplifies the original error, making it harder to trace and correct. Breaking this cycle was a key motivation for the ban.
English Wikipedia Only — For Now
The ban currently applies only to the English-language edition. Each of Wikipedia's 300+ language editions operates independently with its own policies. Spanish Wikipedia has already implemented a similar but stricter ban, prohibiting the use of LLMs to create or expand articles and allowing neither the copyediting nor the translation exception that the English edition permits.
Other language editions are watching closely, and similar proposals are expected to emerge across the platform in the coming months.
What This Means for the AI Industry
Wikipedia's decision sends a clear signal to the broader internet: the world's largest knowledge repository doesn't trust AI to write accurately. For AI companies that rely on Wikipedia as a primary training data source, this creates a pointed tension — the very platform that feeds their models has concluded those models aren't reliable enough to contribute back.
Bottom Line
Wikipedia's AI ban isn't about being anti-technology — it's about protecting the integrity of human knowledge. When the world's most-referenced encyclopedia concludes that AI-generated text violates its "core content policies," that's a verdict worth paying attention to. The two narrow exceptions (copyediting and translation) show Wikipedia isn't opposed to AI as a tool — just as an author. That distinction may become the template for how other knowledge institutions handle AI content going forward.
Frequently Asked Questions
Can Wikipedia editors use AI at all?
Yes, but only in two limited ways: basic copyediting of their own writing and first-pass translation from other language editions. In both cases, the editor must verify all output and remains responsible for accuracy.
Does the ban apply to all Wikipedia languages?
No, currently only English Wikipedia. Each language edition sets its own policies. Spanish Wikipedia has a similar but stricter ban. Other editions are expected to follow.
How will Wikipedia enforce the ban?
Through community-based editorial review. Editors found submitting AI-generated content face warnings and potential sanctions. Technical detection tools supplement, but don't replace, human review.
Why does Wikipedia care about AI content?
Wikipedia's core principles require verifiable, neutral, and accurately sourced content. AI models frequently hallucinate facts, fabricate sources, and produce text that violates these principles. Additionally, AI-contaminated Wikipedia articles could poison the training data for future AI models.