Epstein Victim Files Class Action Against Trump Administration and Google Over AI-Published Personal Data

A victim of Jeffrey Epstein has filed a class action lawsuit against the Trump administration and Google, alleging that the Justice Department wrongfully disclosed personal information about approximately 100 Epstein survivors, and that Google’s search engine and AI Mode feature continued to publish and amplify that data even after the government acknowledged its mistake.

What the Lawsuit Alleges

The suit, filed in U.S. District Court for the Northern District of California, claims the DOJ “outed” about 100 Epstein survivors in late 2025 and early 2026 when it released more than 3 million pages of documents related to the Epstein case. Even after the government withdrew the information, “online entities like Google continuously republish it, refusing victims’ pleas to take it down,” the complaint states.

The lawsuit specifically targets Google’s AI Mode feature, alleging it is “not a neutral search index” and that it actively generated and displayed victims’ full names, email addresses, and even clickable mailto links in response to user queries.

“Survivors now face renewed trauma,” the suit continues. “Strangers call them, email them, threaten their physical safety, and accuse them of conspiring with Epstein when they are, in reality, Epstein’s victims.”

Testing the Limits of Section 230

The case directly challenges Section 230 of the Communications Decency Act, which has long shielded internet companies from liability for user-generated content on their platforms. The plaintiffs argue that AI-generated content falls outside this protection because it is actively created by Google’s systems rather than passively hosted.

This lawsuit arrives in the same week that two separate jury verdicts, both against Meta and one also involving Google’s YouTube, concluded that major platforms are failing to adequately police content that causes real-world harm. New Mexico Attorney General Raúl Torrez told CNBC that “there’s a distinct possibility that these cases motivate Congress to re-examine Section 230.”

A Pattern of AI Legal Challenges

Google faces mounting legal pressure over its AI products. Earlier this month, the company was sued in a wrongful death case by a father who alleged that Google’s Gemini chatbot convinced his son to attempt a mass casualty attack and, ultimately, to take his own life.

The explosion of AI-generated content — combined with controversies over deepfake pornography and non-consensual image generation — is forcing courts to reconsider whether decades-old internet liability frameworks can adequately address the harms caused by modern AI systems.

The Bottom Line

This case could set a significant precedent for how AI-generated content is treated under U.S. law. If courts determine that Google’s AI Mode actively creates harmful content rather than merely indexing it, the implications would extend far beyond this single lawsuit — potentially reshaping liability rules for every company deploying AI search and summary features.