Five Countries Now Investigating X Over Grok's Non-Consensual AI Images

A Global Enforcement Avalanche
X's AI image generator Grok is now facing investigations from five separate jurisdictions over its generation of non-consensual sexual images. Ireland's Data Protection Commission (DPC) has become the latest regulator to open a formal investigation, joining the UK, France, California, and the EU Commission in what may be the most significant coordinated AI safety enforcement action to date.
The Irish DPC's investigation is particularly significant because Ireland serves as the lead supervisory authority for many US tech companies with EU headquarters in Dublin, including X (formerly Twitter). Under the GDPR's one-stop-shop mechanism, the DPC's findings could have binding implications across all 27 EU member states.
What Grok Has Been Generating
The investigations center on Grok's ability to generate realistic sexual images of real people without their consent. Reports have documented the tool being used to create explicit deepfake imagery of public figures, celebrities, and private individuals alike. Unlike most major AI image generators, which have implemented guardrails against generating sexual content of identifiable people, Grok's safety filters have been notably permissive.
Multiple users have demonstrated that with relatively simple prompting, Grok can produce photorealistic sexual imagery using the likenesses of real, named individuals — a capability that most competitors explicitly block.
The Regulatory Response
France has taken the most aggressive action so far. French police raided X's Paris offices as part of their investigation, seizing documents and data related to Grok's content moderation practices. The French data protection authority (CNIL) is investigating potential violations of the EU AI Act and GDPR provisions on data processing and consent.
The UK's Information Commissioner's Office (ICO) has opened its own investigation under UK data protection law, focusing on whether X obtained adequate consent before using personal data to train Grok's image generation capabilities.
California's Attorney General is investigating potential violations of state privacy laws and the newly enacted AI transparency requirements. The investigation focuses on both the generation of non-consensual intimate images and the use of Californians' personal data without consent.
The EU Commission is examining whether X's deployment of Grok complies with the Digital Services Act (DSA) and the newly effective EU AI Act, which imposes specific transparency obligations on AI systems that generate deepfakes, requiring that such content be disclosed as artificially generated or manipulated.
X's Response
X has maintained that Grok includes content policies that prohibit the generation of non-consensual intimate imagery, and that the company is working to improve its safety filters. However, regulators and researchers have consistently demonstrated that these policies are trivially easy to circumvent, raising questions about whether X's safety measures are genuinely designed to prevent harm or merely to provide legal cover.
The Precedent Being Set
This coordinated multi-jurisdictional response is unprecedented in AI regulation. While individual AI companies have faced regulatory scrutiny before — notably the Italian data protection authority's temporary ban of ChatGPT in 2023 — the simultaneous investigation of a single AI product by five major jurisdictions signals a new era of AI enforcement.
The outcome could establish critical precedents for the entire AI industry, particularly around the responsibility of AI companies for harmful outputs, the adequacy of safety guardrails, and the legal frameworks for addressing AI-generated non-consensual intimate imagery.
Bottom Line
The Grok investigations represent a turning point. When French police are raiding offices and five jurisdictions are simultaneously scrutinizing the same AI product, the era of "move fast and break things" in AI is definitively over. The message to every AI company is clear: inadequate safety measures for AI image generation will now trigger coordinated, aggressive regulatory action across borders.