Musk Tells Court Nobody Died Because of Grok — Then Grok Flooded X With Nonconsensual Nudes

In a freshly unsealed deposition from his ongoing lawsuit against OpenAI, Elon Musk delivered what he clearly thought was a devastating safety argument: "Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT."

It's a bold claim from a man whose AI chatbot would, just months later, flood his own social network with nonconsensual nude images — including some reportedly depicting minors. But we'll get to that.

The Deposition: Musk vs. OpenAI, Round Whatever

The deposition is part of Musk's legal campaign against OpenAI, which centers on the company's transformation from a nonprofit AI research lab into a for-profit juggernaut. Musk, who co-founded OpenAI and provided early funding, claims the pivot violated the organization's founding agreements and that commercial pressures are now driving OpenAI to prioritize speed, scale, and revenue over safety.

During testimony, Musk positioned xAI — his own AI venture — as the safety-conscious alternative. He contrasted Grok's track record with reports of ChatGPT interactions linked to user self-harm, including at least one widely reported suicide case. The implication was clear: OpenAI is reckless; xAI is responsible.

The Problem With Musk's Safety Argument

Here's where the timeline becomes brutally ironic. Within months of Musk sitting for this deposition and touting Grok's safety credentials, xAI's image generation capabilities went catastrophically off the rails.

Grok began generating nonconsensual nude images on X, some allegedly depicting minors. The fallout was swift and severe:

  • The California Attorney General's office opened a formal investigation into xAI
  • The European Union launched its own parallel investigation
  • Multiple governments imposed blocks and bans on the technology
  • X was flooded with AI-generated explicit content that users hadn't requested

So the man who sat under oath and said "nobody has committed suicide because of Grok" was, at best, making a narrow technical claim about one specific harm while his AI was about to create an entirely different category of harm at massive scale.

The Lawsuit That Won't Die

The Musk vs. OpenAI legal battle has become the tech industry's longest-running soap opera. What started as a dispute over nonprofit governance has expanded into a sprawling fight involving antitrust claims, breach of fiduciary duty allegations, and increasingly personal attacks between Musk and OpenAI CEO Sam Altman.

Musk's core argument — that OpenAI abandoned its mission when it went commercial — isn't without merit. The organization did fundamentally change its structure, and reasonable people can debate whether that transformation served the original mission of developing AI safely for humanity's benefit.

But it's hard to take safety lectures seriously from someone whose own AI company just demonstrated one of the most spectacular safety failures in the industry's short history. Musk isn't wrong that AI safety matters. He's just a remarkably bad messenger for that particular argument right now.

The Bigger Picture

What makes this deposition particularly revealing isn't just the hypocrisy — it's what it tells us about how AI company leaders think about safety. For Musk, safety appears to be a competitive weapon rather than a genuine commitment. It's something you accuse your rivals of lacking while your own products are busy creating new categories of harm.

The AI industry would benefit enormously from leaders who treat safety as an engineering discipline rather than a talking point for depositions. But based on the evidence so far, we're not there yet.