The New Yorker Asked If Sam Altman Can Be Trusted With AI. The Answer Is Complicated.

The New Yorker has published what may be the most significant profile of Sam Altman to date. Written by Ronan Farrow and Andrew Marantz, "Sam Altman May Control Our Future — Can He Be Trusted?" draws on hundreds of pages of confidential internal documentation, interviews with former board members, and private memos that were never meant to become public.
The central question the piece asks is not whether OpenAI's technology is powerful — everyone agrees it is — but whether the man leading the company can be trusted to wield it responsibly.
What the Documents Show
The most striking material comes from two former insiders who left OpenAI under difficult circumstances. The first is Ilya Sutskever, the former chief scientist who briefly joined the board's unsuccessful attempt to fire Altman in November 2023. Sutskever compiled a secret memo documenting his concerns about Altman's leadership. The memo listed "Lying" as the first and primary pattern of concern.
The second is Dario Amodei — who left OpenAI to found Anthropic and is now one of Altman's chief competitors. The New Yorker obtained more than 200 pages of private notes Amodei kept during his time at the company. They include a blunt assessment: "The problem with OpenAI is Sam himself."
The Board Member Quote
Perhaps the most damning passage in the piece comes from an unnamed OpenAI board member, who described Altman as possessing "a strong desire to please people" combined with "almost a sociopathic lack of concern for the consequences that may come from deceiving someone."
"He's unconstrained by truth," the source told Farrow and Marantz.
The Pattern the Piece Describes
The New Yorker doesn't portray Altman as a simple villain. The portrait is more nuanced — and in some ways more troubling. It describes a leader who is genuinely charismatic, who has built one of the most valuable companies in history, and who sincerely believes in AI's potential to benefit humanity.
But the piece argues that Altman has repeatedly subordinated his stated safety commitments to competitive and commercial pressures. He's told safety researchers one thing, investors another, and the public something else again — all while making decisions that contradicted those assurances.
This pattern, the New Yorker argues, is not a series of isolated missteps but a consistent mode of operation — one that becomes considerably more alarming when the product being built could soon exceed human-level intelligence.
OpenAI's Response
OpenAI disputed the characterizations in the piece, with Altman calling several of the allegations "false" in a statement. The company argued that the individuals quoted had axes to grind — Sutskever after the failed board coup, Amodei after leaving to start a direct competitor.
Critics of those defenses noted that the documentation quoted in the piece predates both events, suggesting the concerns were not retrospective grievances but contemporaneous observations made by people who still believed in the company's mission at the time.
Why This Matters
The New Yorker profile lands at a moment when OpenAI's influence over global AI policy, research norms, and public understanding of artificial intelligence is at its peak. The question of whether Altman is a trustworthy steward of that influence is not an academic one — it has direct implications for how governments regulate AI, how competitors behave, and how the public evaluates the promises AI companies make.
Whether you believe the portrait is accurate or a hit job assembled from disgruntled ex-employees, the fact that it was published in The New Yorker — with Ronan Farrow's byline — ensures it will redefine how Sam Altman is discussed for years to come.