Sam Altman Warns of World-Shaking Cyberattack and Calls for AI Superintelligence New Deal

Sam Altman does not mince words when the topic is what comes next. In a wide-ranging interview with Axios published in early April 2026, the OpenAI CEO laid out a vision of AI so disruptive that he called for a societal response on the scale of the New Deal — while simultaneously warning that a world-shaking cyberattack could hit before the end of this year.

The Cyberattack Warning

Altman said the two threats he fears most are cyberattacks and biological attacks enabled by near-future AI capabilities. When asked whether a "world-shaking" cyberattack this year was possible, he replied plainly: "I think that's totally possible." He added, "I suspect in the next year, we will see significant threats we have to mitigate from cyber."

The warning is not abstract. Soon-to-be-released AI models could enable attacks on critical infrastructure — power grids, financial systems, communications networks — at a scale and speed that current defenses are not built to handle. It is a scenario that top tech, business, and government officials have been quietly modeling for months.

AI Is Already Replacing Software Teams

Altman said current AI models are already performing coding and research tasks that previously required teams of programmers. "A programmer in 2026 already works differently to one a year earlier," he said. His projection: within the next few years, a single developer equipped with AI tools will be able to do the work of an entire software organization.

This is not hypothetical — it reflects what is already happening at early-adopter companies. The open question is whether the rest of the workforce, and the economy, can adapt fast enough.

The Case for a Superintelligence New Deal

Altman's broader argument is that AI superintelligence is so close and so disruptive that it requires a new social contract — comparable in scale to the Progressive Era reforms of the early 1900s or Roosevelt's New Deal during the Great Depression. The risks of moving too slowly are, in his words, "grave": widespread job displacement, cyberattacks, social upheaval, and machines that humans can no longer control.

He is pushing for global oversight structures, arguing that the U.S. in particular needs to lead in building governance frameworks before the technology outpaces any institution's ability to regulate it. OpenAI has floated specific policy proposals, including AI access frameworks that treat the technology more like a public utility — like electricity — than a private product.

The Bottom Line

Altman is simultaneously the person most likely to build superintelligence and the person loudest about how dangerous that is. Whether you find that credible or self-serving, his specific predictions — cyberattack risk this year, single developers replacing teams, need for a New Deal-scale policy response — are worth taking seriously as signals of where the AI industry's most informed insider sees the next 18 months heading.