The Pro-Human AI Declaration: Left and Right Finally Agree on AI Safety

In what might be the most unlikely coalition in recent political history, over 40 organizations and prominent individuals have signed the Pro-Human AI Declaration — a framework that argues AI should serve humanity, not replace it. The signatories include former Trump adviser Steve Bannon, billionaire Richard Branson, conservative commentator Glenn Beck, consumer advocate Ralph Nader, former national security adviser Susan Rice, and Nobel Prize-winning economist Daron Acemoglu.
When Steve Bannon and Susan Rice agree on something, it’s either deeply obvious or deeply important. In this case, it might be both.
The Five Pillars
The declaration, convened by the Future of Life Institute and finalized after multiple in-person gatherings culminating in a ratification meeting in New Orleans in January 2026, is built on five pillars:
1. Human Control: Humans must retain the ability to understand, guide, restrict, and override AI behavior. This includes an “off switch” requirement for powerful systems and — most controversially — a prohibition on the superintelligence race until there is broad scientific consensus that it can be built safely, plus strong public buy-in.
2. Anti-Monopoly Power: The declaration pushes back against the concentration of AI capabilities in a handful of companies. It argues that no single entity should control systems powerful enough to reshape society.
3. Protection of Childhood and Relationships: Strict bans on AI designed to replace human relationships, exploit children’s cognitive vulnerabilities, or create artificial emotional dependencies. The framework explicitly rejects AI personhood.
4. Liberty and Privacy: Strong protections for human data rights, rejection of mass surveillance applications, and a guarantee that AI systems don’t erode individual freedoms in the name of efficiency or security.
5. Real Accountability: AI companies should face meaningful consequences for harm caused by their systems — not just voluntary commitments and self-regulation.
The Superintelligence Ban Is the Big One
Most of the declaration reads like reasonable, broadly agreeable principles — the kind of thing that’s hard to argue against in the abstract. But the superintelligence moratorium is where it gets concrete. The declaration calls for halting the race to build artificial superintelligence until safety can be credibly demonstrated. This puts it in direct conflict with the stated goals of OpenAI, Google DeepMind, and Anthropic, all of which are actively pursuing increasingly powerful AI systems.
Whether a bipartisan declaration can actually slow down a multi-hundred-billion-dollar technology race is a different question entirely. The AI companies signing safety pledges in Washington are the same ones racing to build more powerful systems in San Francisco. Words on paper don’t slow down GPU clusters.
What It Ignores: Open Source
As several commentators have noted, the declaration has a notable blind spot: open source AI. Meta’s Llama, Mistral, and dozens of other open-weight models have democratized access to powerful AI capabilities. Any framework that doesn’t address how open source models fit into safety and accountability discussions is incomplete. You can regulate OpenAI and Anthropic, but you can’t regulate a model that’s already been downloaded by millions of developers worldwide.
The Bottom Line
The Pro-Human AI Declaration is significant not for what it says — most of the principles are common sense — but for who is saying it. When the political left and right, tech billionaires and consumer advocates, and national security hawks and civil libertarians all agree that AI development needs guardrails, it suggests a genuine political consensus is forming. The question is whether that consensus translates into legislation, or whether it remains another well-intentioned document that the AI industry acknowledges, applauds, and quietly ignores while continuing to build.