California Signs First-of-Its-Kind AI Executive Order Requiring Safety Guardrails from Contractors

California just drew a line in the sand on AI regulation. Governor Gavin Newsom signed a first-of-its-kind executive order requiring safety and privacy guardrails from any AI company that contracts with the state — and it’s as much about tech policy as it is about politics.
What the Order Requires
Contractor vetting: Companies bidding for state contracts must explain their AI safety and privacy policies. This includes how they prevent the exploitation of individuals and the spread of child sexual abuse material, and whether their models monitor individuals or block certain speech. Anti-bias measures are also required.
Independence from federal blacklists: If the Pentagon designates a company a supply chain risk — as it recently did with Anthropic — California will conduct its own independent assessment. If the state's review finds no risk, the company may remain a contractor.
Watermarking requirement: State officials must begin watermarking AI-generated or manipulated videos to guard against misinformation.
The Political Context
Newsom signed the order partly as a message to President Trump, who has been actively trying to prevent states from regulating AI. California was already the first state to pass a law mandating safety and transparency from the biggest AI companies. This executive order goes further by setting contractor-specific requirements.
The Anthropic angle is particularly significant. The Pentagon terminated its contract with Anthropic after the company refused to allow its models to be used for mass domestic surveillance and autonomous weaponry. California’s independent assessment means companies blacklisted by the federal government for ethical reasons could still work with the state.
The Bottom Line
California is doing what California does: leading on tech regulation while the federal government drags its feet. Whether you see this as responsible governance or regulatory overreach depends on your politics. But the practical effect is clear: AI companies wanting California’s business will need to prove their systems are safe, private, and unbiased — regardless of what Washington thinks.