Anthropic Signs Multi-Gigawatt Compute Deal with Google and Broadcom as Revenue Triples to $30B+

Anthropic has signed what the company calls its most significant compute commitment to date: a multi-gigawatt agreement with Google and Broadcom for next-generation TPU capacity expected to come online starting in 2027. The announcement, made on April 6, comes as Anthropic's revenue run rate has tripled from approximately $9 billion at the end of 2025 to over $30 billion in early 2026. In the same stretch, the number of enterprise customers spending $1 million or more annually has doubled to over 1,000 in under two months. The deal deepens Anthropic's ties to major financial and enterprise customers who need guaranteed model availability at scale.
What the Google and Broadcom Compute Deal Covers
The agreement secures multiple gigawatts of next-generation TPU capacity for Anthropic, with Broadcom acting as the custom chip design partner for the TPU silicon and Google providing the cloud infrastructure to run it. The vast majority of the new compute capacity will be sited in the United States — an important detail given the current regulatory and national security environment around AI infrastructure.
In Anthropic's announcement, CFO Krishna Rao described the agreement as "a continuation of our disciplined approach to scaling infrastructure" driven by "unprecedented growth." The deal extends Anthropic's November 2025 commitment to invest $50 billion in American AI infrastructure. Claude currently runs across AWS Trainium chips, Google TPUs, and NVIDIA GPUs, and is available on Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure AI Foundry, making Anthropic one of the few AI labs with a genuine multi-cloud, multi-chip infrastructure strategy.
Revenue Growth That Justifies the Scale
The scale of the compute commitment is underwritten by extraordinary revenue growth. Anthropic's run rate tripling from $9 billion to $30 billion in roughly six months — while also doubling its count of $1 million-plus enterprise customers to over 1,000 — provides concrete justification for locking in gigawatts of capacity more than a year in advance.
This trajectory reflects broader enterprise AI adoption trends. Recent data showing 30.6% of US businesses now paying for Anthropic AI tools underscores how Claude has moved from research darling to enterprise infrastructure in less than two years. The 1,000-plus customers spending $1 million or more annually represent stable, committed revenue that makes infrastructure investment predictable rather than speculative.
Multi-Cloud Strategy as Competitive Moat
Anthropic's decision to maintain infrastructure relationships across Google, AWS, and Microsoft Azure — while securing its own custom TPU capacity through this new deal — is a calculated hedge against vendor dependency. For enterprise customers, this means Claude's availability is not tied to the uptime or pricing decisions of any single cloud provider. For Anthropic, it means negotiating leverage across all three platforms and the ability to shift workloads as economics dictate.
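The "shift workloads as economics dictate" idea can be made concrete with a minimal routing sketch. Everything here is illustrative: the provider names mirror the clouds mentioned above, but the availability flags and per-million-token prices are invented for the example, not real figures.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    available: bool
    price_per_mtok: float  # illustrative $/million tokens, not real pricing


def choose_provider(providers: list[Provider]) -> Provider:
    """Route to the cheapest provider that is currently available."""
    candidates = [p for p in providers if p.available]
    if not candidates:
        raise RuntimeError("no provider available")
    return min(candidates, key=lambda p: p.price_per_mtok)


# Hypothetical fleet state: one provider down, the rest priced differently.
fleet = [
    Provider("Amazon Bedrock", available=True, price_per_mtok=3.10),
    Provider("Google Cloud Vertex AI", available=True, price_per_mtok=3.00),
    Provider("Microsoft Azure AI Foundry", available=False, price_per_mtok=3.20),
]

print(choose_provider(fleet).name)  # → Google Cloud Vertex AI
```

The point of the sketch is the shape of the decision, not the numbers: with three interchangeable backends, an outage or a price change on any one platform simply reorders the candidate list rather than taking the service down.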
The Broadcom partnership for custom chip design mirrors the approach taken by Google (TPUs), Amazon (Trainium/Graviton), and Microsoft (Maia) — all of which have invested heavily in custom silicon to reduce dependence on NVIDIA and lower per-token inference costs. Anthropic is now the first major AI lab without a hyperscaler parent to secure custom silicon capacity at gigawatt scale.
Frequently Asked Questions
What is Anthropic's compute deal with Google and Broadcom?
Anthropic signed an agreement with Google and Broadcom for multiple gigawatts of next-generation TPU compute capacity, expected to come online in 2027. Broadcom designs the custom TPU silicon while Google provides cloud infrastructure. Anthropic's CFO called it the company's most significant compute commitment to date.
What is Anthropic's current revenue run rate?
Anthropic's revenue run rate exceeded $30 billion as of early 2026, up from approximately $9 billion at the end of 2025 — roughly tripling in under six months. The company also surpassed 1,000 enterprise customers spending $1 million or more annually, doubling that figure in under two months.
Why is Anthropic building its own compute infrastructure?
Anthropic is securing dedicated compute to meet unprecedented demand growth and avoid supply constraints that could limit Claude's availability. Custom TPU capacity also reduces per-token inference costs compared to renting commodity GPU compute, and a multi-cloud strategy gives Anthropic leverage and resilience across AWS, Google Cloud, and Microsoft Azure.
The Bottom Line
Anthropic's multi-gigawatt compute deal is the infrastructure story behind its revenue story. Tripling revenue to a $30 billion run rate while doubling enterprise customers creates both the need and the justification for securing gigawatts of dedicated capacity 12-plus months out. The company is no longer just a model lab; it is becoming an AI infrastructure player, and this deal is the clearest signal yet of how seriously it is building for that future.