AWS AI Business Hits $15 Billion Revenue Run Rate as Amazon Chip Division Tops $20 Billion

[Image: AWS data center, Trainium chip closeup, and revenue chart overlay]

Amazon Web Services' AI business has reached a $15 billion annual revenue run rate, while the company's internal custom chip division — producing Trainium and Inferentia processors — generates more than $20 billion annually, GeekWire reported, citing new data from Amazon. The figures establish AWS as one of the largest AI infrastructure businesses in the world by revenue, competing directly with Microsoft Azure and Google Cloud in the race to supply compute for the AI buildout. They also come as AWS continues to expand its AI infrastructure services and position Amazon's cloud business as a full-stack AI provider, from silicon to the application layer.

AWS AI Revenue: What the $15 Billion Run Rate Includes

AWS's $15 billion AI run rate encompasses revenue from several product lines: Amazon Bedrock (the managed foundation model service offering access to Claude, Llama, Titan, and other models), SageMaker (the ML platform for model training and deployment), Amazon Q (the enterprise AI assistant), and AI-enhanced infrastructure services. This figure represents AI-attributed revenue — workloads and services that customers are purchasing specifically for AI purposes — rather than the total AWS cloud revenue, which is significantly larger.
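For readers unfamiliar with the metric: a revenue run rate simply annualizes the most recent reporting period. The sketch below illustrates the arithmetic; the quarterly figure shown is hypothetical, chosen only so the math lands on the reported $15 billion, and is not a number Amazon has disclosed.

```python
# Annual revenue run rate: extrapolate the latest period's revenue to a full year.
# The dollar amount below is hypothetical, for illustration only.

def run_rate(period_revenue: float, periods_per_year: int) -> float:
    """Annualize revenue from a single reporting period."""
    return period_revenue * periods_per_year

# A hypothetical $3.75B quarter annualizes to a $15B run rate.
quarterly_ai_revenue = 3.75e9
print(f"${run_rate(quarterly_ai_revenue, 4) / 1e9:.1f}B annual run rate")
# prints "$15.0B annual run rate"
```

Note that a run rate is a snapshot, not a forecast: it assumes the latest period's pace holds for a full year, which is why fast-growing businesses often quote it instead of trailing-twelve-month revenue.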

The $20 billion+ figure for Amazon's internal chip business is more striking. Amazon began designing custom silicon with Graviton (general-purpose compute) and Inferentia (AI inference), later adding Trainium (AI training). These chips power Amazon's own AI infrastructure and are offered to AWS customers who prefer Amazon silicon over NVIDIA GPUs for cost or availability reasons. A $20 billion chip revenue line makes Amazon's silicon division larger than many standalone semiconductor companies — and creates a structural cost advantage for AWS AI services over competitors that must purchase all their accelerators from NVIDIA.

The Three-Way Cloud AI Race

AWS's $15 billion AI run rate positions it in a clear three-way race with Microsoft Azure and Google Cloud for dominance of AI infrastructure. Microsoft's cloud AI business, powered by its OpenAI partnership and Azure AI services, has been growing rapidly through enterprise Copilot deployments. Google Cloud's AI business benefits from Gemini integration and Google's TPU silicon advantage. AWS competes with breadth — the largest cloud platform by total revenue, the widest enterprise customer base, and now a custom silicon division that reduces its NVIDIA dependency. As Anthropic's multi-gigawatt compute deal with Google illustrates, the cloud AI infrastructure race is increasingly about who can supply compute at the scale frontier AI companies require.

Frequently Asked Questions

What is AWS's AI annual revenue run rate?

AWS's AI business has reached a $15 billion annual revenue run rate, covering services including Amazon Bedrock, SageMaker, Amazon Q, and AI-attributed infrastructure workloads. Amazon's internal custom chip division generates an additional $20 billion+ annually from Graviton, Trainium, and Inferentia processors.

What are Amazon's custom AI chips?

Amazon designs custom AI chips under two primary product lines: Trainium (optimized for AI model training) and Inferentia (optimized for AI inference/deployment). These chips are used internally to power AWS AI services and are available to AWS customers as an alternative to NVIDIA GPUs, typically at lower cost.

How does AWS compare to Microsoft Azure and Google Cloud in AI?

All three cloud providers are competing aggressively in AI infrastructure. Microsoft Azure benefits from its deep OpenAI partnership and enterprise Copilot deployments. Google Cloud leverages Gemini integration and its TPU custom silicon. AWS competes with its largest enterprise customer base, widest service breadth, and its growing custom chip business that reduces NVIDIA dependency.

The Bottom Line

A $15 billion AI run rate and a $20 billion+ chip business make AWS one of the most significant AI infrastructure companies in the world — not just a cloud provider that offers AI services, but a vertically integrated AI stack from custom silicon through managed model APIs. The chip revenue figure is the more surprising number: it means Amazon has quietly built a semiconductor business larger than many dedicated chip companies, entirely to support its own cloud AI ambitions. That internal silicon advantage is the long-term competitive moat that could determine which cloud provider wins the AI infrastructure race.