AWS AI Revenue Just Hit $15 Billion — Amazon Reveals Its AI Number for the First Time

Andy Jassy disclosed the figure in his annual shareholder letter. Custom silicon at $20B. $200B capex committed for 2026. Over $100B already locked in by customers. Here is what the number actually means for builders.

$15B · AWS AI annual run rate
$20B · Custom silicon run rate
$200B · Amazon capex planned for 2026
$100B+ · Customer commitments

Amazon has never broken out its AI revenue before. Quarterly earnings calls, investor days, product announcements — the company consistently declined to give a specific number. That changed on April 15, 2026, when Andy Jassy published his annual letter to shareholders and put a dollar figure on the table for the first time: AWS AI services exceeded a $15 billion annual run rate in Q1 2026.

That disclosure alone would make headlines. But the shareholder letter went further. Jassy also revealed that Amazon’s custom silicon business — Graviton, Trainium, and Nitro chips — has crossed a $20 billion annual run rate. And Amazon plans to spend approximately $200 billion in capital expenditure in 2026, the majority directed at cloud and AI infrastructure. Over $100 billion of that is already committed by customers, including OpenAI.

This is not a forecast. These are realized numbers, disclosed for the first time, in a letter where Jassy clearly decided the AI story was big enough to finally say out loud.


01

Why $15 Billion Is a Bigger Deal Than It Sounds

$15 billion sounds large in isolation. In context, it is almost underplayed. AWS total revenue is running at approximately $142 billion annually, which means AI services already account for more than 10% of the whole business, and that slice is growing far faster than the base.

To frame the trajectory, Jassy made a pointed comparison in his letter: AWS overall had a $58 million annual run rate three years after its launch in 2006. The AI services business is starting from a $15 billion base at a comparable stage in its development arc. The implication is unmistakable — AWS AI is on a path to become as large as AWS itself.

$15B · AWS AI annual run rate, Q1 2026
10% · Share of total AWS revenue
$58M · AWS total run rate, 3 years after launch
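
The share math behind that 10% figure, and the scale of the historical comparison, are worth making explicit. A quick back-of-the-envelope sketch using the figures reported above:

```python
# Back-of-the-envelope math on the figures from the shareholder letter.
ai_run_rate_b = 15.0         # AWS AI annual run rate, $B
aws_run_rate_b = 142.0       # total AWS annualized revenue, $B
aws_2009_run_rate_b = 0.058  # AWS run rate 3 years after launch, $B

ai_share = ai_run_rate_b / aws_run_rate_b
print(f"AI share of AWS revenue: {ai_share:.1%}")          # about 10.6%

base_multiple = ai_run_rate_b / aws_2009_run_rate_b
print(f"AI base vs early AWS base: {base_multiple:.0f}x")  # about 259x
```

In other words, the AI services business is starting from a base roughly 259 times larger than AWS itself had at the same age.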

This is also the clearest signal yet that hyperscaler AI spending is translating into real revenue — not just capex burns and press releases. Microsoft has disclosed Copilot and Azure AI metrics in fragments. Google has referenced AI contributions to Cloud. Amazon just gave a hard number, and it is $15 billion.

02

The Custom Chip Story Nobody Is Talking About Enough

Buried in the same letter is a disclosure that deserves its own headline: Amazon’s custom silicon business has crossed a $20 billion annual run rate. This covers three chip lines — Graviton for general compute, Trainium for AI model training, and Nitro for virtualization and networking acceleration.

For perspective, $20 billion in chip revenue puts Amazon in the same conversation as major semiconductor companies. This is not a cost-reduction play anymore. It is a revenue line, and a substantial one. Amazon is now both the largest cloud provider and one of the largest custom chip vendors in the world. The two businesses reinforce each other: Trainium-trained models run more cheaply on Graviton instances, which encourages more customers to consolidate workloads on AWS, which funds more chip R&D.

Graviton · General Compute

ARM-based CPUs for general cloud workloads. Graviton 4 offers up to 30% better price-performance than comparable x86 instances and is now the default recommendation for most AWS compute workloads.

Builder signal: default to Graviton for cost savings

Trainium · AI Training Silicon

Purpose-built for large-model training. Trainium 2 clusters are what Amazon offers as an alternative to Nvidia H100s, and the OpenAI commitment presumably includes significant Trainium capacity.

Builder signal: Trainium competes directly with Nvidia for training

Nitro · Virtualization & Networking

Offloads the hypervisor and networking stack from the main CPU, giving EC2 instances near-bare-metal performance. Every current-generation AWS instance runs on Nitro; it is the chip that makes AWS's density advantage possible.

Builder signal: Nitro is why AWS compute is faster than it looks on paper

$20B · Combined Run Rate

All three chip lines together have crossed $20 billion in annual revenue, a silicon business approaching AMD's total 2024 revenue and well over a third of Intel's.

Builder signal: Amazon is a chip company now, not just a cloud provider
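
If you want to act on the "default to Graviton" signal, the instance naming convention is easy to check programmatically: Graviton families carry a "g" in the suffix after the generation number (m7g, c7gn, t4g, and so on). A minimal sketch; the helper below is an illustrative heuristic, not an official AWS API, so check the AWS instance-type documentation for the authoritative list:

```python
import re


def is_graviton(instance_type: str) -> bool:
    """Heuristic: Graviton instance families have a 'g' in the letters
    that follow the generation digits (e.g. m7g, c7gn, m7gd, t4g).
    GPU families like g5 start with 'g' but have no such suffix."""
    family = instance_type.split(".")[0]
    match = re.match(r"[a-z]+\d+([a-z-]*)", family)
    return bool(match) and "g" in match.group(1)


print(is_graviton("m7g.large"))    # True  (Graviton general compute)
print(is_graviton("m5.large"))     # False (x86)
print(is_graviton("c7gn.xlarge"))  # True  (Graviton, enhanced networking)
print(is_graviton("g5.xlarge"))    # False (Nvidia GPU family)
```

You could wire this into instance-selection logic or a cost audit script, but treat it as a convenience: the naming convention is documented by AWS and could change.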
03

$200 Billion Committed — What That Means in Practice

Amazon’s planned 2026 capex of approximately $200 billion is the largest single-year capital investment in the company’s history. For context, AWS capex in 2024 was around $75 billion. The number has nearly tripled in two years. This is not incremental infrastructure expansion — it is a structural bet that AI workloads will consume far more compute than current capacity can support.

The demand side is already visible: over $100 billion in customer commitments are already locked in, including a major multi-year deal with OpenAI that represents one of the largest cloud infrastructure contracts ever signed. When customers pre-commit at that scale, Amazon is effectively de-risking the capex before the data centers are even built.

The Q1 2026 earnings report, scheduled for April 29, will be the first chance to see how this capex commitment is showing up in operating leverage. Analysts are modeling operating income of $16.5 to $21.5 billion for Q1 alone — a range that reflects real uncertainty about how quickly AI revenue growth is outpacing infrastructure spend.

04

The Flywheel That Matters for Builders

Here is the thing that most coverage will miss: the $15 billion AWS AI run rate is not just a business milestone. It is a signal about where the AWS product roadmap is going for the next three years.

When AWS generates serious revenue from AI services, those margins fund accelerated R&D on Bedrock, SageMaker, and Trainium. More R&D means more capable models available through Bedrock’s model catalog. More Trainium capacity means lower training costs. Lower training costs mean more frequent fine-tuning cycles become economically viable for smaller teams. The flywheel spins faster as revenue grows, and the people who benefit most are the builders running workloads on top of the infrastructure.

This is the same dynamic that played out with raw AWS compute between 2010 and 2020. S3 storage costs dropped more than 80% over that decade. EC2 instance prices fell similarly. As AWS compute revenue scaled, Amazon competed on price to grow the market, and builders got dramatically cheaper infrastructure as a byproduct. AI services are likely to follow the same curve — the $15 billion run rate is the inflection point where that flywheel starts to visibly accelerate.
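
An 80% price drop over a decade compounds less dramatically per year than it sounds. A quick sketch of the implied annual decline, assuming a smooth curve (the actual S3 cuts came in discrete steps):

```python
# If prices fall 80% over 10 years, what smooth annual decline does that imply?
total_drop = 0.80
years = 10

remaining = 1.0 - total_drop                    # 20% of the original price remains
annual_decline = 1.0 - remaining ** (1 / years)
print(f"Implied annual price decline: {annual_decline:.1%}")  # about 14.9%
```

A sustained ~15% annual price decline is the kind of curve the $15 billion AI run rate could fund if the same competitive dynamic repeats.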

05

How This Fits the Broader Hyperscaler Picture

Amazon’s disclosure lands at a specific moment in the AI infrastructure story. Microsoft has been the most transparent hyperscaler about AI revenue, regularly citing Azure AI growth rates in the 150-200% range. Google Cloud has referenced AI as its fastest-growing segment without putting a hard number on it. Amazon held the number back the longest — and when Jassy finally released it, the figure was $15 billion.

The rough picture emerging is that the three major hyperscalers are collectively generating somewhere in the range of $50-60 billion in annualized AI cloud revenue right now, growing at rates that will make that figure look small within 18 months. The spending is not purely speculative. Real customers are running real workloads and paying real bills. The question was always whether AI infrastructure investment would find paying customers at scale. The $15 billion answer is: yes.

For anyone building on AWS today, the practical takeaways are concrete: Bedrock’s model catalog will expand significantly over 2026-2027 as Amazon deploys its capex. Trainium-based training will get cheaper and more accessible. And the competitive pressure on Nvidia will intensify as Amazon’s custom silicon revenue grows — which is good news for builders who want more infrastructure options at lower price points.
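
For builders evaluating Bedrock today, the entry point is small. A minimal sketch using boto3's Bedrock runtime Converse API; the model ID is a placeholder, and availability depends on your account and region:

```python
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build the kwargs for a Bedrock runtime Converse call.
    Pure function: no AWS credentials needed to construct the payload."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }


def run_prompt(model_id: str, prompt: str) -> str:
    """Live call; requires boto3 and AWS credentials with Bedrock access."""
    import boto3  # imported here so the pure helper above works without it

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(model_id, prompt))
    return response["output"]["message"]["content"][0]["text"]
```

Usage would look like `run_prompt("amazon.titan-text-express-v1", "...")`, with the model ID swapped for whatever your account's Bedrock catalog actually offers.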

The Verdict
$15 billion is not a vanity number. It is confirmation that AI has moved from experimental line item to core AWS revenue driver. The capex commitment, the customer lock-ins, and the chip business all point in the same direction: Amazon is treating AI infrastructure as its most important product investment in 20 years. Builders who understand how to deploy on this infrastructure are building on top of a flywheel that is just starting to turn.

The practical implication for anyone building AI-powered products or workflows today: the AWS ecosystem is not going to get more expensive or more complicated over the next two years. It is going to get cheaper, more capable, and more integrated. That is a tailwind worth understanding before the April 29 earnings call makes these numbers mainstream.

Build on the Infrastructure That Just Crossed $15B.

The 2-day in-person Precision AI Academy bootcamp. 5 cities. $1,490. 40 seats max. Thursday-Friday cohorts, June-October 2026.

Reserve Your Seat

Published By

Precision AI Academy

Practitioner-focused AI education · 2-day in-person bootcamp in 5 U.S. cities

Precision AI Academy publishes deep-dives on applied AI engineering for working professionals. Founded by Bo Peng (Kaggle Top 200) who leads the in-person bootcamp in Denver, NYC, Dallas, LA, and Chicago.

Kaggle Top 200 · Federal AI Practitioner · 5 U.S. Cities · Thu-Fri Cohorts