Amazon has never broken out its AI revenue before. Quarterly earnings calls, investor days, product announcements — the company consistently declined to give a specific number. That changed on April 15, 2026, when Andy Jassy published his annual letter to shareholders and put a dollar figure on the table for the first time: AWS AI services exceeded a $15 billion annual run rate in Q1 2026.
That disclosure alone would make headlines. But the shareholder letter went further. Jassy also revealed that Amazon’s custom silicon business — Graviton, Trainium, and Nitro chips — has crossed a $20 billion annual run rate. And Amazon plans to spend approximately $200 billion in capital expenditure in 2026, the majority directed at cloud and AI infrastructure. Over $100 billion of that is already committed by customers, including OpenAI.
This is not a forecast. These are realized numbers, disclosed for the first time, in a letter where Jassy clearly decided the AI story was big enough to finally say out loud.
The 5-Second Version
- $15B AI run rate — first-ever disclosure from Amazon, representing roughly 10% of AWS’s $142B total run rate.
- $20B custom silicon — Graviton, Trainium, and Nitro chips now form one of the largest custom chip businesses in the world.
- $200B capex in 2026 — the single largest capital investment program in Amazon’s history, mostly going to AI infrastructure.
- $100B+ already committed — customers including OpenAI have locked in long-term contracts, reducing demand risk.
- The flywheel signal — as AWS AI revenue scales, Amazon invests more in making AI services cheaper and easier. Builders benefit.
Why $15 Billion Is a Bigger Deal Than It Sounds
$15 billion sounds large in isolation. In context, it is almost underplayed. AWS total revenue is running at approximately $142 billion annually, which means AI services are already 10% of the whole business — and that 10% is growing far faster than the base.
To frame the trajectory, Jassy made a pointed comparison in his letter: AWS overall had a $58 million annual run rate three years after its launch in 2006. The AI services business is starting from a $15 billion base at a comparable stage in its development arc. The implication is unmistakable — AWS AI is on a path to become as large as AWS itself.
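The arithmetic behind both comparisons is worth making explicit. A quick back-of-envelope check, using only the figures cited above (the $58M early-AWS run rate and today's $142B and $15B figures are from the letter as reported; the percentages are derived, not disclosed):

```python
# Back-of-envelope check of the run-rate comparisons in Jassy's letter.
aws_total_run_rate = 142e9   # AWS overall annual run rate
ai_run_rate = 15e9           # AWS AI services annual run rate
early_aws_run_rate = 58e6    # AWS run rate three years after its 2006 launch

# Share of AWS revenue already coming from AI services (~10.6%)
ai_share = ai_run_rate / aws_total_run_rate

# How much larger the AI business's starting base is than early AWS's (~259x)
base_multiple = ai_run_rate / early_aws_run_rate
```

The ~259x gap between the two starting bases is the whole point of Jassy's comparison: AWS AI is beginning from a base hundreds of times larger than AWS itself had at the same age.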
This is also the clearest signal yet that hyperscaler AI spending is translating into real revenue — not just capex burns and press releases. Microsoft has disclosed Copilot and Azure AI metrics in fragments. Google has referenced AI contributions to Cloud. Amazon just gave a hard number, and it is $15 billion.
The Custom Chip Story Nobody Is Talking About Enough
Buried in the same letter is a disclosure that deserves its own headline: Amazon’s custom silicon business has crossed a $20 billion annual run rate. This covers three chip lines — Graviton for general compute, Trainium for AI model training, and Nitro for virtualization and networking acceleration.
For perspective, $20 billion in chip revenue puts Amazon in the same conversation as major semiconductor companies. This is not a cost-reduction play anymore. It is a revenue line, and a substantial one. Amazon is now both the largest cloud provider and one of the largest custom chip vendors in the world. The two businesses reinforce each other: Trainium-trained models run more cheaply on Graviton instances, which encourages more customers to consolidate workloads on AWS, which funds more chip R&D.
Graviton — General Compute
ARM-based CPUs for general cloud workloads. Graviton 4 offers up to 30% better price-performance than comparable x86 instances. Now the default recommendation for most AWS compute workloads.
Trainium — AI Training Silicon
Purpose-built for large model training. Amazon positions Trainium 2 clusters as its alternative to Nvidia's H100 GPUs. The $100B+ in customer commitments — which include OpenAI — presumably covers significant Trainium capacity.
Nitro — Virtualization & Networking
Offloads hypervisor and networking work from the main CPU, giving EC2 instances near-bare-metal performance. Every modern EC2 instance runs on the Nitro system. It is the chip that makes AWS's density advantage possible.
Combined Run Rate
All three chip lines together have crossed $20 billion in annual revenue. That figure approaches AMD's total 2024 revenue of roughly $26 billion and is close to 40% of Intel's.
$200 Billion Committed — What That Means in Practice
Amazon’s planned 2026 capex of approximately $200 billion is the largest single-year capital investment in the company’s history. For context, AWS capex in 2024 was around $75 billion. The number has nearly tripled in two years. This is not incremental infrastructure expansion — it is a structural bet that AI workloads will consume far more compute than current capacity can support.
The demand side is already visible: over $100 billion in customer commitments are already locked in, including a major multi-year deal with OpenAI that represents one of the largest cloud infrastructure contracts ever signed. When customers pre-commit at that scale, Amazon is effectively de-risking the capex before the data centers are even built.
The Q1 2026 earnings report, scheduled for April 29, will be the first chance to see how this capex commitment is showing up in operating leverage. Analysts are modeling operating income of $16.5 to $21.5 billion for Q1 alone — a range that reflects real uncertainty about how quickly AI revenue growth is outpacing infrastructure spend.
The Flywheel That Matters for Builders
Here is the thing that most coverage will miss: the $15 billion AWS AI run rate is not just a business milestone. It is a signal about where the AWS product roadmap is going for the next three years.
When AWS generates serious revenue from AI services, those margins fund accelerated R&D on Bedrock, SageMaker, and Trainium. More R&D means more capable models available through Bedrock’s model catalog. More Trainium capacity means lower training costs. Lower training costs mean more frequent fine-tuning cycles become economically viable for smaller teams. The flywheel spins faster as revenue grows, and the people who benefit most are the builders running workloads on top of the infrastructure.
This is the same dynamic that played out with raw AWS compute between 2010 and 2020. S3 storage costs dropped more than 80% over that decade. EC2 instance prices fell similarly. As AWS compute revenue scaled, Amazon competed on price to grow the market, and builders got dramatically cheaper infrastructure as a byproduct. AI services are likely to follow the same curve — the $15 billion run rate is the inflection point where that flywheel starts to visibly accelerate.
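If AI service pricing were to follow the same curve as that S3 decade — an 80% total drop, as stated above — the implied annual price decline works out to roughly 15% per year. A small sketch of that curve (an illustrative assumption extrapolated from the S3 figure, not a forecast):

```python
# Implied annual price decline from an 80% total drop over 10 years,
# per the S3 storage-cost figure cited in the text.
total_drop = 0.80
years = 10
annual_decline = 1 - (1 - total_drop) ** (1 / years)   # ~14.9% per year

# Hypothetical cost of a $1.00 unit of AI compute after 0, 5, and 10 years
# on that curve (illustrative only).
cost_curve = [round((1 - annual_decline) ** n, 3) for n in (0, 5, 10)]
```

On that curve a $1.00 unit of compute costs about $0.45 after five years and $0.20 after ten — the compounding that made the 2010-2020 flywheel so powerful for builders.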
How This Fits the Broader Hyperscaler Picture
Amazon’s disclosure lands at a specific moment in the AI infrastructure story. Microsoft has been the most transparent hyperscaler about AI revenue, regularly citing Azure AI growth rates in the 150-200% range. Google Cloud has referenced AI as its fastest-growing segment without putting a hard number on it. Amazon held the number back the longest — and when Jassy finally released it, the figure was $15 billion.
The rough picture emerging is that the three major hyperscalers are collectively generating somewhere in the range of $50-60 billion in annualized AI cloud revenue right now, growing at rates that will make that figure look small within 18 months. The spending is not purely speculative. Real customers are running real workloads and paying real bills. The question was always whether AI infrastructure investment would find paying customers at scale. The $15 billion answer is: yes.
For anyone building on AWS today, the practical takeaways are concrete: Bedrock’s model catalog will expand significantly over 2026-2027 as Amazon deploys its capex. Trainium-based training will get cheaper and more accessible. And the competitive pressure on Nvidia will intensify as Amazon’s custom silicon revenue grows — which is good news for builders who want more infrastructure options at lower price points.
The net effect for anyone building AI-powered products or workflows today: the AWS ecosystem is not going to get more expensive or more complicated over the next two years. It is going to get cheaper, more capable, and more integrated. That is a tailwind worth understanding before the April 29 earnings call makes these numbers mainstream.
Build on the Infrastructure That Just Crossed $15B.
The 2-day in-person Precision AI Academy bootcamp. 5 cities. $1,490. 40 seats max. Thursday-Friday cohorts, June-October 2026.
Reserve Your Seat