Anthropic Just Bought 3.5 Gigawatts of Google TPUs

Broadcom, Google, and Anthropic announced a multi-billion-dollar compute partnership on April 6. Anthropic revealed a $30B annual revenue run rate. Here is what the compute war actually means for everyone who is not Anthropic.

3.5GW
TPU capacity committed
$30B
Anthropic annual run rate
$21B
Broadcom 2026 AI revenue
2027
New capacity starts

On April 6, 2026, Broadcom announced an expanded chip deal with Google that formalizes their decade-long partnership on Google's custom TPU processors, along with a separate partnership that gives Anthropic, the maker of Claude, access to roughly 3.5 gigawatts of computing capacity drawn from those same TPUs starting in 2027. The same filing disclosed that Anthropic's annual revenue run rate has crossed $30 billion.

That is not a typo. Less than three years after Claude 2 shipped, Anthropic is running at a revenue pace that beats almost every publicly traded SaaS company in existence. The compute commitment makes sense only if you think the revenue is still accelerating — and Anthropic obviously does.

The 5-Second Version

01

What Actually Got Announced

The easy way to understand this is that three separate deals got stacked on top of each other in a single announcement, and the press compressed them into one headline. Here is what each piece actually is.

Deal 1: Broadcom ↔ Google. Broadcom committed to designing and supplying future generations of Google's Tensor Processing Units (TPUs) through 2031. Broadcom has been Google's silent silicon partner since 2016 — quietly engineering the chips while Google owned the brand and the cloud business. This new agreement formalizes that relationship and expands it to cover networking and the full next-generation AI rack, not just the chips.

Deal 2: Anthropic ↔ Google. Anthropic signed an expanded cloud agreement giving it access to about 3.5 gigawatts of TPU-based computing capacity, starting in 2027. That is on top of the roughly 1 gigawatt already coming online under the Google Cloud agreement announced in October 2025, bringing Anthropic's total committed TPU capacity through 2028 to roughly 4.5 GW.

Deal 3: Anthropic reveals its numbers. In the same SEC filing, Broadcom disclosed — and Anthropic confirmed — that Anthropic's annual run rate has crossed $30 billion. Earlier industry estimates had them at around $19 billion as recently as February. That is a jump of more than 50% in roughly two months.

02

How Big Is 3.5 Gigawatts, Actually

Most of the coverage has treated "3.5 gigawatts" as an abstract number. It is not abstract. It is huge, and the scale is the story.

3.5 GW
Anthropic's committed TPU capacity
~3
Large nuclear reactors equivalent
2.8M
US homes worth of continuous power

For comparison: the entire city of San Francisco uses about 1.2 gigawatts of continuous electric power. Anthropic's committed TPU footprint is roughly three San Franciscos worth of AI computing, running 24/7, dedicated to training and serving Claude.

For even more context: as recently as 2022, OpenAI's total compute footprint was estimated at under 100 megawatts. In roughly four years, frontier-lab compute has grown about 35x. This is the part most people are still not internalizing. The scale of AI infrastructure is now comparable to heavy industry, and the cost structure of frontier training is closer to building a steel mill than writing software.
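The comparisons above are straightforward arithmetic. Here is a quick sanity check; the reference figures for reactor output, household draw, and city load are rough public averages assumed for illustration, not numbers from the announcement:

```python
# Back-of-envelope check on the power comparisons in the article.
# All reference figures below are assumed rough averages, not filing data.
TPU_COMMITMENT_GW = 3.5   # Anthropic's committed TPU capacity
REACTOR_GW = 1.1          # typical large nuclear reactor output (assumption)
HOME_KW = 1.2             # avg continuous draw of a US home, ~10,500 kWh/yr (assumption)
SF_GW = 1.2               # approximate continuous load of San Francisco (assumption)
OPENAI_2022_MW = 100      # estimated 2022 OpenAI footprint, upper bound (assumption)

reactors = TPU_COMMITMENT_GW / REACTOR_GW                  # reactor equivalents
homes_millions = TPU_COMMITMENT_GW * 1e6 / HOME_KW / 1e6   # GW -> kW, then per home
san_franciscos = TPU_COMMITMENT_GW / SF_GW                 # city equivalents
growth = TPU_COMMITMENT_GW * 1000 / OPENAI_2022_MW         # GW -> MW, growth multiple

print(f"~{reactors:.1f} reactors, ~{homes_millions:.1f}M homes, "
      f"~{san_franciscos:.1f} San Franciscos, ~{growth:.0f}x growth")
```

Under these assumptions the numbers land close to the article's callouts: about three reactors, just under three million homes, and a 35x growth multiple.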

03

Why Anthropic Is Buying TPUs Instead of NVIDIA GPUs

The obvious question: Anthropic, like every other major AI lab, has built almost everything on NVIDIA. Why TPUs now?

Three honest reasons.

Reason 1: Availability. NVIDIA H100s and B200s are still capacity-constrained: you can order them, but you wait. Google has quietly built out a massive TPU supply chain through Broadcom, and most of that capacity has been used internally for Search, YouTube, and Gemini. Opening it to Anthropic is one of the largest contract shifts in cloud computing history, and it puts Anthropic at the front of the compute queue without fighting every other lab for the same NVIDIA allocation.

Reason 2: Vendor risk. Anthropic now runs a business with a $30 billion run rate that would grind to a halt if NVIDIA's supply chain hiccuped for two weeks. Diversifying to a second chip architecture is not a preference; it is a survival move. Every serious infrastructure operator eventually learns that single-vendor dependencies are existential, and Anthropic is acting on that lesson.

Reason 3: Google as an investor. Google has been an Anthropic investor since 2023 and has made multiple multi-billion-dollar commitments since then. The two companies' economics are already intertwined enough that moving Claude workloads onto Google-designed chips on Google Cloud is less a new deal than the formalization of an existing alliance.

04

What This Actually Means for Everyone Else

01

Claude Pricing Is Going to Get Cheaper

A 3.5 GW capacity expansion paired with TPU economics (which are more favorable than NVIDIA rentals at scale) means Anthropic's cost per million tokens drops. When infrastructure costs drop, pricing follows. Expect Claude Sonnet and Haiku pricing to get more aggressive through 2026 and 2027, especially for long-context workloads.
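To make the pricing mechanics concrete, here is a sketch of how serving cost per million tokens falls when infrastructure gets cheaper. Every dollar figure and throughput number below is an invented placeholder, not Anthropic's actual economics:

```python
# Hypothetical illustration of why cheaper compute flows into cheaper tokens.
# All figures are invented placeholders, NOT real Anthropic or Google numbers.
def cost_per_million_tokens(hourly_chip_cost: float,
                            tokens_per_second: float) -> float:
    """Serving cost in dollars per 1M output tokens for one accelerator."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_chip_cost / tokens_per_hour * 1_000_000

# Scenario A: rented GPU at $4.00/hr serving 2,000 tokens/s (placeholder)
gpu_cost = cost_per_million_tokens(4.00, 2000)
# Scenario B: owned TPU capacity at $2.50/hr effective, same throughput (placeholder)
tpu_cost = cost_per_million_tokens(2.50, 2000)

print(f"GPU: ${gpu_cost:.2f}/M tokens, TPU: ${tpu_cost:.2f}/M tokens")
```

The point is structural, not the specific numbers: cost per million tokens is linear in hourly infrastructure cost at fixed throughput, so a cheaper chip bill translates directly into room for price cuts.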

Per-token costs will keep falling
02

The Compute Gap Between Labs and Everyone Else Widens

An indie AI startup cannot buy 3.5 GW of compute at any price. The gap between what frontier labs can afford and what the rest of the industry can touch is now measured in gigawatts, not millions of dollars. This cements the API/foundation-model era: everyone who is not a frontier lab builds ON TOP of models they rent.
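"Building on top of models you rent" is concretely just an API call. A minimal sketch, assuming the `anthropic` Python SDK is installed and an `ANTHROPIC_API_KEY` is set; the model id is a placeholder and may not match current offerings:

```python
# Minimal sketch of the rent-the-model pattern: call a frontier API instead
# of training your own. Assumes `pip install anthropic` and ANTHROPIC_API_KEY.
import os

def build_prompt(text: str) -> list[dict]:
    """Pure helper that shapes the request; testable without a network call."""
    return [{"role": "user",
             "content": f"Summarize in two sentences:\n\n{text}"}]

def summarize(text: str) -> str:
    import anthropic  # deferred so build_prompt works without the SDK installed
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=300,
        messages=build_prompt(text),
    )
    return message.content[0].text

if __name__ == "__main__" and os.getenv("ANTHROPIC_API_KEY"):
    print(summarize("Broadcom, Google, and Anthropic announced a compute deal."))
```

That is the entire moat-free part of the stack: a few lines of glue around capacity someone else spent gigawatts building.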

Build with APIs, not from scratch
03

Broadcom Just Became the Most Important AI Company Nobody Talks About

If Broadcom is designing Google's TPUs and collecting $21B this year and a projected $42B next year for AI silicon, it is functionally the second most important chip company in the world after NVIDIA. Broadcom's stock jumped 6% on the announcement. Builders should watch the Broadcom/Google/Anthropic triangle the same way they watch NVIDIA/OpenAI/Microsoft.

Broadcom is the new hidden giant
04

There Is a Quiet SEC Clause Everyone Missed

Broadcom's SEC filing states the 3.5 GW is "dependent on Anthropic's continued commercial success." Translation: if Anthropic's growth stalls, the capacity does not fully materialize. This is not a guaranteed commitment but a conditional one, structured to protect Broadcom from building capacity it cannot resell. Watch this clause over the next 12 months.

Read the fine print, always
05

The Bottom Line

The Verdict
The most important number in the announcement is not 3.5 GW. It is $30B. Anthropic's revenue is real, it is growing fast, and the compute deal is the downstream consequence of a business that actually works. Everything else is accounting.

If you are a builder, the takeaway is not "Anthropic is buying chips, how exciting." It is "Claude pricing is going to keep dropping, capacity is going to keep expanding, and the right move is to build more aggressively on the APIs that are already available today." The compute war is being fought by the people who can afford to fight it. You are not one of them. Neither am I. Our job is to take the cheap, abundant tokens that fall out of their war and build useful things with them.

That is exactly what we teach at Precision AI Academy. Two days. In person. In your city. Real deployed projects using the exact APIs Anthropic is about to burn 3.5 gigawatts serving.

Stop Reading AI News. Start Building With It.

The 2-day in-person Precision AI Academy bootcamp. 5 cities. $1,490. 40 seats max. Thursday-Friday cohorts, June-October 2026.

Reserve Your Seat

Published By

Precision AI Academy

Practitioner-focused AI education · 2-day in-person bootcamp in 5 U.S. cities

Precision AI Academy publishes deep-dives on applied AI engineering for working professionals. Founded by Bo Peng (Kaggle Top 200) who leads the in-person bootcamp in Denver, NYC, Dallas, LA, and Chicago.

Kaggle Top 200 · Federal AI Practitioner · 5 U.S. Cities · Thu-Fri Cohorts