On April 6, 2026, Broadcom announced an expanded chip deal with Google that formalizes their 10-year-old partnership on Google's custom TPU processors, along with a separate partnership with Anthropic that gives the Claude maker access to roughly 3.5 gigawatts of computing capacity drawn from those same TPUs, starting in 2027. In the accompanying SEC filing, Broadcom disclosed that Anthropic's annual revenue run rate has crossed $30 billion.
That is not a typo. Less than three years after Claude 2 shipped, Anthropic is running at a revenue pace that beats almost every publicly traded SaaS company in existence. The compute commitment makes sense only if you think the revenue is still accelerating — and Anthropic obviously does.
The 5-Second Version
- Broadcom, Google, and Anthropic announced a multi-party deal on April 6, 2026.
- Anthropic gets access to 3.5 GW of Google TPU capacity starting 2027 — on top of 1 GW already coming online in 2026.
- Anthropic revealed its annual revenue run rate is $30 billion, up from roughly $19 billion earlier this year.
- Broadcom committed to designing and supplying Google TPUs through 2031, formalizing a partnership that has operated quietly since 2016.
- Broadcom stock jumped 6%; the company is estimated to book $21B in AI revenue from the Anthropic business in 2026 and $42B in 2027.
What Actually Got Announced
The easy way to understand this is that three separate deals got stacked on top of each other in a single announcement, and the press compressed them into one headline. Here is what each piece actually is.
Deal 1: Broadcom ↔ Google. Broadcom committed to designing and supplying future generations of Google's Tensor Processing Units (TPUs) through 2031. Broadcom has been Google's silent silicon partner since 2016 — quietly engineering the chips while Google owned the brand and the cloud business. This new agreement formalizes that relationship and expands it to cover networking and the full next-generation AI rack, not just the chips.
Deal 2: Anthropic ↔ Google. Anthropic signed an expanded cloud agreement giving it access to about 3.5 gigawatts of TPU-based computing capacity, starting in 2027. This is on top of roughly 1 gigawatt of capacity that was already coming online through the Google Cloud agreement announced in October 2025. Total committed Anthropic TPU capacity through 2028: north of 4.5 GW.
Deal 3: Anthropic reveals its numbers. In the same SEC filing, Broadcom disclosed, and Anthropic confirmed, that Anthropic's annual run rate has crossed $30 billion. Industry estimates had put the figure at around $19 billion as recently as February. That is a jump of roughly 58% in about two months.
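If you want to check the growth claim yourself, the arithmetic is one line. A quick sanity check using only the figures cited above (note the $19B February number is an industry estimate, not a disclosed figure):

```python
# Sanity-check the run-rate jump described above (figures in billions of dollars).
prior_run_rate = 19.0   # industry estimate as of February (approximate)
new_run_rate = 30.0     # run rate disclosed in the April filing

growth_pct = (new_run_rate - prior_run_rate) / prior_run_rate * 100
print(f"Run-rate growth in ~2 months: {growth_pct:.0f}%")  # ~58%
```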
How Big Is 3.5 Gigawatts, Actually?
Most of the coverage has treated "3.5 gigawatts" as an abstract number. It is not abstract. It is huge, and the scale is the story.
For comparison: the entire city of San Francisco uses about 1.2 gigawatts of continuous electric power. Anthropic's committed TPU footprint is roughly three San Franciscos' worth of AI computing, running 24/7, dedicated to training and serving Claude.
For even more context: as recently as 2022, OpenAI's total compute footprint was estimated at under 100 megawatts. In roughly four years, the compute a single frontier lab commands has grown about 35x. This is the part most people are still not internalizing. The scale of AI infrastructure is now comparable to heavy industry, and the cost structure of frontier training is closer to building a steel mill than writing software.
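The multiples above fall straight out of the numbers cited in this piece. A back-of-envelope sketch (all inputs are round estimates, not measured figures):

```python
# Back-of-envelope scale comparison, using the round figures cited above.
SF_LOAD_GW = 1.2          # San Francisco's continuous electric load (cited estimate)
NEW_TPU_GW = 3.5          # Anthropic's new TPU capacity starting 2027
PRIOR_TPU_GW = 1.0        # capacity already coming online in 2026
OPENAI_2022_GW = 0.1      # OpenAI's estimated total footprint in 2022 (<100 MW)

san_franciscos = NEW_TPU_GW / SF_LOAD_GW        # ~2.9 cities' worth of power
total_committed = NEW_TPU_GW + PRIOR_TPU_GW     # 4.5 GW through 2028
growth_multiple = NEW_TPU_GW / OPENAI_2022_GW   # ~35x the 2022 frontier footprint

print(f"{san_franciscos:.1f} San Franciscos, "
      f"{total_committed:.1f} GW committed, "
      f"{growth_multiple:.0f}x the 2022 frontier footprint")
```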
Why Anthropic Is Buying TPUs Instead of NVIDIA GPUs
The obvious question: Anthropic, like every other major AI lab, has built almost everything on NVIDIA. Why TPUs now?
Three honest reasons.
Reason 1: Availability. NVIDIA H100s and B200s are still capacity-constrained. You can order them, but you wait. Google has quietly built out massive TPU fabrication capacity through Broadcom, and most of it has been used internally for Search, YouTube, and Gemini. Making it available to Anthropic is one of the largest contract shifts in cloud computing history, and it gets Anthropic to the front of the compute queue without fighting every other lab for the same NVIDIA allocation.
Reason 2: Vendor risk. Anthropic is now running a business with a $30 billion run rate, a business that would stop working if NVIDIA's supply chain hiccuped for two weeks. Diversifying to a second chip architecture is not a preference; it is a survival move. Every serious infrastructure operator eventually learns that single-vendor dependencies are existential risks, and Anthropic is acting on that lesson now.
Reason 3: Google as an investor. Google Cloud has been an Anthropic investor since 2023 and has made multiple multi-billion-dollar commitments to Anthropic since. The economic relationship between the two companies is already intertwined enough that moving Claude workloads onto Google-designed chips on Google Cloud is more a formalization of an existing alliance than a new deal.
What This Actually Means for Everyone Else
Claude Pricing Is Going to Drop
A 3.5 GW capacity expansion paired with TPU economics (which are more favorable than NVIDIA rentals at scale) means Anthropic's cost per million tokens drops. When infrastructure costs drop, pricing follows. Expect Claude Sonnet and Haiku pricing to get more aggressive through 2026 and 2027, especially for long-context workloads.
The Compute Gap Between Labs and Everyone Else Widens
An indie AI startup cannot buy 3.5 GW of compute at any price. The gap between what frontier labs can afford and what the rest of the industry can touch is now measured in gigawatts, not millions of dollars. This cements the API/foundation-model era: everyone who is not a frontier lab builds ON TOP of models they rent.
Broadcom Just Became the Most Important AI Company Nobody Talks About
If Broadcom is designing Google's TPUs and being paid $21B this year and $42B next year for AI silicon, it is functionally the second most important chip company in the world after NVIDIA. The stock jumped 6% on the announcement. Builders should watch the Broadcom/Google/Anthropic triangle the same way they watch NVIDIA/OpenAI/Microsoft.
There Is a Quiet SEC Clause Everyone Missed
Broadcom's SEC filing states the 3.5 GW is "dependent on Anthropic's continued commercial success." Translation: if Anthropic's growth stalls, the capacity doesn't fully materialize. This is not a guaranteed commitment; it is a conditional one, structured to protect Broadcom and Google from building out capacity they can't resell. Watch this clause over the next 12 months.
The Bottom Line
If you are a builder, the takeaway is not "Anthropic is buying chips, how exciting." It is "Claude pricing is going to keep dropping, capacity is going to keep expanding, and the right move is to build more aggressively on the APIs that are already available today." The compute war is being fought by the people who can afford to fight it. You are not one of them. Neither am I. Our job is to take the cheap, abundant tokens that fall out of their war and build useful things with them.
That is exactly what we teach at Precision AI Academy. Two days. In person. In your city. Real deployed projects using the exact APIs Anthropic is about to burn 3.5 gigawatts serving.
Stop Reading AI News. Start Building With It.
The 2-day in-person Precision AI Academy bootcamp. 5 cities. $1,490. 40 seats max. Thursday-Friday cohorts, June-October 2026.
Reserve Your Seat