The Internet Built the Pipe Last. AI Is Building It First.

This Time, the Pipes Come First
shashi.co  ·  Enterprise AI  ·  Infrastructure

Anthropic's multi-gigawatt compute deal with Google and Broadcom is not the early internet infrastructure story repeating itself. It is that story running in reverse.

$30B · Anthropic annualized run-rate revenue, April 2026
3x+ · Revenue growth since end of 2025 (from roughly $9B)
1,000+ · Customers spending $1M+ annually, up from 500 in February

The early internet had a plumbing problem that nobody wanted to talk about at the time. Bandwidth providers, backbone operators, and data center builders were treated as unglamorous inputs to the real story: browsers, portals, search engines, and e-commerce storefronts that promised to change everything. The infrastructure investment followed the application hype, not the other way around. When the hype corrected, companies like Global Crossing and WorldCom collapsed under the weight of capacity they had overbuilt chasing a demand curve that had not yet arrived.

Anthropic's announcement today, a new agreement with Google and Broadcom for multiple gigawatts of next-generation Tensor Processing Unit capacity coming online starting in 2027, is structurally different from that episode. The capital is flowing to the pipe first, before the application layer is fully defined, and with something the early internet era rarely had: documented, named, and accelerating enterprise demand pulling it forward.

The Demand Signal Is Not Speculative

Anthropic's run-rate revenue has gone from approximately $9B at the end of 2025 to over $30B today. The more telling number is the enterprise customer cohort. In February, when the company announced its Series G funding round, it reported that over 500 business customers were each spending more than $1 million annually. As of this announcement, that number has crossed 1,000. That doubling happened in less than two months.

For context on what that means: these are not trial accounts or pilot deployments. A business committing more than $1 million annually to a single AI provider has made a procurement decision, cleared legal and security review, and integrated the capability into at least some production workflow. The velocity at which this cohort doubled is not a marketing metric. It is a leading indicator of enterprise adoption moving from evaluation to dependency.

Where the Internet Parallel Breaks Down

The early internet comparison is worth taking seriously, because the concern it carries is legitimate. Compute capacity committed today and coming online in 2027, ahead of the application layer that will consume it, is structurally the same bet that destroyed Global Crossing: the bet that demand materializes before the capital costs come due. That assumption was wrong in 2000. The question is whether it is wrong now.

The most important difference is not in the technology. It is in the supply chain's geopolitical status. Semiconductor export controls, the concentration of advanced chip manufacturing in Taiwan and South Korea, and the active policy competition between Washington and Beijing have made compute capacity a national security asset. Anthropic's language about siting the vast majority of this new capacity in the United States is not boilerplate. It lands inside a policy environment that is actively rewarding domestic infrastructure investment and penalizing foreign-controlled dependencies. That did not exist for the internet buildout. Global Crossing was not a strategic asset. These gigawatts arguably are.

The chip layer itself also carries weight that 1990s internet plumbing never did. NVIDIA's processors, Google's TPUs, and AWS Trainium are not interchangeable. Anthropic running workloads across all three platforms and matching each to the architecture best suited for it is a resilience strategy, not a procurement efficiency play. Enterprise buyers notice that distinction. A vendor whose infrastructure can survive a chip supply disruption on any single platform is a different kind of counterparty than one whose entire model depends on a single hardware relationship.

And unlike early internet bandwidth, which was invisible to enterprise buyers until they ran out of it, AI compute scarcity is already a named operational variable. Chief information officers are managing inference latency, token costs, and throughput limits in production today. The constraint is priced into planning cycles. That is a different demand signal than the speculative build-it-and-they-will-come logic that sank the fiber overbuild.
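Because token costs and throughput limits are already planning variables, they can be modeled directly. The sketch below shows one way a technology team might estimate monthly inference spend and rate-limit headroom for a single workload; every price, volume, and limit in it is an illustrative placeholder, not a real rate, and the 3x peak-load multiplier is an assumption.

```python
# Hypothetical planning sketch: estimate monthly inference spend and
# throughput headroom for one production workload. All prices, volumes,
# and rate limits below are illustrative placeholders, not real figures.

def monthly_inference_cost(requests_per_day: int,
                           input_tokens: int,
                           output_tokens: int,
                           price_in_per_mtok: float,
                           price_out_per_mtok: float,
                           days: int = 30) -> float:
    """Estimated monthly cost in dollars for one workload."""
    per_request = (input_tokens * price_in_per_mtok +
                   output_tokens * price_out_per_mtok) / 1_000_000
    return per_request * requests_per_day * days

def throughput_headroom(requests_per_day: int,
                        avg_tokens_per_request: int,
                        rate_limit_tokens_per_min: int) -> float:
    """Fraction of the provider's token rate limit left unused at peak,
    assuming peak traffic runs 3x the daily average (an assumption)."""
    avg_tokens_per_min = requests_per_day * avg_tokens_per_request / (24 * 60)
    peak = avg_tokens_per_min * 3
    return 1 - peak / rate_limit_tokens_per_min

# Placeholder numbers: 50k requests/day, 2k input / 500 output tokens,
# $3 and $15 per million tokens, a 400k tokens/min rate limit.
cost = monthly_inference_cost(50_000, 2_000, 500, 3.0, 15.0)
headroom = throughput_headroom(50_000, 2_500, 400_000)
print(f"monthly cost ~ ${cost:,.0f}")          # ~ $20,250
print(f"peak rate-limit headroom ~ {headroom:.0%}")  # ~ 35%
```

The point of the exercise is not the specific numbers but that both outputs become line items in a planning cycle rather than surprises in production.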

The Multi-Cloud Position as Moat

One structural fact in this announcement deserves more attention than it typically receives. Claude is currently the only frontier AI model available to enterprise customers on all three major cloud platforms: Amazon Web Services Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry. Amazon remains the primary cloud provider and training partner through Project Rainier.

For an enterprise technology buyer, that availability profile removes a significant procurement friction. Organizations that have existing cloud agreements, committed spend, and established security posture on any of the three major platforms can access Claude without renegotiating their fundamental cloud relationships. That is a distribution advantage that took years to build and would be extremely difficult for a new entrant to replicate.

A competitor building toward this distribution position today would need years of separate negotiation with AWS, Google, and Microsoft, each of which now has a structural interest in Claude's continued presence on their platform.
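The distribution position described above can be made concrete with a sketch of the integration pattern it enables on the buyer's side: a thin routing layer that reaches the same model through whichever platform the organization already has an agreement with, with fallback if one platform is disrupted. The interface below is entirely hypothetical; the stubs stand in for real per-platform SDK calls and are not actual Bedrock, Vertex AI, or Foundry APIs.

```python
# Illustrative sketch only: a routing layer over multiple cloud
# platforms offering the same model. The Platform interface and the
# stubbed invoke functions are hypothetical, not real SDK calls.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Platform:
    name: str
    invoke: Callable[[str], str]  # prompt -> completion (stubbed here)

def make_stub(platform_name: str) -> Callable[[str], str]:
    # Stand-in for a real per-platform SDK call.
    return lambda prompt: f"[{platform_name}] response to: {prompt}"

PLATFORMS = {
    "aws": Platform("AWS Bedrock", make_stub("bedrock")),
    "gcp": Platform("Google Cloud Vertex AI", make_stub("vertex")),
    "azure": Platform("Microsoft Azure Foundry", make_stub("foundry")),
}

def invoke_model(prompt: str, preferred: str, fallbacks: list[str]) -> str:
    """Try the organization's primary platform first, then fall back,
    so a disruption on any single platform does not stop the workload."""
    for key in [preferred, *fallbacks]:
        try:
            return PLATFORMS[key].invoke(prompt)
        except Exception:
            continue
    raise RuntimeError("no platform available")

print(invoke_model("summarize Q3 incident report", "gcp", ["aws", "azure"]))
```

The design choice worth noting is that the fallback order lives in the buyer's configuration, not in any one vendor's SDK, which is exactly the leverage a three-platform availability profile gives the enterprise side.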

The Bet This Announcement Is Actually Making

The gap between capacity coming online in 2027 and the application layer that will consume it is real, and it is worth naming clearly. Enterprise AI adoption still concentrates heavily in a handful of use cases: code generation, customer support automation, document summarization, and internal knowledge retrieval. The gigawatt-scale infrastructure being committed today is priced for a world where AI inference becomes as ambient as cloud storage, embedded in every enterprise workflow rather than deployed selectively in a few of them.

Whether the application adoption curve is steep enough and sustained enough to fill that capacity is the central question that no infrastructure announcement can answer on its own. The revenue numbers suggest the trajectory is real. The pace of the $1 million customer cohort doubling suggests the trajectory is accelerating. Neither of those facts eliminates the possibility of a gap between capacity and demand, particularly if a significant competing model capability emerges at materially lower inference cost and disrupts the pricing assumptions embedded in these contracts.
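That price-disruption scenario can be stress-tested with simple arithmetic. The sketch below models what happens to a revenue base if a competitor forces a price cut, under different assumptions about how much extra demand cheaper inference induces; the elasticity values are placeholders for a planning exercise, not market data, and only the $30B baseline comes from the announcement.

```python
# Hypothetical stress test: revenue under a competitive price shock.
# Only the $30B baseline is from the announcement; the price cuts and
# elasticity values are illustrative assumptions.

def revenue_under_price_shock(baseline_revenue: float,
                              price_cut: float,
                              demand_elasticity: float) -> float:
    """Revenue if prices must match a competitor's cut.

    price_cut: fraction by which unit price falls (0.5 = 50% cheaper).
    demand_elasticity: extra volume induced per unit of price cut
    (1.0 means a 50% price cut lifts volume by 50%).
    """
    volume_growth = 1 + demand_elasticity * price_cut
    return baseline_revenue * (1 - price_cut) * volume_growth

baseline = 30.0  # $B run-rate, per the announcement
for cut in (0.25, 0.50, 0.75):
    for elasticity in (0.5, 1.0, 2.0):
        r = revenue_under_price_shock(baseline, cut, elasticity)
        print(f"price cut {cut:.0%}, elasticity {elasticity}: ${r:.1f}B")
```

Even this toy model makes the dependency visible: committed capacity only pencils out under a price shock if induced demand roughly offsets the unit-price decline.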

That is the scenario a technology leader building AI strategy on top of this infrastructure should be stress-testing, rather than the broader worry that the infrastructure investment itself is directionally wrong.

Viability Question · For Technology Leaders

The Anthropic-Google-Broadcom compute commitment is large enough, and long enough in duration, that it will shape the AI infrastructure supply landscape through the end of this decade. The practical question for a chief information officer or chief technology officer is not whether AI compute matters (that is settled) but whether your organization's AI infrastructure dependencies are positioned to benefit from this capacity expansion rather than be constrained by it.

If the next 18 months bring a step-change in AI inference capacity, will your current cloud agreements and model provider relationships allow you to absorb that capacity at competitive cost, or will you be renegotiating from a position of lock-in when the leverage has shifted to the supply side?

Sources
  1. Anthropic. "Anthropic expands partnership with Google and Broadcom for multiple gigawatts of next-generation compute." April 6, 2026. anthropic.com
  2. Anthropic. "Anthropic raises $30 billion Series G funding." February 2026. anthropic.com
  3. Anthropic. "Anthropic invests $50 billion in American AI infrastructure." November 2025. anthropic.com
  4. Anthropic. "Expanding our use of Google Cloud TPUs and services." October 2025. anthropic.com
Disclaimer: This blog reflects my personal views only. Content does not represent the views of my employer, Info-Tech Research Group. AI tools may have been used for brevity, structure, or research support. Please independently verify any information before relying on it.