The announcement from NVIDIA and Google Cloud at Google Cloud Next this week runs about 1,200 words and mentions chip architectures, cluster topologies, and acronyms that belong in a data center engineering briefing, not an enterprise buyer conversation. Strip all of that away and you have a simpler story: two of the most powerful technology companies in the world are jointly building infrastructure that moves AI out of the cloud and into physical space, including your factory floor, your hospital network, and your government facility.
That shift is worth understanding clearly, because the sales motion that follows it will not be simple.
The chip partnership is old. The physical world ambition is new.
NVIDIA builds the specialized processors that power AI computations. Google Cloud runs the data centers where those processors live. The two companies have been engineering that arrangement together for more than a decade. What changed this week is the direction of travel.
Previous partnership announcements were about scale inside the cloud. Bigger clusters, faster chips, lower cost per AI output. This announcement is about moving the stack outside the cloud entirely, into facilities where data cannot leave for regulatory, security, or competitive reasons, and into machines that operate in the physical world.
That is a fundamentally different product motion, and it changes what enterprise buyers are actually purchasing.
Four things they announced, in plain terms
The first is a new tier of computing hardware called Vera Rubin, the successor to NVIDIA's Blackwell chip generation. Google Cloud announced it will offer these processors as a cloud service called A5X. NVIDIA claims the new generation delivers ten times lower cost per AI output and ten times better energy efficiency than the prior generation. Both figures are vendor-supplied. The maximum cluster size cited, nearly one million chips working in concert across multiple sites, is an engineering ceiling relevant mostly to frontier AI labs and hyperscale operators. For most enterprise buyers, the relevant headline is that the cost of running large AI workloads is continuing to fall.
The second is on-premises deployment with encryption guarantees. Google's Gemini AI models can now run on NVIDIA hardware inside your own facility. The encryption is designed so that even the infrastructure operators, meaning NVIDIA and Google personnel, cannot see the data being processed. For companies in regulated industries such as finance, healthcare, and defense, this removes the primary objection to using frontier AI models: that sensitive data has to leave the building.
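What does that guarantee look like mechanically? A minimal sketch of the pattern in Python, with invented names throughout (this is not the NVIDIA or Google API): data is encrypted before it leaves the client, and the decryption key is released only to a hardware enclave that can prove, via attestation, that it is running approved code.

```python
# Illustrative sketch of attestation-gated key release. verify_attestation
# and TRUSTED_MEASUREMENT are stand-ins, not a real NVIDIA/Google interface.
from cryptography.fernet import Fernet

TRUSTED_MEASUREMENT = "sha256:abc123"  # hypothetical hash of approved enclave code

def verify_attestation(token: dict) -> bool:
    # A real check validates a signed hardware quote; this stub just compares
    # the enclave's claimed code measurement against an allowlisted value.
    return token.get("measurement") == TRUSTED_MEASUREMENT

def release_key(token: dict, data_key: bytes) -> bytes | None:
    # The customer, not the cloud operator, holds the key and releases it
    # only into an enclave that passes attestation.
    return data_key if verify_attestation(token) else None

data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"sensitive record")  # leaves the building encrypted

token = {"measurement": TRUSTED_MEASUREMENT}  # would come from enclave hardware
key = release_key(token, data_key)
assert key is not None and Fernet(key).decrypt(ciphertext) == b"sensitive record"
```

The operator's infrastructure only ever handles ciphertext; the plaintext exists inside the attested enclave and nowhere else.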
The third is agentic AI tooling. "Agentic" means software that does not just answer questions but takes actions autonomously, completing multi-step tasks without a human approving each move. NVIDIA and Google are providing the underlying models and infrastructure for companies to build these agents. CrowdStrike is using the platform for automated threat detection. Factory, an autonomous software development startup, is using NVIDIA models on Google Cloud to write and review code.
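To make "agentic" concrete, here is a bare-bones sketch in Python of the shape these systems take: observe, decide, act, report, with no human approving each step. The triage scenario, function names, and severity threshold are invented for illustration; production platforms wrap this loop in guardrails, approvals, and audit logs.

```python
# Hypothetical security-triage agent: a multi-step task executed end to end.
def triage_alert(alert: dict) -> list[str]:
    actions = []
    # Step 1: gather context (a tool call, not a chat reply).
    actions.append(f"looked up host {alert['host']} in asset inventory")
    # Step 2: decide and act autonomously based on severity.
    if alert["severity"] >= 8:
        actions.append(f"isolated host {alert['host']} from the network")
    else:
        actions.append("opened a ticket for analyst review")
    # Step 3: report what was done, not ask what to do.
    actions.append("posted summary to the incident channel")
    return actions

print(triage_alert({"host": "web-07", "severity": 9}))
```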
The fourth is what they are calling physical AI. This is the part that carries the most operational weight for manufacturers, logistics companies, and anyone running large physical facilities.
Physical AI means building a complete digital replica of your factory, simulating how robots behave inside it, and only then deploying hardware into the real world.
The digital twin is the product, not the robot
A digital twin is a software model of a physical environment (a factory floor, a supply chain, a hospital wing) that updates in real time as conditions change. The concept has been discussed in manufacturing circles for years. What NVIDIA and Google are doing is packaging the tools to build these twins alongside the AI models that reason inside them and the simulation frameworks that train robots before deployment.
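Stripped to its essentials, the "updates in real time" part is just state that mirrors a physical asset as sensor events arrive. A minimal Python sketch, with field names and events invented for illustration:

```python
# Toy digital twin of a single conveyor: mirrors live sensor readings and
# retains history so simulations can replay real operating conditions.
from dataclasses import dataclass, field

@dataclass
class ConveyorTwin:
    belt_speed_mps: float = 0.0
    motor_temp_c: float = 20.0
    history: list = field(default_factory=list)

    def apply(self, event: dict) -> None:
        # Each reading updates the mirrored state in place.
        self.belt_speed_mps = event.get("speed", self.belt_speed_mps)
        self.motor_temp_c = event.get("temp", self.motor_temp_c)
        self.history.append(event)

twin = ConveyorTwin()
twin.apply({"speed": 1.8, "temp": 41.5})  # live feed from the floor
print(twin.belt_speed_mps, twin.motor_temp_c)
```

A real twin tracks thousands of these assets at once; the point is that the mirrored state becomes the environment robots train against.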
Tools from Siemens Digital Industries Software and Cadence are already available on the platform for chip design, aerospace, heavy machinery, and automotive applications. Schrödinger, the drug discovery firm, has cut simulations that previously took weeks down to hours using NVIDIA-accelerated computing on Google Cloud.
The practical sequence is: build a digital replica of your facility, train and test robots inside the simulation, validate their behavior without risk, and then deploy. The simulation layer is doing the work that physical prototyping used to do, faster and without the capital cost of physical failure.
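The gate at the center of that sequence is simple to express. A hedged sketch in Python, with a placeholder simulation and an invented success threshold standing in for real physics and real safety criteria:

```python
# Validate-then-deploy gate: hardware ships only if the simulated policy
# clears a success threshold. Simulation, metric, and threshold are placeholders.
import random

def run_simulation(policy_seed: int, trials: int = 1000) -> float:
    # Stand-in for a physics simulation: fraction of trials the simulated
    # robot completed without a fault.
    rng = random.Random(policy_seed)
    return sum(rng.random() > 0.02 for _ in range(trials)) / trials

def deploy_if_validated(policy_seed: int, required_success: float = 0.95) -> bool:
    success_rate = run_simulation(policy_seed)
    if success_rate < required_success:
        print(f"blocked: {success_rate:.3f} below {required_success}")
        return False
    print(f"deploying: {success_rate:.3f} success rate in simulation")
    return True

deploy_if_validated(policy_seed=42)
```

Every failure caught inside that gate is a failure that never costs physical downtime, which is the economic argument for the whole layer.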
What makes this a vendor dependency conversation, not just a technology one
NVIDIA and Google Cloud need each other. NVIDIA needs cloud distribution to reach enterprise buyers at scale. Google Cloud needs NVIDIA chips because its own custom silicon does not cover every workload type. That mutual dependency is healthy for the partnership. It is worth examining carefully from the buyer side.
When you buy cloud AI, switching costs are real but manageable. When you build factory automation on a joint NVIDIA-Google infrastructure stack, with simulation frameworks, robot training pipelines, digital twin software, and on-premises hardware, all co-engineered by the same two vendors, the switching cost conversation becomes a different one entirely.
My time managing analytics infrastructure at Network Solutions taught me that the tools you choose at the foundation of an operational workflow are not neutral choices. They accumulate. The data formats, the model weights, the simulation parameters, the robot training datasets: each layer becomes a reason the next layer stays. This announcement is building a foundation that is designed to be very sticky, and it is designed to be sticky in the physical world, where the cost of migration is not a data export but a capital project.
The technology is genuinely impressive. The performance claims, if they hold in production, represent a meaningful cost reduction for compute-intensive workloads. The confidential computing capability addresses a legitimate objection for regulated industries. The physical AI toolchain is among the most complete available from any single vendor ecosystem.
None of that makes the dependency question go away.
Before you commit to NVIDIA and Google Cloud's joint physical AI stack for factory automation or on-premises AI deployment, map every layer of the architecture (simulation frameworks, robot training pipelines, model weights, hardware, networking) and ask your legal and procurement teams what a migration from this stack would cost in year three. If the answer is unclear, the contract terms matter more than the performance benchmarks.
Buck, Ian. "NVIDIA and Google Cloud Collaborate to Advance Agentic and Physical AI." NVIDIA Blog, 22 Apr. 2026, nvidia.com.
Google Cloud. "Google Cloud AI Infrastructure at Google Cloud Next '26." Google Cloud Blog, 22 Apr. 2026, cloud.google.com.
NVIDIA. "NVIDIA Vera Rubin NVL72." NVIDIA Newsroom, 22 Apr. 2026, nvidia.com.
