The Frog in the Well Cannot See the Chip War

DeepSeek's next flagship model is being built to run entirely on Huawei silicon. If you are not watching this, you are making decisions with an incomplete map.

  • 60%: Huawei Ascend 910C real-world inference performance relative to the Nvidia H100 (DeepSeek researchers, per CFR)
  • 95% → 50%: Nvidia's China market share decline over four years, per Jensen Huang
  • Weeks: estimated time to the DeepSeek-V4 launch on Huawei Ascend 950PR chips
  • +20%: price increase for Ascend 950PR chips in recent weeks amid a demand surge

There is an old proverb about a frog living at the bottom of a well. It looks up and sees a small circle of sky. It concludes that is all the sky there is. The frog is not stupid. It is just working with the view it has.

Many influential folks in politics and technology are that frog right now when it comes to the global chip landscape. They are watching Nvidia's Blackwell announcements, tracking hyperscaler capital expenditure, and building their AI roadmaps around a supply chain that assumes Western hardware is the only hardware that matters. Meanwhile, something significant is happening outside that circle of sky.

DeepSeek, the Chinese artificial intelligence lab that rattled markets in January 2025 with its R1 model, is preparing to launch its next flagship model, DeepSeek-V4, built to run entirely on Huawei Ascend 950PR chips. No Nvidia. No AMD. No Western silicon required.

What Is Actually Happening

Reports confirmed in early April 2026 indicate that DeepSeek has ordered hundreds of thousands of Huawei Ascend 950PR chips for V4. The company spent months working with Huawei and chip designer Cambricon Technologies to rewrite core code components, moving away from Nvidia's CUDA (Compute Unified Device Architecture) ecosystem and optimizing for Huawei's CANN software framework.

This is not a workaround. It is a deliberate architectural decision. DeepSeek is building a model that does not need Western hardware to function at a frontier level.

The broader market is following. Alibaba, ByteDance, and Tencent are placing bulk orders for Huawei chips to integrate DeepSeek models into their cloud services. The Ascend 950PR price has risen 20% in recent weeks on demand alone.

The Jensen Huang Paradox

Jensen Huang has been vocal about US export controls. At Computex in May 2025, he called them a "failure." During Nvidia's earnings call, he said: "Export restrictions spurred China's innovation. The US has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable. Now it is clearly wrong."

Huang is self-interested here. Nvidia lost billions in write-downs on unsold H20 chips blocked from sale. His market share in China dropped from 95% to 50% over four years. He has every commercial reason to argue for loosening controls.

But the underlying observation is not wrong. When you restrict access to a tool, the people who need that tool do not stop working. They build a different tool.

"The question is not whether China will have AI. It already does." — Jensen Huang, Nvidia earnings call, 2025

The Council on Foreign Relations published a detailed analysis in December 2025 arguing the opposite case: that Huawei is falling further behind Nvidia, not catching up. Their data shows the Ascend 950PR has lower total processing performance than the 910C on paper, and that real-world performance of the 910C sits at roughly 60% of the Nvidia H100. By 2027, they project Nvidia's best chips will be seventeen times more powerful than Huawei's best offerings.

Both things can be true. Huawei may be behind on raw performance. And DeepSeek may still build a commercially viable frontier model on that hardware, because DeepSeek's entire reputation is built on doing more with less.

The Training Gap Is Real, But Narrowing

The path to V4 was not smooth. Earlier attempts to train on Huawei Ascend 910B hardware ran into stability problems and efficiency gaps. Reports from late 2025 indicate that Nvidia hardware was used for training while Huawei chips were prioritized for inference, which is the process of running the model after it is trained.

The gap between training capability and inference capability matters for enterprise buyers. A model trained on Nvidia hardware but deployed on Huawei hardware is still a model that runs without Western silicon in production. That is the part that affects your supply chain decisions.

What the Compatibility Table Tells You

| Model Version | Hardware | Status |
| --- | --- | --- |
| DeepSeek-V3 / R1 | Ascend 910B | Requires BF16 conversion (memory intensive) or W8A8 quantization |
| DeepSeek-V4 | Ascend 950PR | Native support; primary launch hardware; launch expected within weeks |

BF16 is a numeric format (Brain Float 16) used to reduce memory requirements during model computation. W8A8 quantization is a compression technique that reduces model size and memory use at some cost to precision. Both are workarounds. Native support on V4 means no workarounds needed.
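To make the quantization idea concrete, here is a minimal sketch of symmetric int8 weight quantization, the general family that W8A8's weight side belongs to. This is an illustration only, not DeepSeek's or Huawei's actual pipeline; the toy tensor and the per-tensor symmetric scheme are assumptions for demonstration.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: map float weights to int8."""
    scale = np.abs(w).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 storage."""
    return q.astype(np.float32) * scale

# Toy tensor standing in for one layer's weights.
w = np.array([0.8, -0.31, 0.057, -1.2, 0.0], dtype=np.float32)
q, scale = quantize_int8(w)

# int8 storage uses 1 byte per weight vs. 4 for float32.
print(q.nbytes, w.nbytes)  # 5 20
print(np.max(np.abs(dequantize(q, scale) - w)))  # small rounding error
```

The memory saving is exact (4x over float32, 2x over BF16), while the reconstruction error is bounded by roughly half the scale factor. That bounded precision loss is the "some cost" the paragraph above refers to.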

Why This Belongs on Your Radar

If you are a technology leader in North America or Europe, you might reasonably ask why a Chinese model running on Chinese chips is your problem.

Your competitors in Asia-Pacific markets are not asking that question. They are evaluating DeepSeek models for production deployment right now. If those models run efficiently on locally available hardware, the cost and latency advantages are real.

The software ecosystem matters too. DeepSeek's move away from CUDA is a signal that the assumption of CUDA as the universal AI development standard is weakening. Brookings noted in August 2025 that Huawei has significant penetration at the network edge in Latin America, the Middle East, and Africa. Where the edge devices are Huawei, the pressure to build all-Huawei AI stacks grows.

The efficiency argument also travels. DeepSeek built its reputation by achieving frontier-level results at a fraction of the compute cost of US labs. If V4 demonstrates that a reportedly 1-trillion-parameter model can run at competitive speeds on domestic Chinese hardware, the "you need Nvidia to do serious AI" argument weakens globally, not just in China.
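Back-of-the-envelope arithmetic shows why numeric formats dominate that efficiency conversation. The sketch below assumes a dense 1-trillion-parameter model and counts only weight storage; the real V4 architecture (including any mixture-of-experts sparsity) has not been disclosed, so these are illustrative numbers, not measurements.

```python
def weight_memory_gb(num_params: int, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the model weights."""
    return num_params * bytes_per_param / 1e9

PARAMS = 1_000_000_000_000  # assumed ~1 trillion parameters, dense

for fmt, nbytes in [("FP32", 4), ("BF16", 2), ("INT8 weights", 1)]:
    print(f"{fmt}: {weight_memory_gb(PARAMS, nbytes):,.0f} GB")
# FP32: 4,000 GB; BF16: 2,000 GB; INT8 weights: 1,000 GB
```

Every halving of bytes-per-parameter halves the number of accelerators needed just to hold the weights, which is why format tricks like BF16 and W8A8 translate directly into how much domestic hardware a deployment requires.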

Keep the Scoreboard in Perspective

None of this means Huawei has caught Nvidia. The CFR analysis is credible and the performance gap is real. Nvidia's Blackwell chips remain significantly more powerful than anything Huawei currently produces at scale.

Whether US export controls were the right call is a legitimate policy debate with serious national security dimensions that go well beyond chip performance benchmarks. Reasonable people disagree, and the data cuts both ways.

The point is simpler than that. The technology landscape does not pause while you are focused elsewhere. The frog in the well is comfortable. The well is just not the whole picture.

For the CIO / CTO: Questions Worth Asking Now

  • Do your AI vendor assessments include models and infrastructure developed outside the US and EU? If not, you have a blind spot in your competitive intelligence.
  • If your organization operates in Asia-Pacific, Middle East, or Latin American markets, what is your read on how local competitors are evaluating DeepSeek for production use?
  • Your AI strategy likely assumes CUDA-based infrastructure. How exposed are you if that assumption becomes a premium rather than a default?
  • Are you tracking the software ecosystem shift, not just the hardware benchmarks? The CANN framework is not a curiosity. It is a parallel development path.
Disclaimer: This blog reflects my personal views only. Content does not represent the views of my employer, Info-Tech Research Group. AI tools may have been used for brevity, structure, or research support. Please independently verify any information before relying on it.