
Beyond Nvidia: How China Built a Top-Tier AI Model Without US Chips

GLM-5.1 trained entirely on Huawei chips challenges the effectiveness of US AI export controls. An analysis of what this means for the global AI race.


The US government's rationale for restricting AI chip exports to China was straightforward: deny access to cutting-edge hardware, and you slow down the competition.

GLM-5.1 just provided the first compelling piece of evidence that the strategy may be flawed.

The Hardware Reality

Since January 2025, Z.ai (formerly Zhipu AI) has been on the US Entity List—effectively banned from purchasing Nvidia GPUs, the backbone of virtually every major AI training operation worldwide.

Their response: train GLM-5.1 on approximately 100,000 Huawei chips, using Huawei's own software stack.

No American silicon. No Nvidia hardware. No external help.

The result: a model that tops SWE-Bench Pro, one of the most respected coding benchmarks in the world.

What This Means for Export Controls

The conventional argument goes: without access to the best chips, foreign AI labs will fall behind.

Narrative                                 What happened
"No access = fall behind"                 GLM-5.1 tops SWE-Bench Pro
"Chinese models 2 years behind"           Now leading on specific benchmarks
"Export controls protect US advantage"    Questioned by new data point

This doesn't settle the policy debate—but it introduces real-world evidence that contradicts the core assumption.

The Numbers Tell a Story

Training at this scale requires massive infrastructure:

  • 100,000+ Huawei chips deployed
  • No reliance on any US technology
  • 8 weeks of rapid iteration between February and April 2026
  • 4 significant model updates in under two months

The $558 million Z.ai raised in its January 2026 Hong Kong IPO—the listing that made it the first publicly traded AI foundation model company—shows up in this pace of iteration.

Broader Implications for the AI Race

The GLM-5.1 result represents a shift in the global AI landscape:

  • 80% of AI startups reportedly gravitating toward open-source Chinese models
  • Open-source alternatives at competitive performance levels
  • No subscription required for top-tier performance

This isn't just about one model. It's about the changing economics of AI development when open-source alternatives match or exceed closed proprietary systems.

What Remains Uncertain

Important caveats remain:

  • The benchmark results are self-reported and not yet independently verified
  • Z.ai's earlier results did hold up under external testing, which lends some credibility
  • Benchmark-optimization concerns always apply to headline numbers
  • On combined scoring across multiple tests, Claude Opus 4.6 still holds a slight edge

Bottom Line

GLM-5.1 is a data point that export control advocates cannot ignore: a model trained entirely on domestic Chinese hardware has achieved top-tier performance on a respected benchmark.

Whether this represents a fundamental shift or a temporary advantage remains to be seen. But the question has changed—from "Will export controls work?" to "What happens when they don't?"