China’s automakers pour billions into AI compute race, chasing Tesla’s lead

Written by 36Kr English

Li Auto, Xpeng, and Huawei lead a Chinese push to scale up AI compute, aiming to close Tesla’s lead in autonomous driving.

“Since the latter half of this year, Li Auto has nearly bought out its dealers’ entire stock of GPU cards,” an insider said.

This scramble for computing resources, initially driven by startups working with large language models, has now swept into the automotive sector. Companies like Li Auto, Huawei, and Xpeng Motors—driving hard toward full autonomous capabilities—are at the forefront of this push.

End-to-end (E2E) intelligent driving technology, operating on billions of parameters much like large language models (LLMs), aims for even grander scales. Computing power is the fuel for this data-intensive engine, making the hunt for resources a new battleground in autonomous driving.

“Li Xiang often asks me if we have enough computational power, and if not, to get more,” said Lang Xianpeng, head of intelligent driving at Li Auto, in an interview with 36Kr. According to sources familiar with the matter, Li Auto has already stockpiled thousands of high-performance chips and is actively scouting new locations for data centers.

In July, Li Auto’s cloud computing capacity reached 2.4 exaflops. By late August, it had surged to 5.39 exaflops—an increase of nearly 3 exaflops in under a month. Xpeng has similarly ambitious goals, aiming to expand its cloud capacity from 2.51 to 10 exaflops by 2025. Huawei has been equally aggressive, boosting its training capacity from 5 to 7.5 exaflops in just two months.

Industry experts highlight that automakers primarily rely on Nvidia’s H100 and A800 GPUs for model training. With US export restrictions in effect, the A800, Nvidia’s downgraded model, has become the most accessible option in China. According to 36Kr, a server equipped with eight A800 cards costs around RMB 950,000 (USD 133,000). At FP16 precision, each A800 card delivers 320 teraflops, so one exaflop requires about 3,125 A800 cards, or roughly 390 eight-card servers.

With each eight-card server costing RMB 950,000, securing one exaflop of computing power requires an investment of roughly RMB 370 million (USD 52 million). This implies that, in the past month alone, Li Auto has spent over RMB 1 billion (USD 140 million) on GPU chips. Building out Xpeng’s full 10-exaflop target for 2025 would represent a total outlay of around RMB 3.7 billion (USD 518 million).
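As a rough illustration, the arithmetic behind these figures can be laid out as a short back-of-the-envelope sketch, using only the numbers cited above (the 320-teraflop FP16 rating and the RMB 950,000 server price); actual procurement prices and effective throughput will vary.

```python
# Back-of-the-envelope cost of one exaflop of FP16 compute built from A800 servers,
# using only the figures cited above; real prices and effective throughput vary.

A800_FP16_TFLOPS = 320          # per-card FP16 throughput, as cited
CARDS_PER_SERVER = 8
SERVER_PRICE_RMB = 950_000      # eight-card A800 server, as cited

cards_per_exaflop = 1_000_000 / A800_FP16_TFLOPS              # 1 EFLOPS = 1,000,000 TFLOPS
servers_per_exaflop = cards_per_exaflop / CARDS_PER_SERVER
cost_per_exaflop_rmb = servers_per_exaflop * SERVER_PRICE_RMB

print(f"Cards per exaflop:   {cards_per_exaflop:,.0f}")                       # ~3,125
print(f"Servers per exaflop: {servers_per_exaflop:,.0f}")                     # ~391
print(f"Cost per exaflop:    RMB {cost_per_exaflop_rmb / 1e6:,.0f} million")  # ~371

# Applying the per-exaflop cost to the buildouts described above:
print(f"Li Auto, ~3 EFLOPS added in a month: RMB {3 * cost_per_exaflop_rmb / 1e9:.1f} billion")
print(f"Xpeng, 10 EFLOPS by 2025:            RMB {10 * cost_per_exaflop_rmb / 1e9:.1f} billion")
```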

Despite these substantial expenses, automakers can’t afford to slow down. Autonomous driving has entered a new era, driven by AI and data rather than rule-based programming. To mass-produce E2E autonomous driving systems, automakers must double down on both data collection and cloud computing.

Tesla has set the benchmark, with an estimated 67.5 exaflops of AI computing power, equivalent to roughly 67,500 Nvidia H100 GPUs. Over the past year, Tesla’s GPU resources have increased sixfold, giving it a commanding lead: its 67.5 exaflops amount to roughly 7% of last year’s global total of 910 exaflops.

Tesla’s latest Full Self-Driving (FSD) model, V12, powered by this immense data and computing capacity, provides a smoother, more human-like driving experience. This achievement has intensified the industry-wide competition for data and computing power.

Automakers’ hunger for data

E2E autonomous driving depends on the synergy between data and computing resources. Tesla, for instance, has indicated that robust E2E autonomous systems require at least a million high-quality, diverse video clips. With datasets scaled to ten million instances, system performance improves significantly. One industry expert told 36Kr that each video clip typically lasts 15–30 seconds, though timing can vary.

Tesla’s data collection scale is unmatched: with millions of cars on the road globally, it could amass a million training clips a day even if only a fraction of its fleet contributed data. Another insider mentioned that training an 8-billion-parameter model in the cloud requires at least 10,000 hours of data, updated every two weeks.
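To put those clip counts in perspective, the quick sketch below translates them into hours of driving footage, assuming the 15-30 second clip lengths cited above; the hour figures are simple arithmetic, not numbers reported by the companies.

```python
# Rough translation of the clip counts above into hours of driving footage,
# assuming the 15-30 second clip lengths cited by the industry expert.

def clips_to_hours(n_clips, clip_seconds):
    return n_clips * clip_seconds / 3600

for n_clips in (1_000_000, 10_000_000):
    low = clips_to_hours(n_clips, 15)
    high = clips_to_hours(n_clips, 30)
    print(f"{n_clips:,} clips ≈ {low:,.0f}-{high:,.0f} hours of footage")

# For comparison, the cited 8-billion-parameter cloud model needs at least
# 10,000 hours of data, refreshed every two weeks.
```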

Building a data-driven feedback loop early on gives automakers a competitive edge, making it difficult for rivals to catch up. Li Auto aims to launch an E2E autonomous system, trained on over ten million clips, by early next year. Xpeng’s head of intelligent driving, Li Liyun, recently announced that Xpeng has accumulated 20 million clips for model training.

However, high-quality data is not easy to come by. Elon Musk noted that capturing valuable user intervention data—high-quality training material—is becoming increasingly challenging. “Out of every 10,000 miles driven, only one mile is useful for training the FSD neural network.”

Li Auto also reported that only 3% of its 800,000 customers provide high-quality data.

Automakers and AI firms typically rely on two data collection strategies. One approach mines data from production cars: engineers set rules for tens of thousands of vehicles in operation, then select and anonymize the data that fits these criteria. Alternatively, intelligent driving suppliers, which lack access to production cars, often employ skilled drivers to gather high-quality data.

However, data collection itself is costly. According to 36Kr, a leading intelligent driving supplier spends a nine-figure RMB sum annually on data transmission alone, with newer automakers facing even higher expenses.

Extracting useful information from large volumes of existing data is equally crucial: high-quality data fuels system iteration across the entire feedback loop, from collection and cleaning to training, simulation, and validation.

Making E2E systems profitable

E2E autonomous systems are inching closer to profitability. Tesla rolled out its E2E FSD in late 2023, with Musk instructing the sales team to offer more test drives due to its improved performance. In 2024, Tesla reduced its monthly subscription fee from USD 199 to USD 99 and its purchase price from USD 12,000 to USD 4,500 to boost adoption across North America. The company has also confirmed plans to launch FSD in China by Q1 2025, which would open a new revenue frontier.

This shift signals that E2E technology is nearer to full commercialization than ever before.

In China, monetization efforts are accelerating. Huawei profited early from its collaboration with Seres on the Aito M7, which secured 100,000 orders within two months, with over 60% of buyers opting for the autonomous driving version.

Huawei has also adopted a subscription model for its intelligent driving software. While Tesla reduced prices, Huawei’s software costs have steadily risen. According to a HarmonyOS sales representative, Huawei’s autonomous driving software is priced at RMB 3,000 (USD 420), RMB 6,000 (USD 840), and RMB 10,000 (USD 1,400) for the first, second, and third versions, respectively, with more price hikes anticipated.

Li Auto has also bolstered its brand with intelligent driving, offering E2E capabilities across all Max models. As a result, user satisfaction has risen significantly.

In Q2, Li Auto reported that 70% of its vehicles priced above RMB 300,000 (USD 42,000) were equipped with AD Max, its advanced autonomous driving system, which costs RMB 20,000 (USD 2,800) more than the Pro trim, reflecting strong demand for intelligent driving features.

Buying GPUs and building data centers

In addition to car sales for data collection, automakers are rapidly building their GPU capabilities. Tesla, according to its Q3 earnings report, currently holds AI capacity equivalent to 67,500 Nvidia H100 chips—totaling about 67.5 exaflops. Tesla aims to increase this capacity to 88.5 exaflops with the addition of another 21,000 H100 chips by the end of October.

Beyond substantial Nvidia GPU acquisitions, Tesla is also advancing its in-house Dojo supercomputer, targeting a capacity of 100 exaflops, bolstered by 8,000 H100 GPUs expected by year-end.
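For reference, the earnings figures imply a conversion rate of roughly one petaflop of training compute per H100 (67,500 cards for 67.5 exaflops). The short sketch below simply applies that implied rate to the cited targets; it is an assumption drawn from the report’s own numbers, not an official per-chip specification.

```python
# The Q3 figures imply roughly 1 petaflop of training compute per H100
# (67,500 cards for 67.5 exaflops). This sketch applies that implied rate
# to the cited targets; it is not an official per-chip specification.

PFLOPS_PER_H100 = 67.5 * 1000 / 67_500        # = 1.0 PFLOPS per card, implied

def h100s_to_exaflops(n_cards):
    return n_cards * PFLOPS_PER_H100 / 1000   # 1 EFLOPS = 1,000 PFLOPS

print(h100s_to_exaflops(67_500))              # 67.5, current cited capacity
print(h100s_to_exaflops(67_500 + 21_000))     # 88.5, end-of-October target
```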

This intensifying GPU competition is sparking reactions among Chinese automakers. Due to US export restrictions, Nvidia’s H100 is more challenging to acquire in China, pushing companies there to rely on the A800, a more affordable but less powerful alternative. Currently, Huawei leads in computing power within China, with 7.5 exaflops derived from both Nvidia GPUs and its own Ascend chips. While Ascend’s toolchain has limitations, Huawei’s proprietary supply secures steady progress in cloud processing.

Following Huawei, Li Auto holds 5.39 exaflops of capacity, achieved with approximately 10,000 Nvidia GPUs. According to industry estimates, attaining this capacity with A800 chips would require around 16,800 units. Cloud GPUs have become more accessible since last year, with A800 servers priced at around RMB 950,000 per eight-card module. Nevertheless, achieving such computational strength remains a significant financial undertaking for Chinese automakers.

Li Auto aims to reach 8 exaflops by the end of the year, recently partnering with Volcano Engine to establish a joint data center, with further expansion under consideration. Meanwhile, Xpeng has reached 2.51 exaflops—around 7,800 A800 GPUs—and targets 10 exaflops by 2025. Nio currently trails with 1.4 exaflops, or about 4,300 A800 cards.

For context, China’s national compute capacity was 246 exaflops as of June 2024 (FP32 basis), equivalent to roughly 492 exaflops at FP16 precision. Huawei, Li Auto, Xpeng, and Nio collectively contribute around 3.5% of this national total.
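These fleet figures can be cross-checked with the same 320-teraflop A800 conversion used earlier, and against the national total. The sketch below uses only numbers reported in this article; Huawei’s A800-equivalent count is purely notional, since its capacity mixes Nvidia GPUs and Ascend chips.

```python
# Cross-check of the fleet figures using the same 320 TFLOPS-per-A800 (FP16)
# conversion as earlier, plus the ~3.5% share of the national total.
# All inputs are numbers reported in this article; conversions are approximate.

A800_FP16_TFLOPS = 320

fleet_eflops = {"Huawei": 7.5, "Li Auto": 5.39, "Xpeng": 2.51, "Nio": 1.4}

for name, eflops in fleet_eflops.items():
    cards = eflops * 1_000_000 / A800_FP16_TFLOPS
    print(f"{name}: {eflops} EFLOPS ≈ {cards:,.0f} A800-equivalent cards")
# Li Auto ≈ 16,844, Xpeng ≈ 7,844, Nio ≈ 4,375, in line with the cited counts.
# Huawei's count is purely notional, since its capacity mixes Nvidia and Ascend chips.

national_fp16_eflops = 246 * 2   # 246 EFLOPS at FP32, roughly doubled at FP16 as stated
combined = sum(fleet_eflops.values())
print(f"Combined: {combined:.1f} EFLOPS, {combined / national_fp16_eflops:.1%} of the national FP16 total")
```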

Amid this race, smaller companies are making strides through partnerships: IM Motors collaborates with Momenta, and Great Wall Motors has joined forces with DeepRoute.ai to accelerate market entry. The global AI model landscape has reached unprecedented scales, with some Chinese companies investing over USD 50 million just to enter the field. Leading Chinese AI firms are now building trillion-parameter models, fueled by extensive computing resources, with companies like Kimi and Stepfun assembling massive GPU clusters in collaboration with cloud providers.

A parallel shift is emerging in the automotive industry. Automakers are evolving from conventional car sellers into AI-driven tech companies, leveraging massive data and computing resources to push toward fully autonomous vehicles and embodied intelligence. In this intensely competitive, low-margin field, the pursuit of greater computing power is relentless, with the ongoing price war signaling that AI differentiation could be the ultimate key. For automakers, the high-stakes game of compute has only just begun.

KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Li Anqi for 36Kr.
