At a time when Huawei is preparing to deploy version 4.0 of its Qiankun autonomous driving system (ADS) on a large scale, Richard Jin, CEO of the company’s intelligent automotive solution unit, shared new figures on its progress.
As of July, one million vehicles had been equipped with Qiankun. Shipments of Huawei’s LiDAR (light detection and ranging) technology exceeded one million units, and assisted driving mileage reached four billion kilometers.
By the end of August, 28 models co-developed with Huawei had launched. These include models under the Harmony Intelligent Mobility Alliance (HIMA) as well as Avatr, Deepal, Voyah, M-Hero, Trumpchi, Fangchengbao, and Audi.
According to Jin, Huawei’s advances in the automotive sector stem from its long-term strategy. The company began investing in automotive technology in 2014 and spent more than a decade on R&D before achieving profitability. Even now, it does not set strict commercialization targets for the business.
“The problem with focusing only on commercialization is that it often backfires,” Jin said. “But if we stay focused on R&D and meeting user demand, profitability, both in a single year and over the long run, is inevitable.”
As leading automakers turn to the vision-language-action (VLA) model to accelerate assisted driving, Jin argued that Huawei is pursuing a different path called “world action” (WA), which he sees as the true route to autonomy.
In his view, VLA builds on the maturity of language models by converting video into tokens that are then trained to produce actions and control a car’s trajectory. “It looks like a shortcut, but it’s not the ultimate solution,” he said.
WA, on the other hand, processes visual, auditory, and tactile data directly into driving actions without converting everything into language first. Based on this framework, Huawei developed its WEWA (world engine, world action) model, which will be deployed in ADS 4.0.
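In rough terms, and purely as an illustration of the two dataflows Jin describes (not Huawei's WEWA implementation; every function below is an invented placeholder), a VLA-style pipeline routes video through a tokenizer and a language-model backbone before producing control commands, while a WA-style pipeline maps raw multimodal sensor inputs to control commands directly:

```python
# Illustrative sketch of the two pipeline shapes; all components are placeholders,
# not Huawei's WEWA code or any real model.
from dataclasses import dataclass

@dataclass
class Control:
    steering: float  # steering angle, radians
    throttle: float  # 0..1

def vla_pipeline(video_frames: list) -> Control:
    """VLA-style: video -> tokens -> language-model backbone -> action."""
    tokens = tokenize_video(video_frames)        # video converted into discrete tokens
    action_tokens = language_model(tokens)       # language model predicts action tokens
    return decode_action(action_tokens)          # tokens decoded into trajectory/control

def wa_pipeline(camera, lidar, audio) -> Control:
    """WA-style: fused raw perception -> action, with no language layer in between."""
    features = fuse_sensors(camera, lidar, audio)  # visual, auditory, tactile signals fused
    return action_head(features)                   # learned policy maps features to control

# Trivial stand-ins so the sketch runs; real systems would use trained networks.
def tokenize_video(frames): return [hash(str(f)) % 1000 for f in frames]
def language_model(tokens): return tokens[-1:] if tokens else [0]
def decode_action(tokens): return Control(steering=0.0, throttle=0.2)
def fuse_sensors(camera, lidar, audio): return {"cam": camera, "lidar": lidar, "mic": audio}
def action_head(features): return Control(steering=0.0, throttle=0.2)

if __name__ == "__main__":
    print(vla_pipeline(["frame0", "frame1"]))
    print(wa_pipeline(camera=["frame0"], lidar=[(1.0, 0.5)], audio=[0.01]))
```

The point of the contrast is only where the language layer sits: present in the first pipeline, absent in the second.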
Asked whether assisted driving should be free, Jin was blunt:
“Nothing is free. What looks free is usually covered elsewhere, whether through ads or bundled costs.”
He said that automakers claiming to provide free assisted driving are either offering it for a limited period, embedding the cost in vehicle pricing, or releasing immature products and using customers as test subjects.
From a business perspective, Jin argued, charging for assisted driving is logical. Providers must continually update, maintain, and deliver over-the-air upgrades over the vehicle’s lifecycle, and those costs cannot be ignored.
“For buyers of the earliest ADS versions, Huawei has been providing yearly upgrades,” he said. “It may have been expensive upfront, but over time, the user experience keeps improving. In the end, the cost is not high when you spread it out over the lifetime of the car.”
This full lifecycle management approach applies not only to Qiankun but also to Huawei’s HarmonyOS cockpit solution, which integrates both software and hardware.
Huawei’s cockpit is built on the MoLA architecture, linking applications and hardware vertically and across industries horizontally.
Jin noted that while some automakers try to decouple software and hardware to reduce costs, the result is often poor user experiences and challenging maintenance. Huawei instead adheres to a full-stack model to ensure long-term performance and usability.
Looking ahead, Huawei has outlined a multiyear plan:
- By 2026, its ADS will support Level 3 capabilities on highways and pilot Level 4 capabilities in urban areas.
- By 2027, Huawei plans to launch trials of driverless trunk-line (long-haul) logistics and scale up Level 4 operations in cities.
- By 2028, it aims to commercialize driverless trunk-line logistics at scale.
The following transcript has been edited and consolidated for brevity and clarity.
Q: Some automakers believe VLA is the ultimate technical path for assisted driving and could even deliver true Level 4 performance. How does Huawei view this?
Richard Jin (RJ): Companies on the VLA path think that language models like those developed by OpenAI have already mastered vast online information. They are now converting video into tokens for training, which then become actions to control vehicle trajectories.
Huawei won’t follow that path. We think it looks clever but won’t lead to full autonomy. WA removes the language layer and directly turns perception into control. It’s harder, but it’s the way to real autonomous driving.
Q: How many global players do you think will truly achieve Level 3 or 4 capabilities in the coming years?
RJ: We don’t know the exact number, but certainly not many. Just as with embodied intelligence five or six years ago, there were many entrants, but far fewer remain now, and the field will shrink further.
Autonomous driving is heavily data-driven, requiring massive amounts of data, computing power, and algorithms. Eventually, a shared intelligent platform will be essential because it’s too costly for a single company to carry alone.
That’s why, with Nvidia chips banned from sale in China, we’re seeing so many domestic alternatives emerge.
Q: How long does it take Huawei to match ADS to a new car model?
RJ: The fastest is about six to nine months.
Q: Bosch has said users should pay for autonomous driving. If Huawei’s Qiankun costs RMB 70,000 (USD 9,800), but rivals offer it for RMB 40,000 (USD 5,600) or RMB 50,000 (USD 7,000), how do you see that?
RJ: First, nothing is free. What looks free is usually covered elsewhere, whether through ads or bundled costs.
Second, some automakers only offer it free for a few years, or include the cost in the car, or release subpar versions and plan to charge later.
Third, pricing must be rational. Assisted driving is not a one-off sale; it requires ongoing upgrades and maintenance throughout the vehicle lifecycle.
At Huawei, we’ve taken our ADS through various versions with many small updates in between. This requires long-term investment. Early buyers of ADS hardware can still upgrade today, unlike some competitors where hardware becomes obsolete after two years.
A good product must be designed for longevity and continuous iteration. Users may pay more upfront, but in the long run, they get more value.
Q: Huawei’s Qiankun uses more LiDAR sensors than rivals. Is this just to justify a higher price?
RJ: It’s about safety, not markup. We want zero accidents. For example, the Maextro S800 has a front-facing LiDAR unit, in addition to two side units and one at the rear.
Mounting LiDAR at the rear, for instance, can improve parking safety. Cameras and ultrasonic sensors have limitations: cameras produce flat images without depth, and ultrasonic radar lacks accuracy. A rear-mounted LiDAR unit provides centimeter-level precision, helping the system distinguish between a harmless wall fixture and a protruding pipe and preventing collisions.
We’ve even seen cases where cars reversed into farmland because the system failed to recognize terrain depth. With solid-state LiDAR, the system can detect pits and prevent accidents. These needs come directly from user scenarios.
We’re not adding sensors just for show. It’s all about making assisted driving safer, whether in parking, urban driving, or on highways.
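To make the rear-LiDAR point concrete, here is a toy calculation (with invented distances and thresholds, not Huawei's parameters): centimeter-level range readings let software tell a near-flush wall fixture from a pipe that protrudes far enough to hit, which a flat camera image or a coarse ultrasonic reading cannot reliably do.

```python
# Toy example: judging whether a rear obstacle protrudes enough to matter.
# All distances and the threshold are invented for illustration only.

WALL_DISTANCE_M = 0.80      # assumed distance from the bumper to the wall plane
PROTRUSION_ALERT_M = 0.05   # flag anything sticking out more than ~5 cm

def max_protrusion(lidar_ranges_m):
    """How far the closest LiDAR return sits in front of the wall plane, in meters."""
    return max(WALL_DISTANCE_M - r for r in lidar_ranges_m)

# Centimeter-level returns: a flush wall fixture vs. a protruding pipe.
wall_fixture = [0.80, 0.79, 0.80, 0.78]      # everything within ~2 cm of the wall
protruding_pipe = [0.80, 0.62, 0.61, 0.80]   # pipe sits ~18-19 cm closer than the wall

for label, ranges in [("fixture", wall_fixture), ("pipe", protruding_pipe)]:
    p = max_protrusion(ranges)
    print(f"{label}: protrudes {p * 100:.0f} cm -> {'stop' if p > PROTRUSION_ALERT_M else 'ok'}")
```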
KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Fan Shuqi for 36Kr.