
Honor sees phones as key to on-device AI, even if they are not the endgame

Written by Cheng Zi

James Li, CEO of Honor, on stage at Mobile World Congress 2026. Photo source: Honor.
There’s no consensus yet on the final form factor, but smartphone makers may have an advantage in AI devices.

After large models dominated headlines for much of 2025, this year’s focus in artificial intelligence has shifted toward integration into devices. Last month, several Chinese smartphone brands rolled out major updates for on-device AI, bringing capabilities once limited to flagship models down to midrange and even select entry-level devices.

Then, early this month, Google released the open-source Gemma 4 model family. Through architectural innovations, it significantly lowered the barrier to deploying AI on devices such as smartphones and connected hardware.

Although experimentation in on-device AI continues across multiple directions, an emerging consensus is that smartphones may be the most viable near-term form factor for bringing it to market.

Seemingly aware of this shift, Honor began repositioning itself last year from a smartphone maker to an AI device company, announcing plans to invest USD 10 billion in AI over five years.

On the operating system front, it launched MagicOS, an agent-based operating system, and introduced Yoyo as its primary AI agent product. Online shopping, ride-hailing, and other common applications have since been integrated with AI on Honor devices.

It was also last year that Honor restructured its R&D organization, establishing an AI and software division. The move integrated teams across operating systems, AI products, and internet services to strengthen overall competitiveness.

Use Honor’s latest flagship phone, the Magic V6, and it becomes clear that the AI phone experience has changed in a fundamental way.

AI was not previously essential to smartphones. On the Honor Magic V6, however, it is central to the device’s role as a productivity tool. For example, the phone can act as a digital assistant, reminding users of upcoming meetings, providing AI-managed support during sessions, and generating structured meeting minutes and to-do lists afterward. With an on-device AI agent, users can also convert photos into documents and edit them directly.

36Kr spoke with Li Xiangdong, an AI product expert for MagicOS at Honor, about his views on AI phones, on-device AI, and Honor’s broader approach.

The following transcript has been edited and condensed for brevity and clarity.

36Kr: If you had to sum it up, what would be the biggest difference between future AI phones and today’s phones?

Li Xiangdong (LX): There will be three fundamental changes:

  • First, the more you use the phone, the better it will understand you. Phones will provide highly personalized services, and AI phones will look different for every user.
  • Second, human-machine interactions will shift from a paradigm requiring people to adapt to a GUI (graphical user interface) to systems that can proactively serve people. For example, Yoyo can proactively provide reminders throughout your flight journey. That is an early form of proactive service.
  • Finally, future AI phones will have powerful autonomous execution capabilities. Like a secretary, AI will be able to understand a user’s complex, cascading needs, break them down into workflows, and complete tasks automatically. The user will only need to make choices or provide feedback.

36Kr: Companies have been positioning themselves around AI phones this year. What trends do you think are worth watching in 2026?

LX: First, there is the combination of AI agents and autonomous execution. Agent capabilities will become deeply integrated with phone scenarios.

Second, AI phones will begin emphasizing global memory. Based on global, long-context memory, phones will deliver better personalization, understanding users better the more they use them.

Third is multimodal interaction. AI phones will combine vision, voice, and text into integrated interactions and services. Some time ago, Honor launched the “Robot Phone,” which can see what you see and hear what you hear, provide companion-style explanations, and generate summaries. That also points to the future direction of multimodality.

Honor CEO Li presenting the company’s “Robot Phone” at Mobile World Congress 2026. Photo source: Honor.

36Kr: Will smartphone makers face difficulties in advancing autonomous execution, given that so much user data is currently held inside major apps?

LX: That is indeed an ecosystem issue. Objectively speaking, though, this is an area where terminal manufacturers have an advantage over app companies and large model developers. App companies only know users’ habits within their own applications, and model developers can only understand user needs when those users engage with them. Terminal makers, by contrast, can understand users around the clock and across applications.

We are also working with large app companies on partnerships. Everyone’s goal is to provide a better user experience while protecting user privacy and achieving commercial win-win outcomes.

36Kr: But could future AI phones reshape the existing traffic allocation logic inside smartphones?

LX: Without question. The current business model, in which app companies, large model developers, and terminal manufacturers each play to their strengths while competing with one another, is relatively stable.

But as AI develops, especially with the rise of user-intent-centered autonomous execution and proactive service, the original business model and traffic allocation will inevitably be reshaped. Eventually, all parties will reach a new balance. That process of reshaping will require exploration and adjustment by everyone involved.

36Kr: Since AI agents emerged, companies have been exploring AI phones, including model developers, internet companies, and phone makers. What do you see as the strengths and weaknesses of each?

LX: Model developers have advantages in foundation model capabilities, computing power, and inference technology, but they may lack understanding of hardware, the willingness to invest in it, and experience in the consumer terminal industry.

Terminal manufacturers have advantages in their deep understanding of end users, their massive user bases, their integrated hardware-software capabilities, and their mature experience in operating consumer products. But they cannot go all in on foundation model R&D in the same way model developers can.

Each side has different strengths and areas of investment. Based on those advantages, everyone will pursue integration and collaboration to better serve users together.

36Kr: Honor has invested in AI for a long time. If you were to break it down, how many stages has that journey gone through?

LX: Honor’s work in AI can be traced back to the first-generation Magic smartphone series in 2016 and the birth of the Magic Live engine. But it was in 2020 that Honor started exploring AI seriously.

From 2020 to 2023, we were in a phase of deploying platform-level AI and building out our overall AI strategy. At the time, we launched platform-level AI based on MagicOS 7.0 and used the Magic Live engine to provide an intent-recognition-based human-machine interaction experience.

In the first half of 2024, we launched MagicOS 8.0 and the on-device AI capability called Magic Portal. Based on the industry’s first on-device intent recognition framework, Magic Portal allowed users to quickly access needed services across applications through simple drag-and-drop actions. That same year, Honor also clearly defined its intent recognition framework and four-layer on-device AI architecture, using AI to reconstruct the operating system and device-cloud collaboration.

In the second half of 2024, we had already recognized the industry’s shift toward AI agents and began exploring features in which agents could automatically execute tasks for users.

By 2025, we had fully moved into agents and self-evolving AI. We not only launched our three-layer on-device AI agent architecture but also introduced the latest Magic family large language model.

From 2020 to 2025, Honor completed the foundational architecture and core strategic groundwork for its AI technology. The recent attention around OpenClaw and Hermes agents has also validated the value and forward-looking nature of Honor’s investment in this technical path.

36Kr: At the product level, AI on the newly launched Honor Magic V6 seems richer than before. There are clearer signs of personalized AI services, and the emotional connection with users also seems deeper.

LX: Yes. With the Honor Magic V6, we targeted a very specific user group.

Our research found that their main pain points involved frequent meetings and document processing, so we built a complete meeting assistant experience, including pre-meeting reminders, speaker-separated transcription and real-time summaries during meetings, and automatic generation of accurate meeting minutes and to-do items afterward.

We also introduced a document agent for this target group, allowing paper documents to be converted into editable files with one tap.

In addition, for investors, we launched an intelligence assistant, allowing users to customize proactive information pushes on their phones. All of these features are built around real-world scenarios and solve real pain points. Fundamentally, the idea is to turn AI from a toy into a productivity tool.

36Kr: Honor established a dedicated AI division last year, elevating AI to a high strategic level. What changed before and after that department was created?

LX: The difference was quite significant. Before it was established, AI was a second-level unit under the operating system department, focused mainly on the smart engine within the OS and the AI platform.

Afterward, we connected key modules across the OS, AI, internet, and ecosystem, making our AI investments more focused and more systematic.

36Kr: Can you give some examples of matching different AI functions to different hardware?

LX: For example, our flagship Honor Magic V6 has strong computing power and high-end specifications, making it a key AI platform. We focus on cutting-edge directions such as autonomous agent execution, global memory, and multimodal interaction on flagship devices.

For midrange and lower-end models, users’ core demands are smooth performance, no lag, and long battery life. We also provide a lightweight, system-level AI-native solution. This solution reconstructs the OS and uses AI to make the system more lightweight, allowing midrange and lower-end phones with relatively limited chips and storage to deliver a smooth experience close to that of flagship devices. This capability has also improved the experience for many users of lower-priced models and increased their stickiness.

36Kr: How does Honor decide whether to build a particular AI feature?

LX: Our focus has shifted from what AI can do to what AI should do. The fundamental goal is to make AI better at understanding users over time and serving them better. Specifically, our efforts revolve around three directions:

  • First, we plan features around the needs of target users for different hardware products. For example, for business users of the Honor Magic V6, we created features such as the meeting assistant and document agent.
  • Second, we expand around our core AI product, Yoyo, with the goal of enabling users to consult it for “everything.”
  • Third, we deeply integrate AI agents into MagicOS to enable proactive service and autonomous execution.

We will sunset features that are too complicated to use, too infrequently used, too costly to learn, too inaccurate at the current stage of AI, or too dependent on extensive third-party adaptation. Instead, we aim for close integration among AI, products, and user needs so that we can provide a usable, closed-loop experience.

36Kr: You have developed several distinctive AI features. Internally, how do you evaluate whether an AI feature is really delivering value to users, what people now often call product-market fit?

LX: That is an area we are good at. We have a mature evaluation system for consumer products, and we operate on three levels:

  • First, for different hardware categories, we plan features in advance around the core scenarios of target user groups, then validate them through user co-creation and follow-up research, such as user satisfaction surveys.
  • Second, for Yoyo, we use the fast-iteration model of internet products. That includes A/B testing, user feedback, daily active users, and other indicators to evaluate the performance of AI features.
  • Third, for the native AI experience in MagicOS, once AI has been embedded into system-level applications, we continue to monitor metrics such as usage, satisfaction, and retention. Through this layered evaluation system, we make sure AI features truly solve pain points for target users.

36Kr: I’m curious how you view the “Doubao phone.”

LX: It is an important industry experiment. Its highlight lies in opening up system-level capabilities to enable generalized, cross-application, agent-based autonomous execution and background task processing.

At the same time, its success also depended on a specific context, including cooperation with a smartphone maker to obtain system-level permissions and the use of large models and computing power without regard to cost in order to create benchmark scenarios.

36Kr: Smartphone makers have long been discussing the prospect of AI phones, but many people also ask why a “Doubao phone” was not first built by smartphone makers themselves.

LX: In fact, we had been thinking about similar ideas quite early on, but they have not yet been commercialized. The main reason is that terminal manufacturers think about these issues differently.

A smartphone maker’s first priority is to guarantee user experience. We serve a massive user base, so we must ensure that functions are stable, closed-loop, and broadly applicable. Any flaw in the experience could trigger dissatisfaction on a large scale.

The second consideration is cost. When a massive user base frequently calls AI services, token costs become a sustainability issue that terminal manufacturers have to consider. For large model developers as well, once AI products move toward scale, they will inevitably face cost pressure and may eventually shift to limited-use or paid models, which would then affect user experience.

So the fact that a “Doubao phone” was not first built by smartphone makers is not an issue of capability. It reflects differences in business logic, as well as different standards and requirements around user experience.

This is still an exploratory direction. Even with OpenClaw agents recently going viral worldwide, AI development is only just beginning and remains far from its end state.

36Kr: Will the system-level GUI agent route be the prototype of future AI phones?

LX: It is significant, but it may not be the final form, because it still executes tasks based on apps. In the future, AI-native systems and phones may break the current app-based structure and move toward a model in which whatever the user needs simply appears as a service.

For example, in the future, users may no longer need to open a specific app to book a flight. They may only need to state their request, and after understanding the user’s intent and preferences, AI could directly present a flight information card for the user to review and pay. For the user, it would no longer matter which application the service came from. I think that could be a more long-term form.

36Kr: What impact do you think the emergence of OpenClaw has had on AI phones?

LX: OpenClaw’s popularity has validated the value of agents and autonomous execution. In fact, Honor has long been investing in this direction. For example, we created Yoyo Claw, which lets phones control Honor PCs and tablets across devices, and allows OpenClaw-like agents to be deployed on third-party devices or in the cloud to execute tasks, with a strong emphasis on security.

The emergence of OpenClaw shows that the industry is shifting toward autonomous execution. That is exactly the direction we are focused on.

36Kr: Companies are also exploring entirely new product categories beyond smartphones. What important traits do you think they will need to have, and which category are you most inclined to believe will ultimately win?

LX: Right now, the market is still in a stage of many players exploring in parallel. I think a successful AI-native device category must be able to solve users’ main pain points efficiently and at low cost. More specifically, it may need three characteristics:

  • First, it must be able to combine perception of the physical world with perception of the digital world. With sensors such as cameras and microphones, it can achieve multimodal perception and humanlike interaction. For example, when a user visits a museum, Honor’s “Robot Phone” can see a cultural relic through its camera and provide humanlike explanations.
  • Second, it must have long-context memory and personalized forecasting, allowing it to provide personalized services based on a user’s historical habits.
  • Third, it must have autonomous execution and proactive service capabilities, enabling it to carry out users’ requests across devices and ecosystems.

At present, smartphones remain the main vehicle for AI, while other devices are mostly companions to the phone. The direction of on-device AI exploration has not yet converged. But whichever form ultimately wins will depend on whether the AI-driven experience it provides can deliver a generation-defining leap in value.

Honor’s “Robot Phone.” Image source: Honor.

36Kr: So far, different smartphone makers seem to be taking very different approaches to AI, but each is gradually developing its own characteristics and playbook. How would you assess Honor’s position and strategy in the wave of AI phones?

LX: At their core, specific AI functions are only the outward form, the visible expression. If you compare them to a tree, those features are only branches in differentiated competition. Fundamentally, everyone still has to think about what core value the combination of AI and terminals can bring to users.

Honor’s main direction is very clear. It continues to invest in proactive service, agent-based autonomous execution, personal knowledge bases, and multimodal interaction. I believe that as long as it stays on that main path of creating user value, it can remain standing at the end.

36Kr: Two years ago, senior executives from various smartphone makers said AI had not really begun driving device purchases. Has that changed?

LX: There has been a positive change. In earlier years, AI was more of a selling point than a buying reason. But starting in 2025, based on our user surveys, AI has entered the top four purchase considerations for new phones. In some models, such as the Honor Magic V5 and Magic V6, it has even entered the top three.

That is because we focused AI features on key pain points and scenarios for target users and ensured the experience formed a closed loop. Of course, in some models, the top purchase factor may still be a hardware feature, such as the thinness of a foldable phone or long battery life. But AI has become an increasingly important factor in purchase decisions.

KrASIA features translated and adapted content that was originally published by 36Kr. This article was written by Qiu Xiaofen for 36Kr.
