DeepSeek’s open-source AI roils markets, sending tech stocks into a tailspin

Written by 36Kr English | 4 mins read

As DeepSeek gains traction, Nvidia stumbles, OpenAI adjusts, and investors weigh the future of AI.

A-shares tied to the DeepSeek concept surged after the Lunar New Year holiday break, with stocks such as Merit Interactive, QingCloud, DAS Security, and Timeverse hitting their daily trading limits for two consecutive sessions.

Before DeepSeek emerged, the dominant belief in the artificial intelligence sector was that a model’s capability correlates directly with its training costs. DeepSeek shattered this notion, proving that model performance and training expenses are not necessarily proportional. This breakthrough is the key reason why it has captured the world’s attention.

According to multiple reports, DeepSeek-V3’s training cost is only about 1% of Meta’s Llama 3, while DeepSeek-R1’s inference cost is just 3% of OpenAI’s o1.

By upending the conventional logic tying model capability to compute spending, DeepSeek became an unexpected black swan on Wall Street, triggering a panic-driven selloff in Nvidia and other US tech stocks.

On January 27, DeepSeek topped the free app charts on Apple’s App Store in both China and the US. The shockwave carried over into the US market, where tech stocks plummeted at the opening bell. The PHLX Semiconductor Index (SOX) plunged 9.2%, marking its steepest single-day drop since March 2020. Nvidia’s stock price tumbled nearly 17%, wiping out approximately USD 600 billion in market value, the largest single-day market value loss for any company in US stock market history.

Open-source models have caught up

DeepSeek’s competitive edge goes beyond just lower costs and inference efficiency—its biggest impact is proving that open-source models are now closing the gap with top-tier proprietary models.

According to DeepSeek, the R1 model achieved performance comparable to OpenAI’s o1 on benchmarks such as Codeforces, GPQA Diamond, MATH-500, MMLU, and SWE-bench Verified, even surpassing o1 in some cases. Notably, o1 is widely regarded as the strongest proprietary model available today.

Meta’s chief AI scientist, Yann LeCun, commented that DeepSeek-R1’s release wasn’t about China surpassing the US in AI technology—rather, it was about the power of open-source technology overcoming the limitations of proprietary systems.

On a technical level, R1 dropped the human feedback component of reinforcement learning from human feedback (RLHF), retaining only pure reinforcement learning (RL) driven by verifiable, rule-based rewards. This enabled the model to develop emergent reflection capabilities during training.

Jim Fan, a senior research scientist at Nvidia, noted that R1 may be the first open-source AI project to demonstrate that the RL learning loop can effectively sustain continuous improvement.
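The self-sustaining RL loop described above can be illustrated with a deliberately tiny sketch. This is a hypothetical toy, not DeepSeek’s actual pipeline: a one-parameter policy chooses between two answer strategies, the reward is a programmatic correctness check rather than a human label, and a REINFORCE-style update reinforces whichever strategy scores.

```python
import math
import random

def reward(answer: int, target: int) -> float:
    # Rule-based, automatically verifiable reward: no human labeler involved.
    return 1.0 if answer == target else 0.0

def train(steps: int = 200, lr: float = 0.5, seed: int = 0) -> float:
    """Pure-RL toy loop: returns the policy's final probability of the
    correct strategy after `steps` sample-score-update iterations."""
    rng = random.Random(seed)
    logit = 0.0  # log-odds of picking strategy A
    for _ in range(steps):
        p_a = 1.0 / (1.0 + math.exp(-logit))
        use_a = rng.random() < p_a
        answer = 42 if use_a else 0  # strategy A answers correctly, B does not
        r = reward(answer, target=42)
        # REINFORCE: gradient of log-prob of the sampled action, scaled by reward
        grad = (1.0 - p_a) if use_a else -p_a
        logit += lr * r * grad
    return 1.0 / (1.0 + math.exp(-logit))

print(train())  # probability of the correct strategy after training
```

The point of the sketch is the closed loop itself: because the reward is checkable by a program, the sample-score-update cycle can run indefinitely without human feedback, which is the property Jim Fan highlighted.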

The prevailing industry consensus has been that proprietary AI models must maintain a performance advantage over open-source alternatives—otherwise, why would users pay for an inferior proprietary service?

Now that open-source AI is catching up—and in some cases, even surpassing proprietary models—it is disrupting the AI landscape. This is one of the key reasons DeepSeek triggered panic in the US tech world.

The pressure from DeepSeek’s free and open-source model has also prompted OpenAI to react.

On February 1, OpenAI released o3-mini, its first reasoning model available to free users. CEO Sam Altman acknowledged the shift in a Reddit AMA, stating that he now believes OpenAI has been “on the wrong side of history” and needs to figure out a different open-source strategy.

Even before that, on January 31, Nvidia, Amazon, and Microsoft all announced that they would integrate DeepSeek-R1 into their platforms.

The widespread adoption and recognition of DeepSeek in both domestic and international AI communities have fueled a continued rally in A-shares tied to the DeepSeek concept.

From an industry perspective, several brokerage firms predict that DeepSeek could accelerate the development of China’s entire AI ecosystem. Its open-source and low-cost nature may also empower AI application developers and drive AI adoption in hardware.

Domestically, DeepSeek’s innovation emerged amid US chip restrictions—validating that China’s AI industry has the capability to achieve a self-sustaining cycle from chips to models and applications. This has significantly boosted confidence in China’s AI supply chain.

Capital expenditures not slowed by DeepSeek

DeepSeek’s rise has fueled concerns in the US stock market about whether compute demand has peaked and AI capital expenditures will slow. However, major tech firms’ latest guidance suggests otherwise.

In its February 4 earnings call, Google projected USD 75 billion in capital expenditures for the year, 32% higher than market expectations. Meanwhile, Meta set its 2025 capital spending at USD 60–65 billion, far exceeding analysts’ forecast of USD 51.3 billion.

At the start of the year, Microsoft projected USD 80 billion in capital spending for fiscal 2025, up from USD 55.7 billion in fiscal 2024. However, as its AI strategy diverges from OpenAI’s, Microsoft has begun scaling back its investments in OpenAI.

One example is Stargate, an AI infrastructure venture between OpenAI, Oracle, and SoftBank announced in January 2025. Sources indicate that Microsoft’s investment in the project fell short of expectations, possibly signaling a shift in its AI infrastructure investment priorities.

Moreover, on January 20, Microsoft renegotiated its partnership with OpenAI, allowing OpenAI to use rival cloud providers—effectively weakening Microsoft’s exclusive hold over OpenAI.

By 2025, the capital expenditure-to-revenue ratio for Microsoft, Meta, Amazon, and Google is forecast to reach 22%, up from 17.2% in 2024.

In the long run, DeepSeek’s development is likely to increase GPU demand rather than reduce it.

Microsoft CEO Satya Nadella referenced the Jevons paradox on social media, explaining that higher efficiency in resource utilization often leads to greater overall consumption.

In other words, DeepSeek’s ability to reduce computing costs will lower the barrier for AI adoption, ultimately driving increased demand for compute power upstream.
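The Jevons paradox argument above can be made concrete with a small illustrative calculation. The numbers and the constant-elasticity demand curve are hypothetical, chosen only to show the mechanism: when demand for AI queries is elastic (elasticity greater than 1), cutting the cost per query increases total compute consumption rather than reducing it.

```python
# Illustrative sketch of the Jevons paradox with hypothetical numbers.

def total_compute_spend(cost_per_query: float, elasticity: float) -> float:
    # Constant-elasticity demand: query volume scales as cost**(-elasticity)
    # relative to a baseline cost of 1.0 (arbitrary units).
    queries = cost_per_query ** (-elasticity)
    return queries * cost_per_query  # total compute consumed

# A 10x efficiency gain cuts the cost per query from 1.0 to 0.1.
baseline = total_compute_spend(1.0, elasticity=1.5)
cheaper = total_compute_spend(0.1, elasticity=1.5)
print(cheaper > baseline)  # True: with elastic demand, total spend rises
```

With inelastic demand (elasticity below 1), the same efficiency gain would shrink total spend; the bullish case for GPU demand rests on the assumption that cheaper AI unlocks far more usage than it displaces.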

Investment bank Cantor Fitzgerald echoed this view, stating in a research note, “There has been great angst as to the impact for compute demand, and therefore, fears of peak spending on GPUs. We think this view is farthest from the truth and that the announcement [of DeepSeek-V3] is actually very bullish.”

As for Nvidia, many analysts remain bullish, with Citigroup’s Atif Malik reaffirming a buy rating on the stock.

Ultimately, the biggest losers in the wake of DeepSeek’s rise may not be Nvidia, but proprietary AI firms like OpenAI.

KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Song Wanxin for 36Kr.

