Little known until recently, DeepSeek, an artificial intelligence startup based in Hangzhou, China, has disrupted global markets and forced a rethink of conventional wisdom about AI development. In the span of a few days, DeepSeek became the most-downloaded free app in the US, surpassing ChatGPT. Meanwhile, the company's open-source AI model, known as R1, sparked a global selloff in tech stocks.
On January 27, major US technology companies lost nearly USD 1 trillion in combined market value as investors grappled with the implications of DeepSeek’s cost-effective approach. Nvidia bore the brunt, plunging 17% in its worst single-day drop since March 2020—erasing USD 589 billion in market capitalization.
Energy and utility stocks also felt the sting, highlighting just how intertwined AI spending has become with broader infrastructure plays. DeepSeek’s sudden emergence undermined the notion that state-of-the-art AI requires the most advanced (and expensive) chips to succeed.
At the heart of the upheaval is DeepSeek's R1 model. Publicly available and reportedly trained for less than USD 6 million using Nvidia's H800 GPUs, a less powerful chip that was approved for sale in China until export rules tightened in late 2023, R1 showcases advanced "reasoning" capabilities many executives believed were achievable only with billions in hardware outlays.
The US has enacted multiple rounds of export controls on advanced semiconductors bound for China since 2022, aiming to stymie China's progress in sensitive technologies. However, DeepSeek's breakthrough, powered by chips that were still legal to export at the time, suggests those measures haven't been airtight. Lawmakers in Washington are now debating whether even tougher export controls or broader restrictions are needed.
Representative John Moolenaar, who co-chairs a select committee on US-China competition, condemned the current rules and was quoted by Bloomberg as saying, "We must swiftly place stronger export controls on technologies critical to DeepSeek's AI infrastructure."
US President Donald Trump, sworn in just last week, struck a measured tone. During a meeting with Republican lawmakers at his golf resort in Miami, he called DeepSeek’s success a “positive development,” applauding the model’s cost-efficiency. At the same time, he warned that China’s advancements should serve as a “wake-up call” for US tech firms.
According to SCMP, Trump also reiterated a pledge to bring more semiconductor manufacturing stateside, threatening tariffs on foreign-made chips if companies such as Taiwan Semiconductor Manufacturing Company (TSMC) do not shift more production to the US. “They left us and went to Taiwan,” he said, “which controls about 98% of the chip business. We want them to come back.”
Major AI developers in the US are racing to understand DeepSeek’s methods. A person familiar with OpenAI’s internal deliberations told Bloomberg that the company is analyzing R1’s open-source code, suspecting it may have “distilled” outputs from other large models like GPT or Meta’s Llama. Meta, for its part, has reportedly formed multiple “war rooms” to assess DeepSeek’s performance and any innovative techniques behind it, according to The Information.
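For readers unfamiliar with the term, "distillation" in this context generally means training a smaller model on the outputs of a larger one. The sketch below is purely illustrative of that general idea, using placeholder teacher_generate and train_student callables; it does not reflect any confirmed detail about how R1 was built.

```python
from typing import Callable, Iterable, List, Tuple

def build_distillation_dataset(prompts: Iterable[str],
                               teacher_generate: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Query a larger 'teacher' model once per prompt and keep the
    (prompt, completion) pairs as supervised training examples."""
    return [(p, teacher_generate(p)) for p in prompts]

def distill(prompts: Iterable[str],
            teacher_generate: Callable[[str], str],
            train_student: Callable[[List[Tuple[str, str]]], object]):
    """Sequence-level distillation in its simplest form: ordinary supervised
    fine-tuning of a smaller 'student' model on the teacher's outputs."""
    dataset = build_distillation_dataset(prompts, teacher_generate)
    return train_student(dataset)  # returns the fine-tuned student model
```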
Executives in Silicon Valley are particularly intrigued by DeepSeek’s emphasis on test-time scaling, or the model’s ability to “reason” by spending more compute effort at the moment it generates responses. This approach potentially allows smaller, cheaper models to match the performance of costlier ones that rely on huge training runs and cutting-edge data centers.
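A minimal sketch can make the idea concrete. One widely used form of test-time scaling is self-consistency sampling: the model generates several reasoning chains for the same question, and the most common final answer wins, so spending more compute at inference time can lift accuracy without a larger model. The code below is an illustration of that generic technique, not a description of R1's internals; the generate function and the "Answer:" output convention are assumptions supplied by the reader.

```python
from collections import Counter
from typing import Callable

def extract_answer(completion: str) -> str:
    """Pull the final answer out of a completion that ends with an 'Answer:' line."""
    return completion.rsplit("Answer:", 1)[-1].strip()

def self_consistency(prompt: str,
                     generate: Callable[[str, float], str],
                     n_samples: int = 16,
                     temperature: float = 0.8) -> str:
    """Spend extra compute at inference time: sample n_samples reasoning chains
    from a user-supplied completion function, then return the majority-vote answer."""
    answers = [extract_answer(generate(prompt, temperature))
               for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```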
Despite the rout, Nvidia welcomed DeepSeek’s advancement as an example of how even older or downgraded chips can foster AI breakthroughs. In a statement, the chipmaker called R1 a “perfect example of test-time scaling,” adding that “inference requires significant numbers of Nvidia GPUs,” implying the Chinese startup’s innovation might still boost overall demand for its products.
However, jittery shareholders remain concerned that big-budget AI arms races may prove unnecessary—or at least less urgent—if DeepSeek’s approach can replicate GPT-4 or GPT-5 performance with just a fraction of the hardware outlay.
DeepSeek's rise has reignited debate over whether the AI field's biggest leaps must come from pouring ever more money into chips, servers, and power-hungry data centers. While Meta, Microsoft, and Google are committed to spending upwards of USD 60–80 billion apiece on data center expansions, R1's low-cost achievement may accelerate a shift toward algorithmic efficiency and open-source collaboration.
Some observers see parallels to past moments when constraints fueled leaps in innovation. “Engineering is about constraints,” said Pat Gelsinger, Intel’s former CEO. “Export laws limited the available resources, so Chinese engineers needed to get creative—and they did.”
For now, DeepSeek has restricted new user sign-ups following “large-scale malicious attacks,” which the startup claims were attempts to undermine its service. Observers believe DeepSeek’s popularity at home and abroad will keep it on regulators’ radars—especially as US officials grapple with whether to widen chip export bans and step up pressure on foreign chipmakers.
“It’s a paradigm shift,” Databricks CEO Ali Ghodsi told Bloomberg. “These models that can reason are so much cheaper to produce that you will see it be democratized. You’ll see innovations from unexpected corners of the world.”