History is rhyming. One year ago, Chinese AI lab DeepSeek’s impressive artificial intelligence (AI) models wiped over half a trillion dollars in value from US chip designer Nvidia’s stock alone in a single day, double the previous record decline for a single stock amid a broader rout for US tech. DeepSeek, an offshoot of a Chinese hedge fund, shook confidence in two key assumptions underlying the AI boom: First, that US export controls limiting China’s access to computing power would hand US firms a large, durable lead; and second, that surging demand for AI would keep driving durable demand for AI chips, data centers, and power. A year on, the first assumption was clearly wrong, while the second has mostly held.
Now, for the second time in a row, the year has started with doubts about the prospects for US AI labs and cloud computing firms. This time, rising debt, depreciation, and circular vendor financing are driving the concerns. Revisiting last year’s DeepSeek shock can help shed light on the AI boom’s prospects going forward.
Before January 2025, DeepSeek’s best model matched US firm OpenAI’s large language model GPT-4, released a year and eight months prior. China started from behind on large language models, and it faced US export controls that banned Nvidia from selling its best AI training semiconductors to China. But DeepSeek’s R1 model, released in January 2025 and trained on detuned, export-compliant Nvidia chips, claimed “[p]erformance on par” with OpenAI’s o1, released less than two months earlier.[1] R1 leapfrogged models from America’s Anthropic, Google, Meta, and xAI, which unlike DeepSeek could buy advanced US chips, with better scores on a commonly cited intelligence index. Even more shocking, if not fully credible, DeepSeek claimed its models were trained with only around $6 million worth of computing power. R1 was also cheap to run, costing about one twenty-seventh as much to use as OpenAI’s o1. And DeepSeek’s models were released “open weight,” so anyone with access to a cloud provider or high-powered computers could download and run them for just the cost of chips and electricity.
DeepSeek’s release seemed to undermine the economic model of US AI developers by providing nearly free, nearly as good alternatives. Jack Ma famously did this to eBay by launching Taobao in the early 2000s, quipping, “they have deep pockets, but we will cut a hole in their pocket.” Chinese “good enough” competition could attract would-be paying customers of US AI products, sapping them of the revenue needed to continue buying chips and power. The prospect of training advanced models with minimal computing power, combined with efficiency gains reducing the computing power needed for inference (running the models), suggested that all those chips and all that power might not be needed at all, even if US AI labs could afford them. Another view, most notably from Microsoft’s Satya Nadella, highlighted Jevons Paradox: Efficiency gains in a resource historically lead to more, not less, use of it, as lower costs make new uses possible and drive adoption.
One year after “the DeepSeek Shock,” Chinese AI models remain hot on the heels of US models, despite US export controls that the Trump administration only recently loosened. If anything, the surprise has been the number of Chinese firms beyond DeepSeek that have trained models close to the frontier, including Alibaba’s Qwen, Bytedance’s Doubao, Moonshot’s Kimi, Minimax’s M2.1, and Zhipu’s GLM. How they have managed this is not fully clear.
Export controls have not been perfect, allowing Chinese firms to access cloud computing outside China, smuggle banned chips (though the extent of the problem is hotly disputed), and buy compliant chips that are still powerful. OpenAI has alleged that Chinese firms have been “distilling” its models, e.g., using inputs and outputs from ChatGPT to help their models gain some of its capabilities, but there is no consensus on how much this has contributed to Chinese models. DeepSeek shrank the gap between US leaders and Chinese models to months rather than years. Since then, Chinese models have stayed only a few months behind.
This does not mean that US export controls failed. US models have consistently held the top spot, a position to which the compute advantage preserved by the controls has at least contributed. The gap may widen this year as much better chips come online. DeepSeek made headlines last month by saying the “performance gap between closed-source [most US models] and opensource [most Chinese] models appears to be widening, with proprietary systems demonstrating increasingly superior capabilities in complex tasks.” Alibaba reported that “meeting delivery demands consumes most of our resources,” and Zhipu has had to restrict access to its coding agent because of a lack of computing power. Chinese firms have successfully dulled the impact of US constraints, but they remain constrained. US firms have much larger global user bases, with OpenAI at 800 million users and Google’s Gemini at 750 million. Chinese products like Bytedance’s Doubao and Alibaba’s Qwen have 100 million.[2]
Meanwhile, despite DeepSeek’s success, people were right to assume demand would keep surging for computing power for AI.
Developments over the past year have vindicated the Jevons Paradox optimism camp for AI all the way up the supply chain from AI labs developing models to their suppliers of cloud computing, chips, and the power to run them. If Chinese models have cut a hole in US firms’ pockets, the revenue going in is more than compensating. Anthropic has gone from revenue run rates[3] of $1 billion around a year ago to $9 billion today. OpenAI expects $20 billion in revenue in 2025, up from $6 billion for 2024.
Cloud computing providers like Microsoft and Google, meanwhile, have seen cloud revenues grow by 26 percent and 48 percent, respectively, in the year since the DeepSeek shock.[3] Google is more than doubling its capital expenditures to fulfill backlogs of demand for computing power. That will feed demand for more chips. Like Levi Strauss profiting whether individual miners hit gold or not, US cloud providers profit by providing the computing power for training and use of US models or Chinese models. China’s computing power constraints due to US export controls probably contributed to this outcome: Chinese AI companies have too little computing power to run their models for customers at scale, so the only way those models can catch on is by letting others with chips run them for free.
On chips and power, there is no sign of a lingering DeepSeek shock. Nvidia revenue is up 62 percent, and Taiwanese chip manufacturer TSMC revenue is up 36 percent. Semianalysis, a research firm, has found that surging demand to power data centers means “the grid is sold out.” Data centers have even turned to building their own power generation in such numbers that some gas turbine producers have already sold most of their capacity through 2028.
The theory that DeepSeek would immediately undermine US AI firms’ business models and the demand for computing power was wrong for two main reasons. First, AI users—especially corporations—do not yet see models as interchangeable commodities to be chosen primarily on price. Research from Mert Demirer and coauthors found that open weight models, like most from China, cost 90 percent less than closed models (most US models are closed). Despite benchmark scores suggesting similar capabilities, however, these cheaper models make up under 30 percent of use in their dataset. They also found that “most firms allocate 90 percent of their total usage to a single model,” suggesting users lock in to one provider. Second, growth in AI usage has dwarfed efficiency gains, just as Jevons Paradox would predict. Efficiency gains have been impressive: Over the course of 2025, the cost to achieve a similar score on a challenging AI benchmark plunged from $4,500 per task to $11.64. But growing demand for computing to train new models and support AI use has been even stronger. Computing for using AI is particularly important because it indicates that computing power is not just fueling some sci-fi moonshot bet to train artificial superintelligence. It is finding paying users for existing capabilities.
AI user numbers continue to increase, raising demand for computing power. But demand has also been driven by a change in the AI models in use today compared to a year ago. DeepSeek’s R1 and all the other top models from 2025 are “reasoning” models that give better answers and handle more complex tasks by applying far more computing power to process the user’s query. They often even simulate multiple perspectives working together to solve the problem. The average user of the best models today is doing more while spending less money, but consuming far more computing power, than a year ago. This shows up in OpenAI’s usage patterns. OpenAI reportedly spent $7 billion on AI inference in 2025, 3.5 times the $2 billion spent in 2024. Meanwhile, $7 billion also happens to equal OpenAI’s entire computing spending for research, training, and inference in 2024.
Still, it remains early days in the AI boom. Chips and models will continue to become more efficient, smaller models can replicate the capabilities of much larger models from previous generations, and Chinese firms are determined to keep up with the frontier. The competitive landscape is also changing. Google, for example, has entered the ring to sell its AI chips to external customers for the first time, creating more competition for Nvidia. There are also questions about dependence on circular financing, in which suppliers invest in their customers to support continued purchases. US AI firms will need to keep proving that they can capture enough of the value AI is creating to support their surging investment. Spending is now at levels that introduce new challenges, including higher depreciation charges on aging chips and the need to finance investment with an increasing share of debt.
But if 2025 was the year of reasoning models, 2026 looks to be the year of AI agents. Agents such as Claude Code have made a leap beyond chatbots, automating more complex, lengthy tasks like bespoke software development that can run for hours without human intervention. This will raise computing power per AI user even further. The remarkable improvement in AI capabilities in 2025 also creates a latent user base of people and organizations who tried AI previously and found it lacking. If they try it again and discover how much more useful the tools have become, they could become new intensive users.
The most optimistic projections of recursively self-improving AI that steamrolls bottlenecks to adoption might still be years away. Specific winners and losers remain uncertain. But the utility and use of US AI have exceeded expectations from before the DeepSeek shock and are showing no signs of slowing.
Notes
1. OpenAI announced o1 in September 2024 with a preview to select users, but it was not fully available even to most paid subscribers until December 5, 2024. DeepSeek released R1 on January 20, 2025.
2. User counts for Chinese models like Qwen understate their global reach because anyone can download an open weight model and use it without the developer knowing. For a deeper dive into adoption patterns of Chinese versus US AI, see the Microsoft AI report.
3. Revenue run rate is the revenue a firm would make for the year if the most recent period (measurable in different ways) were extrapolated to a full year.
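The extrapolation described in note 3 can be sketched in a few lines of code. The figures below are illustrative, not drawn from any firm's reporting:

```python
def run_rate(period_revenue: float, period_months: float) -> float:
    """Annualize revenue by extrapolating the most recent period
    to a full 12 months, as described in note 3."""
    return period_revenue * (12 / period_months)

# A hypothetical firm booking $750 million in its latest month
# would have a $9 billion annual run rate.
print(run_rate(750e6, 1))  # 9000000000.0
```

Because the metric annualizes only the latest period, it flatters fast-growing firms: a company whose monthly revenue is still climbing will report a run rate well above its trailing twelve-month revenue.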
Data Disclosure
This publication does not include a replication package.