Yet DeepSeek's increasingly narrow performance gap with leading U.S. models, along with its rock-bottom prices, will raise questions about the competitive moat surrounding leading U.S. labs like OpenAI and Anthropic, as well as about the constraints that still hold back China's AI development.
DeepSeek’s open-source approach allowed developers to download its model for free, tweak it for their own purposes, and run it on local hardware. That approach won DeepSeek, and later other labs that went open-source, support from resource-strapped developers, particularly those outside the U.S. and Europe.
This also means that DeepSeek is releasing V4 into a new competitive landscape. Alibaba, Moonshot AI, MiniMax and Knowledge Atlas have all released high-performing open-source models this year.
Ironically, these export controls may have helped Chinese AI startups, including DeepSeek, learn how to operate in a world of scarce processing power. Forced to train and run models with limited compute, Chinese developers came up with ways to make their models far more efficient. That efficiency, in turn, allowed them to cut prices substantially. U.S. pressure also pushed China’s semiconductor sector to accelerate plans to manufacture chips domestically.
DeepSeek’s ability to train and run its model on Huawei chips could represent another step away from U.S. chipmakers like Nvidia and AMD.
OpenAI and Anthropic have also accused Chinese AI developers, including DeepSeek, of “illicit” distillation: training their own models on the outputs of U.S. models to replicate their capabilities.