DeepSeek’s Success Reinforces the Case for Export Controls

January 30, 2025

The release of DeepSeek R1, a state-of-the-art AI model developed by the Chinese company DeepSeek, has reignited debates about the efficacy of U.S. export controls on advanced semiconductors. Some commentators have seized on DeepSeek’s achievements—particularly their steep efficiency gains in model training—as evidence that the controls are futile, arguing that China’s AI progress will continue unabated regardless of restrictions on cutting-edge chips. This interpretation is not only shortsighted but fundamentally misunderstands the strategic value of export controls in maintaining U.S. leadership in AI. Far from rendering the controls irrelevant, the period from 2025 to 2027 looks likely to be an inflection point in the power and capability of AI systems, potentially beyond human level, making export controls more important than ever for maintaining America’s technological dominance.

The U.S. AI ecosystem thrives on a foundation of unparalleled computing capacity, enabled by cutting-edge hardware, dominant cloud companies, and a deeply interconnected innovation pipeline. From startups to tech giants, American companies benefit from domestic access to the world’s most advanced AI data centers, allowing them to experiment, iterate, and scale at a pace that is simply unmatched. This is why around 70 percent of the world’s most compute-intensive AI models released since 2020 were developed by companies based in the United States. And while algorithmic improvements ensure that the number of actors capable of training powerful models will grow over time, those same efficiencies are ultimately fed into the next generation of compute-intensive models.

DeepSeek is undoubtedly impressive, but it is a single data point in a much larger competition. What truly matters is not whether China can produce a single advanced model or even a leading AI lab, but whether it can build and maintain a robust, scalable, and self-sustaining AI ecosystem capable of consistently pushing the frontier. This is where U.S. export controls have played, and will continue to play, a decisive role. By restricting China’s access to the most advanced semiconductors and the equipment needed to produce them domestically, the controls are less about slowing or preventing individual projects than about constraining China’s aggregate AI training resources.

Compute Dominance Is a Marathon, Not a Sprint

The U.S.’s chip export controls on China only came into full effect in October 2023. They set a performance threshold for controlled chips at roughly the level of Nvidia’s H100 series of GPUs. Prior to the updated controls, China was able to import H800s—among the chips used by DeepSeek—which can be thought of as lower-bandwidth H100s. Meanwhile, Nvidia only recently began fulfilling orders for its next-generation chip, Blackwell, with most orders delayed until late 2025.

In other words, the chip export controls have not yet had an opportunity to fully bite. New chip generations take an average of 2-3 years to develop and ship to end users. Given the 2023 performance threshold, the performance and volume of chips barred for export to China will thus grow in the years ahead. Older chips are less energy efficient and have slower networking speeds, but in the short run these deficiencies can be offset by designing hardware optimizations or simply eating the extra cost. As chip performance continues to improve exponentially and the GPUs that Chinese firms have stockpiled begin to age and need replacing, the U.S. lead in AI hardware will become clear and compounding.

The DeepSeek CEO, Liang Wenfeng, directly addressed the impact of export controls in a recent interview, stating, "Money has never been the problem for us; bans on shipments of advanced chips are the problem." This clearly indicates that the lack of access to advanced hardware due to export controls is the primary bottleneck for DeepSeek—and one that is tightening.

Just this week, China announced its “AI Industry Development Action Plan,” a one-trillion-yuan ($139 billion USD) state-sponsored investment into advanced AI data centers. The money will flow through leading Chinese tech firms such as Baidu, ByteDance, Alibaba, and DeepSeek, enabling them to build out large clusters of AI accelerators. These chips will likely include a mix of export-control-compliant GPUs like Nvidia’s H20, chips acquired through smuggling and remote access, and new GPUs designed by Huawei like the Ascend 910B.

Easing export controls right now would thus be perfectly timed for what appears to be China’s big push to AGI—the mirror to OpenAI’s Stargate project. Imagine what would happen next were the controls not in place. Tens of millions of Nvidia’s leading-edge GPUs would suddenly begin flowing across China’s border unrestricted. GPU prices would equalize, becoming higher for U.S. companies but much lower for Chinese companies. State-owned enterprises and the Chinese military would begin hoarding as many chips as possible as insurance against the controls being reimposed. In short order, China would not just catch up to the U.S. in frontier AI, but race quickly past us given its less binding energy, infrastructure, and regulatory constraints.

The U.S. should lean against this scenario by closing gaps in the existing chip controls:

  • Update export controls to cover Nvidia’s H20. DeepSeek’s innovations include hardware optimizations to offset the shortcomings of export-compliant GPUs like Nvidia’s H20. Export controls should thus be updated to adjust for Nvidia’s workaround, as they have been before.
  • Extend export controls to remote access. While Chinese companies are not allowed to directly import export-controlled GPUs, they can still access them remotely through the cloud. This is why Nvidia’s H100 shipments to Singapore spiked following the export controls, leading Singapore to now account for 22 percent of Nvidia’s projected revenue. Closing the remote access loophole would require passing the Remote Access Security Act, which passed the House last year but failed to move forward in the Senate. The bill would enable the Department of Commerce to enforce export controls on companies that supply China with remote access to export-controlled chips via the cloud, but with the flexibility to issue export licenses to U.S.-based cloud companies where we have jurisdictional oversight.
  • Modernize the Bureau of Industry and Security. Export controls are only as effective as their enforcement, a responsibility that falls on the Bureau of Industry and Security (BIS). Unfortunately, the BIS is notoriously under-resourced, enforcing export controls with little more than a giant spreadsheet. The agency needs a technological overhaul and new tools, from chip location verification to an export violator whistleblower program, to help combat chip smuggling and police export control violations at scale.

These actions would buttress the Diffusion Rule issued in the waning days of the Biden Administration. But unlike the Diffusion Rule, which enacts a complex compliance framework for controlling the allocation of data center investment worldwide, the first two of the above policies have the benefit of being simple to implement and narrowly targeted on China, while modernizing the BIS would help enhance export administration across the board.

The rise of DeepSeek also has several lessons for AI policymakers more generally:

  • Export controls require patience. Rome wasn’t built in a day, and ecosystems as complex as the semiconductor supply chain aren’t reshaped or replicated in a matter of a few short years. Nevertheless, the chip export controls on China have already tangibly undermined China’s competitiveness in AI and will bind increasingly in the years ahead. Eliminating the controls under pressure from industry groups would turn DeepSeek’s temporary propaganda victory into a permanent own-goal, undermining the credibility and durability of all U.S. trade actions going forward.
  • The 2024 model frontier will diffuse by default. Given rapidly falling training costs, the generation of models benchmarked by the original GPT-4 is rapidly commoditizing, including their reasoning and multi-modal extensions. Hundreds if not thousands of new open source models will soon appear and diffuse by default. Any “AI safety” regime premised on controlling the release of such models through licensing or regulation is thus likely doomed to fail. To the extent such models present dangerous capabilities, policy will have to shift to an orientation of proactive defense, rather than assuming safeguards can be imposed ex ante, before deployment.
  • Embrace open source diplomacy. The predictable decline in the cost of computation means we should expect even the most expensive closed AI models to come within reach of open source developers over time. Open source models are likely to become the go-to for governments and critical infrastructure given their greater controllability. The risk of the U.S. falling behind in open source stems from the potential for bad actors to embed backdoors and other malicious behaviors into open models undetectably. It is thus essential for the U.S. to embrace a strategy of open source diplomacy by encouraging U.S. companies to stay atop the open source model leaderboards. The in-house evaluation capacities of the U.S. AI Safety Institute could also be used to audit open models and training sets for safety and security, providing American models a trusted seal of approval.

Yet perhaps the biggest lesson from DeepSeek is that compute matters more than ever. By analogy, if Henry Ford found a trick to make his cars drive 10 times farther with the same amount of gas, access to large oil reserves would become more valuable, not less. This is doubly true for reasoning models and agents given the enormous quantities of synthetic data generation required to fuel their training. With the next generation of AI training clusters now online, the frontier models scheduled to appear this year will likely drive a step-change improvement in model performance, particularly in inference-time reasoning ability.

The best—if not only—strategy for the U.S. to maintain AI dominance thus hinges on sustaining a structural compute advantage at the ecosystem level. Export controls ensure that this advantage remains firmly in U.S. hands, creating a structural barrier that limits China’s ability to replicate the same returns to scale. In some ways, America’s advantage in AI compute is a mirror image of China’s advantages in low-cost manufacturing. The U.S. is home to several world-class drone and robotics companies, for example, but lacks China’s production capacity to scale up to many millions of units. Conversely, while DeepSeek may be a world-class AI lab, China’s reliance on stockpiled chips and foreign cloud providers limits its ability to build ever more powerful AI models and diffuse them economy-wide. Whether this remains true going forward depends almost entirely on the strength and effectiveness of U.S. export controls.

The Sputnik Moment that Wasn’t

The release of DeepSeek R1 has been called a “Sputnik Moment” for the U.S.—a reminder that we can’t rest on our laurels in the AI race. And while resting on our laurels is never wise, the Sputnik comparison is arguably hyperbole. Algorithmic improvements in AI are occurring at a pace equivalent to computational power doubling every 5 to 14 months. The cost to train GPT-4 scale models has thus already fallen by a factor of 10 since its initial release. And while DeepSeek’s breakthrough was a leap forward in model training efficiency, it is still well within the envelope of what informed observers have come to expect.
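The arithmetic behind these figures is easy to sanity-check. A minimal back-of-the-envelope sketch follows; the roughly 22-month gap since GPT-4’s March 2023 release is my own illustrative assumption, while the 5-to-14-month doubling range comes from the paragraph above:

```python
# If algorithmic progress halves the compute (and hence cost) needed to
# reach a fixed capability level every D months, then after t months the
# cost has fallen by a factor of 2**(t / D).

def cost_reduction_factor(months_elapsed: float, doubling_months: float) -> float:
    """Factor by which training cost falls for a fixed capability level."""
    return 2 ** (months_elapsed / doubling_months)

# Roughly 22 months elapsed between GPT-4's release and this post (assumed).
ELAPSED_MONTHS = 22
for d in (5, 8, 14):  # the cited 5-14 month range, plus an arbitrary midpoint
    factor = cost_reduction_factor(ELAPSED_MONTHS, d)
    print(f"doubling every {d:2d} months -> ~{factor:.0f}x cost decline")
```

Under these assumptions, the implied cost decline spans roughly 3x to 21x, so the factor-of-10 drop cited above sits comfortably inside the expected envelope.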

To the extent the Sputnik analogy has any merit, it calls for strengthening export control enforcement abroad while accelerating the AI infrastructure build-out at home. Consider that the original Sputnik moment in 1957 inspired the creation of NASA just one year later. America answered the Soviets’ propaganda victory by doubling down on its commitment to the space race—not by cancelling export controls on satellite equipment. Doing so would have been a gift to the Soviets back then, just as canceling export controls would be a gift to the CCP today.

There is nonetheless some truth to the notion that “necessity is the mother of invention.” Given the export controls, DeepSeek’s team was forced to search for tricks to make the most of the hardware they had. And yet no one would argue that companies like Meta or OpenAI would be better off forgoing over 90 percent of their computing resources. Access to abundant AI hardware simply allows U.S. researchers to focus on different kinds of problems. This has also led to one breakthrough after another—including the very reasoning paradigm OpenAI pioneered with their o1 model and which DeepSeek subsequently copied.

DeepSeek’s efficiency gains are a reminder that software innovation matters. But algorithmic improvements and compute scale are not mutually exclusive; they’re complementary. Scaling more powerful base models has a multiplicative effect on the performance of inference-time scaling, as models had to first achieve GPT-4-scale language and reasoning capabilities for chain of thought and inference-time scaling to work at all. So with several GPT-5-scale models due out this year, the current generation of reasoning models may soon be left in the dust. After all, even the most elegant algorithms hit diminishing returns without vast computational resources, making it fortunate that America’s lead in AI infrastructure is tremendous and growing rapidly. Let’s keep it that way.
