
Today, I submitted a comment in response to the National Science Foundation and Networking and Information Technology Research and Development Program’s request for information regarding the development of an Artificial Intelligence Action Plan. Click here to download a PDF of the full comment.
Thank you for the opportunity to comment on the NSF and NITRD NCO’s request for information (RFI) on the Development of an Artificial Intelligence Action Plan.
The United States is the world leader in Artificial Intelligence (AI), and not by accident. America’s dominance in AI is an outgrowth of the strength and dynamism of our private sector, particularly as it relates to software, the internet, and other information technologies. The strength of our technology sector is, in turn, the byproduct of farsighted legal frameworks that encouraged both the commercialization of the early internet and the massive investments in communications infrastructure that the digital economy needed to scale.
As with the internet, securing America’s dominance in AI will require accelerating the development and diffusion of AI technologies, unencumbered by unnecessary regulation, alongside an aggressive build-out of the energy and data center infrastructure needed to train and deploy advanced AI systems at scale. AI is the defining technology of our era, and “second place” is simply not an option. AI has the potential to positively transform virtually every aspect of industry and society, drive unprecedented rates of productivity growth and scientific discovery, and provide the leading nations in AI with decisive geopolitical advantages. Enhancing America’s dominance in AI is thus not merely important for economic competitiveness, but critical for U.S. national security and, ultimately, the preservation of the American way of life.
However, the AI revolution is different from the internet in several crucial respects:
First and most obviously, the U.S. became the dominant player in information technology and the internet during a period of relative unipolarity. That first-mover advantage allowed America to establish rules of the road for internet governance that both advanced U.S. interests and exported core American values, such as openness, transparency and free expression. Today, in contrast, progress in AI is occurring against the backdrop of an intense technological competition with China[1] – a communist regime that oppresses its own people, bullies U.S. allies, and conducts all manner of economic and cyber warfare against American companies and citizens.
Second, while the internet revolution was primarily won in the lightly-regulated realms of software and microelectronics, winning the AI race will require overcoming bottlenecks in the heavily-regulated physical world, from the energy generation needed to fuel data center growth, to AI’s countless applications in robotics, materials science, biology and beyond. Indeed, as a side-effect of deindustrialization, the last 40 years have seen the United States deepen its comparative advantage in high value-added “knowledge” sectors, such as software development, entertainment, higher education, management, law and finance – the very sectors AI is most likely to disrupt and deflate in the coming years. Thus, without concerted efforts to reindustrialize and deregulate the physical economy – energy, transportation, manufacturing, construction, health care, etc. – America’s “innovation in bits” risks becoming China’s “innovation in atoms.”
Third, progress in AI is rapidly accelerating towards systems that match or surpass expert human performance in arbitrary domains, often referred to as Artificial General Intelligence (AGI). Weaker forms of AGI (i.e. systems that could substantially supplant cognitive labor performed behind a computer) are now forecast to emerge as early as 2026,[2] while “strong” AGI (i.e. robustly superhuman robotic systems capable of, among other things, assembling a car from scratch using only human readable instructions) is forecast for 2030.[3] Given continuously falling computational costs, the proliferation of powerful AI capabilities and highly intelligent autonomous systems is likely unavoidable over time, presenting sui generis risks to public safety and national security. While expert opinion on the imminence and severity of these risks varies widely, a large dose of humility is no doubt warranted. We are entering truly uncharted territory, and should thus adopt policy stances rooted in realism and evidence that are robust to a wide range of possible scenarios.
These three considerations – geopolitical competition with China; barriers to adoption in the real economy; the sui generis nature of AGI – point to the importance and urgency of the AI Action Plan, and serve to frame the policy recommendations that follow.
Energy and compute infrastructure
America’s current lead in AI is a function of having the world’s leading AI companies, chip designers and hyperscale compute providers. As the AI build-out continues, maintaining this lead will require concerted policy actions to address supply-chain bottlenecks in power generation and associated infrastructure. Consider that China added a record 429 GW of new power generation capacity to its grid in 2024, a 21% year-on-year increase.[4] In contrast, total electricity generation in the U.S. has been essentially stagnant since 2008, with new capacity from renewable energy sources offset by reductions in non-renewable energy production. Addressing the regulatory barriers to U.S. compute and energy dominance is essential to maintaining U.S. leadership in AI over the medium term, and of sufficient urgency to justify the use of national security exemptions and emergency powers.
Recommendations:
- Prioritize the rapid deployment of new baseload energy sources for AI scaling.
- Replace Executive Order 14141, “Advancing United States Leadership in Artificial Intelligence Infrastructure,” with an analogous Executive Order that a) strikes the clean energy and carbon capture and sequestration requirements, and b) establishes “Special Compute Zones” (SCZs) for expediting the build-out of 5GW+ AI clusters on federal lands.
- Direct CEQ and DOE to implement National Environmental Policy Act (NEPA) Categorical Exclusions for AI data centers, co-located energy sources, and associated applications (e.g. small modular nuclear reactors; enhanced geothermal; power plant modifications).
- Invoke Defense Production Act (DPA) Title I authorities to accelerate production of critical data center components; and Title III authorities to create public-private lending arrangements for deploying new energy generation.
- Develop Programmatic Environmental Assessments (PEAs) and Programmatic Environmental Impact Statements (PEISs) for new data center construction while redirecting the internal legal capacity at the DOE’s Renewable Energy Coordination Offices (RECOs) toward AI infrastructure deployments.
- Implement national security exemptions across major environmental statutes to expedite deployment of energy and data center infrastructure, including the Endangered Species Act (16 U.S.C. § 1536(j)), Clean Air Act (42 U.S.C. § 7418(a)), Clean Water Act (33 U.S.C. § 1323(a)), and NEPA (40 C.F.R. § 1506.11).
- Use Presidential National Emergency powers to expedite the construction of secure AI data center(s) for use by the Department of Defense under 10 U.S.C. § 2808 and P.L. 108-136, Section 318(a).
- Appoint a “Data Center Czar” to advise and coordinate the above initiatives and to liaise with private sector stakeholders.
Export controls
The current U.S. export control regime has failed to curtail China’s access to frontier AI capabilities. GPU sales to China and the broader Sinosphere – Singapore, Taiwan, Malaysia – now account for just under half of Nvidia’s revenue by geography, roughly matching aggregate sales to the United States. Some fraction of export-restricted chips sold in this region are directly resold into China through shell companies and smuggling rings. The remainder go to data centers in third countries that Chinese firms can access remotely through the cloud. These enforcement gaps will only grow in significance over time with the rapidly growing volume of chips produced above the controlled performance threshold – precisely when the controls ought to begin binding.
Despite these gaps in enforcement, export controls have tangibly limited the build-out of China’s domestic AI compute capacity, giving the United States a structural advantage in both AI training and inference at the ecosystem level. Indeed, had export controls not been imposed in 2022 and 2023, China may very well have already taken the lead in AI given its weaker energy bottlenecks. With China announcing a 1 trillion yuan ($138 billion) state-backed data center fund, strengthening the Bureau of Industry and Security’s (BIS) enforcement capacity has thus never been more urgent.
Recommendations:
- Prioritize BIS modernization in the President’s Budget. BIS operates with outdated technology, but could fulfill its modernization plan with additional appropriations of roughly $25 million annually for five years. IT modernization is needed to enhance BIS’s analytical and enforcement capabilities, but would also benefit U.S. exporters by accelerating the license application process. Further explore data and intelligence sharing arrangements between BIS and U.S. intelligence agencies.
- Expand both BIS’s in-house technical talent and regional inspection capacity. As it stands, BIS has only two in-house Mandarin speakers, two regional inspection officers for all of China, and only one inspection officer for the rest of Southeast Asia. Regional units could be further supplemented by secondees from partner countries.
- Add Nvidia’s H20 and B20 to the list of controlled chips to account for the rise of reasoning models like DeepSeek R1, which make greater use of inference compute.
- Create a certification program for chip distributors that mandates the use of pre-approved logistics providers committed to notifying BIS investigators of suspicious entities.
- Launch a chip smuggling whistleblower program with awards pegged to a percentage of the resulting fines.
- Order a feasibility study for delay-based location verification and the creation of a centralized BIS registry for exported chips. Delay-based location verification is non-trivial to spoof and would greatly enhance BIS’s capacity to verify end-users, identify smuggling intermediaries, and geolocate chips in restricted countries (a minimal sketch of the underlying mechanism follows this list). Location verification and related mechanisms for enhanced end-user controls could be piloted under the Diffusion Rule as a condition for exporting chips to countries suspected of facilitating chip diversion, such as Singapore and Malaysia. No such mechanisms should be required for chips sold to U.S. end-users.
- Restrict China’s ability to access export controlled chips remotely through cloud service providers in Tier 2 countries by deeming them illegal exports in-kind (as proposed in Rep. Michael Lawler’s Remote Access Security Act).
- Direct the State Department to develop a formal “AI Talent Network” among Tier 1 countries (or a new, G20-style organization) to attract AI talent to the United States and promote multilateral engagement on AI supply-chain vulnerabilities.
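To make the delay-based verification mechanism concrete, the sketch below shows the core physics-based check. It assumes a hypothetical challenge-response interface (`send_challenge`) and a shared attestation key provisioned at manufacture – both assumptions for illustration, not features of any existing chip or BIS program:

```python
import hashlib
import hmac
import os
import time

# Speed of light in optical fiber is roughly two-thirds of c in vacuum (km/s).
FIBER_LIGHT_SPEED_KM_S = 200_000

def challenge_rtt(send_challenge, shared_key: bytes) -> float:
    """Issue a fresh nonce challenge and return the round-trip time in seconds.

    `send_challenge` is a hypothetical transport that forwards the nonce to
    the chip's attestation module and returns its HMAC response.
    """
    nonce = os.urandom(32)
    start = time.perf_counter()
    response = send_challenge(nonce)  # chip returns HMAC(shared_key, nonce)
    rtt = time.perf_counter() - start
    expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(response, expected):
        raise ValueError("attestation failed: wrong key or tampered response")
    return rtt

def max_distance_km(rtt_s: float) -> float:
    """Physics upper bound on the chip's distance from the landmark server:
    no signal can cover more ground than light in fiber in half the RTT."""
    return (rtt_s / 2) * FIBER_LIGHT_SPEED_KM_S

def provably_within(rtt_s: float, radius_km: float) -> bool:
    """True if the chip is provably within `radius_km` of the landmark."""
    return max_distance_km(rtt_s) <= radius_km
```

Because network congestion can only add delay, a chip can fail this check spuriously but can never pass it from farther away than the bound allows: a measured round trip of 3 milliseconds, for example, proves the chip sits within roughly 300 km of the landmark server, no matter how its software is manipulated.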
Frontier AI security and visibility
The capabilities of frontier AI systems are improving at an accelerating rate. Five years ago, Large Language Models (LLMs) were barely able to generate coherent English text. Today, they can program entire games and applications from a single prompt, write in-depth research reports, and reason through high-level math problems. Breakthroughs in LLM post-training through reinforcement learning have further revealed new, potentially unbounded dimensions for model scaling. Post-training will scale rapidly over the coming year, and is anticipated to make AI systems significantly more agentic, with superhuman performance in any domain where success can be objectively benchmarked or verified.
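To illustrate why verifiability is the operative constraint, the toy example below reinforces whichever candidate answer passes a programmatic check – a deliberately simplified stand-in for how reinforcement learning on verifiable rewards updates an actual model’s weights; all names and numbers are illustrative:

```python
import random

# Toy illustration of reinforcement learning from a verifiable reward. The
# "model" here is just a weighted choice over candidate answers; real
# post-training updates LLM weights, but the reinforcement logic is analogous.

candidates = [381, 391, 401]            # hypothetical model outputs for "17 * 23"
weights = {c: 1.0 for c in candidates}  # stand-in for model parameters

def verify(answer: int) -> float:
    """Reward comes from a program (a checker), not a human label –
    this is what 'objectively verifiable' buys at scale."""
    return 1.0 if answer == 17 * 23 else 0.0

for _ in range(1_000):
    sampled = random.choices(candidates, weights=[weights[c] for c in candidates])[0]
    weights[sampled] *= 1.0 + 0.1 * verify(sampled)  # reinforce verified wins

print(max(weights, key=weights.get))  # converges on 391, the checkable answer
```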
With AGI in sight, monitoring the status of frontier AI capabilities in real-time should be a national security imperative of the U.S. government. Consider that OpenAI and Anthropic both recently indicated that their best models are on the cusp of acquiring “dual-use,” Chemical, Biological, Radiological, and Nuclear (CBRN) capabilities that, according to their own policies, will need to be withheld from public deployment absent robust model safeguards. Developments at the frontier can thus provide policymakers and national security advisors with foresight into the sorts of capabilities that will eventually proliferate through open weight models as the cost of computation falls over time.
In contemporary policy discussions, AI safety and security are unhelpfully conflated with “AI ethics,” which often reduces to making AI models parrot politically correct viewpoints. Yet powerful AI systems present many bona fide, near-term risks to U.S. national security and the health and safety of ordinary American citizens. The AI Action Plan should take care to distinguish policies aimed at managing the potential catastrophic risks presented by powerful, highly autonomous AI systems from more prosaic or evergreen issues, such as bias and discrimination.
Recommendations:
- Support the U.S. AI Safety Institute (AISI) in its mission to develop voluntary AI standards and evaluations for national security-relevant risks. Explicitly require AISI to evaluate open weight foreign models, such as DeepSeek, in addition to American-made models.
- Adopt a policy of “Open Source Diplomacy” by giving AISI an explicit export promotion mandate. AISI can attest to the security and reliability of American open weight models for use in critical infrastructure by independently auditing training code and datasets for backdoors and data contamination. The AISI consortium’s 280 members could then be reimagined as de facto embassies for AI systems underwritten by American standards, serving as a bulwark against both the Brussels Effect and China’s efforts to export digital authoritarianism.
- Designate a senior member of the National Security Council staff to coordinate the creation of an AI Operations Center – a “Situational Awareness” room – to monitor frontier AI developments and develop real-time risk assessments in collaboration with DOD, DHS, FBI and ODNI. Employ the Defense Production Act to obtain up-to-date information from industry leaders under Title VII, Section 705. The AI Operations Center should serve as the 24-hour counterpart to the AISI’s Testing Risks of AI for National Security (TRAINS) Taskforce.
- Direct AISI and the National Labs to launch an industry collaboration to develop rigorous benchmarks operationalizing “dual-use” thresholds for CBRN capabilities. Publish a real-time tracker of the current state-of-the-art (SOTA) for both closed and open weight models along each capability dimension and by model size.
- Reinstate a modified version of Executive Order 14110 § 4.2, “Ensuring Safe and Reliable AI,” to require information disclosures to the Federal Government from companies “developing or demonstrating an intent to develop potential dual-use foundation models.” Replace the floating-point operations (FLOP)-based thresholds used to define covered models with a capability-based metric derived from the above benchmarks, e.g. only require disclosures for models that significantly advance the SOTA (a sketch of such a trigger follows this list).
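A hedged sketch of how such a capability-based disclosure trigger might work is below. The capability dimensions, scores, and margin are hypothetical placeholders for whatever benchmarks AISI and the National Labs ultimately publish:

```python
from dataclasses import dataclass

# Hypothetical capability dimensions tracked by the proposed SOTA registry;
# the names, scores, and 0-1 normalization are illustrative only.
DIMENSIONS = ("cyber_offense", "bio_design", "autonomous_agency")

@dataclass
class Evaluation:
    model: str
    scores: dict[str, float]  # normalized benchmark score per dimension

def requires_disclosure(candidate: Evaluation,
                        sota: dict[str, float],
                        margin: float = 0.05) -> bool:
    """Disclosure triggers only if the model beats the tracked state of the
    art on some dual-use dimension by more than `margin` – unlike a FLOP
    threshold, this does not sweep in models made cheap by efficiency gains."""
    return any(candidate.scores[d] > sota[d] + margin for d in DIMENSIONS)

# Illustrative use with made-up numbers:
sota = {"cyber_offense": 0.61, "bio_design": 0.48, "autonomous_agency": 0.55}
candidate = Evaluation("example-model", {"cyber_offense": 0.70,
                                         "bio_design": 0.42,
                                         "autonomous_agency": 0.50})
print(requires_disclosure(candidate, sota))  # True: advances cyber SOTA by 0.09
```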
Science and R&D
AGI-like systems and deep learning techniques more generally have the potential to massively accelerate scientific discovery and technological R&D. The U.S. Federal Government can enable AI-assisted science and R&D by expanding access to its many science-relevant datasets; by promoting the values of open science in federally-funded research; by making strategic investments in core research infrastructure; and by reforming the scientific grantmaking process itself.
Consider that the Department of Energy has already proposed creating AI foundation models for materials science as part of its Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative. Unfortunately, outdated IT infrastructure and restrictive data access and retention policies make sharing and combining different datasets challenging under the status quo. Assembling large DOE physics and chemistry datasets into formats and within technical infrastructure suited to training foundation models has been estimated to cost around $100 million, but could potentially be achieved for much less via private sector partnerships. The DOE’s new Foundation for Energy Security and Innovation (FESI) should prioritize establishing fellowships and public-private partnerships to support such initiatives, both financially and with human capital.
At an estimated cost of $20-40 million, the DOE should further pursue the construction of robotic labs (also known as “self-driving labs”) that automate the manual process of performing experiments, allowing scientists to synthesize, validate, and characterize new materials 24 hours a day at dramatically lower costs. Such labs could be used internally by the DOE and/or rented to research institutions to ensure public datasets are used to support American researchers and promote open science. At a minimum, self-driving labs should be added as an official priority of the DOE’s preexisting FASST initiative.
AI also has the potential to transform the scientific enterprise more broadly. AI models fine-tuned for peer review are already close to parity with human reviewers. The NSF and NIH should aggressively pilot the use of AI to accelerate the peer review process, identify and reduce publication bias, and improve the quality of grantmaking more generally. This can be paired with reforms to grant policies to encourage open data.
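As one illustration, a pilot might begin with an AI-generated first-pass screen along the lines of the sketch below, where `complete` stands in for any LLM completion API and the rubric is a placeholder rather than NSF or NIH policy; outputs would be advisory inputs to human panels, never funding decisions:

```python
import json

# Sketch of an AI-assisted first-pass screen for grant peer review.
# The rubric, fields, and threshold are illustrative assumptions.

RUBRIC = """Score the proposal 1-5 on each criterion with a brief justification:
1. Intellectual merit
2. Methodological soundness
3. Feasibility of the data-sharing plan
Return JSON: {"merit": n, "methods": n, "data_plan": n, "notes": "..."}"""

def screen_proposal(proposal_text: str, complete) -> dict:
    """Return a structured advisory review for triage, not a funding decision."""
    raw = complete(prompt=f"{RUBRIC}\n\nPROPOSAL:\n{proposal_text}")
    review = json.loads(raw)
    # Low scores route the proposal to expedited human attention –
    # never to automatic rejection.
    review["flag_for_panel"] = min(review["merit"], review["methods"]) <= 2
    return review
```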
Recommendations:
- Zealously defend fair use protections for AI models trained on copyrighted data as a matter of national security.
- Require all publications stemming from federal grants to be deposited in open repositories at the time of publication (or even preprint stage) by default, rather than allowing any length of embargo. Further require Creative Commons attribution (CC BY) or public-domain (CC0) dedications for datasets produced with federal funding.
- Establish or expand national data repositories (like NIH’s data repositories, but broader and more specialized for AI use cases), including through the proactive acquisition and storage of large scientific datasets under open licenses that explicitly allow reuse in AI training.
- Make data-sharing plans a strict requirement in all grant proposals, including details about how the data will be prepared for open-access reuse (metadata format, curation standards, etc.). Further require that all federally funded datasets be registered in an open, centralized index for automatic discoverability by AI researchers.
- Revise Intellectual Property and data licensing rules. For example, consider waiving or modifying certain provisions of the Bayh-Dole Act that give universities ownership of federally funded IP, ensuring that key research artifacts (data, code) default to open licenses. Such licenses should explicitly allow for commercial use to foster broad innovation in both academia and industry.
- Explore reforms to “unbundle the university” from the research enterprise. An OSTP rulemaking could relax institutional eligibility requirements mandated under the NSF’s “Proposal & Award Policies & Procedures Guide” and NIH’s “Grants Policy Statement.” Agencies could then establish new programs to allow qualified individual scientists or ad-hoc consortia to apply for federal grants directly, bypassing the standard requirement for an institutional sponsor. OSTP could similarly issue memos directing agencies to experiment with alternative grant mechanisms, such as Focused Research Organizations or research collectives.
- OSTP should systematically review the compliance burdens afflicting the traditional research enterprise, including as they relate to accounting rules, grant formats, Institutional Review Boards (IRB), and IP obligations, and advise the President on issuing updated research guidelines to relevant subagencies (e.g. the Office for Human Research Protections in the case of IRB enforcement).
Government modernization
Institutional destabilization and displacement are perhaps the most underrated sources of near-term AI risk. The Arab Spring was potentiated by the internet and social media, for example, demonstrating that mere information technology can facilitate regime change under certain conditions. A world with billions of superintelligent, highly autonomous AI agents presents an analogous set of risks – not necessarily through misuse, but through appropriate use at unprecedented scale.
Democratized access to AI lawyers could overwhelm the court system; AI-enabled drug design and discovery could overwhelm the FDA clinical trial process; regulatory comments written and submitted by AIs could overwhelm agency rule-makings. Across the board, a world with democratized AGIs promises to render many legitimate functions of government inoperable due to “Denial of Service”-style attacks, like an email inbox without a spam filter. Just as AI voice cloning forced financial institutions to adopt new policies around voice verification, Federal agencies must systematically review their institutional readiness for a world with AGI.
This class of risks is manageable if and only if the rate of AI diffusion within the public sector keeps pace with the private sector. In contrast, the Biden Administration’s OMB Directive on AI established a multi-part approval process for deploying even the most rudimentary AI systems within government. The Trump Administration should take the opposite approach by empowering the Department of Government Efficiency (DOGE) to drive AI adoption within the Federal government in collaboration with OSTP, OMB and agency CIOs. For example, attorneys at the FTC could be empowered to use open source and securely hosted AI models in the discovery process. While many agencies will have bespoke AI needs or the need for fundamental process reforms to unlock AI use cases, the promise of AGI lies in its generality. A small number of generalist models should thus be authorized for use government-wide, analogous to how FedRAMP consolidated authorizations for cloud services.
At the same time, the potential of AI for monitoring and enforcement risks creating a world of “perfect compliance,” surveillance and censorship. Consider NHTSA’s recall of Tesla’s Full Self-Driving software for performing rolling stops – something drivers do every day, and which is illegal primarily so that traffic enforcers can police edge cases. Care must therefore be taken to ensure government use of AI preserves privacy and civil liberties, including against the incidental increase in the stringency of existing laws and regulations through exponentially improved enforcement.
Recommendations:
- Direct the General Services Administration to restore a version of FedRAMP’s Emerging Technology Prioritization program as an interim measure to encourage the adoption of generative AI in government.
- Streamline the FedRAMP authorization path for AI-specific services or updates that are low-risk or highly targeted (e.g., pilot uses of language models), and allow “sandboxing” or “pilot” designations within FedRAMP for AI tools with narrow compliance scopes.
- Strengthen FedRAMP’s “Authority to Operate” (ATO) reciprocity clause so that once an AI cloud service is approved at a certain impact level, other agencies can leverage that authorization without re-opening the accreditation process.
- Review FedRAMP and broader IT procurement guidelines for opportunities to promote adoption of open source AI and reduce compliance burdens that benefit Big Tech incumbents.
- Leverage the Technology Modernization Fund to rapidly modernize critical IT systems for AGI readiness.
- Systematically review the potential for AI-enabled mass surveillance through existing regulatory vectors, and develop recommendations for safeguarding against overreach.
Labor and opportunity
AI systems now seem likely to surpass human performance across most cognitive domains within the next year or two. By its very nature, Artificial General Intelligence has the potential to directly substitute for human intelligence and agency, including in domains that require forms of creativity, strategic judgment, and emotional intelligence that are ostensibly unique to humans. Moreover, the extent to which AI augments or substitutes for human labor will be determined largely by competitive market dynamics beyond the control of public policy. Maintaining “a pro-worker growth path for AI,” in the words of Vice President JD Vance, is thus a worthy aspiration but far from a fait accompli.
Technological innovation has historically served to augment human labor, drive productivity growth, and ultimately create more jobs than it destroys. Yet these are long-run dynamics that can often mask severe, short-run adjustment costs. The trade shock and wave of offshoring associated with China’s accession to the World Trade Organization comes to mind. The China Shock eliminated up to 2 million American manufacturing jobs concentrated in the rural Midwest and South Atlantic. Affected workers tended to be older, with skills specific to their industry. The new jobs that were created, meanwhile, were often in entirely different parts of the country. Dislocated workers were thus forced to turn to early retirement, disability insurance, or lower wage service sector jobs. This experience is relevant to the extent the economics of AGI mirror a sudden labor supply and offshoring shock that depresses labor demand in exposed sectors.
However, the economics of AGI differ from the China Shock in at least two crucial respects. First, AGI is likely to impact a wide cross section of jobs in various geographies and across the income scale. This is salutary to the extent that the jobs created in the wake of AI-driven automation are horizontally distributed, reducing geographical frictions to labor reallocation. And second, AI systems can be operated using natural language, and will thus likely drive education innovations that make reskilling substantially easier than in the past.
Recommendations:
- The Secretary of Labor should issue guidance clarifying that individuals whose primary occupations are at high risk of automation or AI displacement qualify for Dislocated Worker services under Title I of WIOA. Workforce Development Boards (WDBs) should be further instructed to incorporate data on AI‐related displacement risks into their local plans as required by 20 CFR § 679.560.
- The Employment and Training Administration (ETA) should initiate notice‐and‐comment to update WIOA performance accountability provisions under 20 CFR Part 677 toward developing standardized metrics for tracking reemployment outcomes in AI‐impacted industries (see: 29 U.S.C. 3141). Local WDBs should be instructed to prioritize short‐term AI upskilling programs, including micro‐credentials and AI literacy training, as allowable under 29 U.S.C. 3174 and state-level "rapid response" programs.
- The Secretary of Labor has substantial discretionary authority over demonstration grants and pilot programs (29 U.S.C. 3224). These authorities should be used to launch an AI Reskilling Demonstration Grant program for jumpstarting AI-assisted forms of accelerated education and retraining in collaboration with workforce boards, community colleges and private technology companies. Grants can be fine-tuned using priority scoring to ensure existing grants and E&T programs favor AI-robust skills and occupations, such as trades apprenticeships.
- Given the outsized role of states in labor policy implementation, DOL should make liberal use of WIOA waiver requests under 29 U.S.C. 3249(i) to provide states with flexibility to pursue new and experimental AI‐focused workforce strategies.
- Explore regulatory changes to enhance WARN Act data quality and frequency and improve reporting compliance (e.g. the creation of a standardized digital form). The WARN Act covers private-sector employers with 100 or more employees, requiring 60 days’ notice for plant closings that affect 50 or more employees within a 30-day period, or mass layoffs involving at least 500 employees (or 50 employees if they constitute at least one-third of the workforce). This is important in part because notice of layoff under WARN is often a prerequisite to qualifying as a dislocated worker under Title I.