This piece originally appeared at Second Best.
I. Oversight of AGI labs is prudent
- It is in the U.S. national interest to closely monitor frontier model capabilities.
- You can be ambivalent about the usefulness of most forms of AI regulation and still favor oversight of the frontier labs.
- As a temporary measure, using compute thresholds to pick out the AGI labs for safety-testing and disclosures is as light-touch and well-targeted as it gets.
- The dogma that we should only regulate technologies based on “use” or “risk” may sound more market-friendly, but often results in a far broader regulatory scope than technology-specific approaches (see: the EU AI Act).
- Training compute is an imperfect but robust proxy for model capability, and has the immense virtue of simplicity.
- The use of the Defense Production Act to require disclosures from frontier labs is appropriate given the unique affordances available to the Department of Defense, and the bona fide national security risks associated with sufficiently advanced forms of AI.
- You can question the nearness of AGI / superintelligence / other “dual use” capabilities and still see the invocation of the DPA as prudent for the option value it provides under conditions of fundamental uncertainty.
- Requiring safety testing and disclosures for the outputs of $100 million-plus training runs is neither an example of regulatory capture nor a meaningful barrier to entry relative to the cost of compute (a rough back-of-the-envelope compute sketch follows this list).
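A rough back-of-the-envelope sketch of the compute arithmetic behind the theses above. This is my own illustration, not something from the original post: it uses the common C ≈ 6 · N · D approximation for training FLOPs, a hypothetical parameter and token count, an assumed blended price per FLOP, and an illustrative reporting threshold of 10^26 operations.

```python
# Back-of-the-envelope sketch (hypothetical numbers throughout; not figures
# for any real model, and the threshold is illustrative only).

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs via the common ~6 * N * D rule of thumb."""
    return 6.0 * n_params * n_tokens


def rough_cost_usd(flops: float, usd_per_flop: float = 2e-18) -> float:
    """Very rough dollar cost under an assumed blended price per FLOP."""
    return flops * usd_per_flop


THRESHOLD_FLOPS = 1e26  # illustrative reporting threshold, for comparison only

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
flops = training_flops(1e12, 2e13)
print(f"~{flops:.1e} FLOPs, ~${rough_cost_usd(flops):,.0f}, "
      f"above threshold: {flops >= THRESHOLD_FLOPS}")
```

On these assumed numbers the run lands around 10^26 FLOPs and a few hundred million dollars, which is why a compute threshold and a dollar threshold pick out roughly the same handful of labs.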
II. Most proposed “AI regulations” are ill-conceived or premature
- There is a substantial premium on discretion and autonomy in government policymaking whenever events are fast moving and uncertain, as with AI.
- It is unwise to craft comprehensive statutory regulation at a technological inflection point, as the basic ontology of what is being regulated is in flux.
- The optimal policy response to AI likely combines targeted regulation with comprehensive deregulation across most sectors.
- Regulations codify rules, standards and processes fit for a particular mode of production and industry structure, and are liable to obsolesce in periods of rapid technological change.
- The benefits of deregulation come less from static efficiency gains than from the greater capacity of markets and governments to adapt to innovation.
- The main regulatory barriers to the commercial adoption of AI mostly lie in legacy laws and regulations, not in prospective AI-specific laws.
- The shorter the timeline to AGI, the sooner policymakers and organizations should switch focus to “bracing for impact.”
- The most robust forms of AI governance will involve the infrastructure and hardware layers.
- Existing laws and regulations are calibrated with the expectation of imperfect enforcement.
- To the extent AI greatly reduces monitoring and enforcement costs, the de facto stringency of all existing laws and regulations will greatly increase absent a broader liberalization.
- States should focus on public sector modernization and regulatory sandboxes, and avoid creating an incompatible patchwork of AI safety regulations.
III. AI progress is accelerating, not plateauing
- The last 12 months of AI progress were the slowest they’ll be for the foreseeable future.
- Scaling LLMs still has a long way to go, but will not result in superintelligence on its own, as a model that minimizes cross-entropy loss over human-generated data converges toward human-level intelligence (a short formal statement follows this list).
- Exceeding human-level reasoning will require training methods beyond next token prediction, such as reinforcement learning and self-play, that (once working) will reap immediate benefits from scale.
- RL-based threat models have been discounted prematurely.
- Future AI breakthroughs could be fairly discontinuous, particularly with respect to agents.
- AGI may cause a speed-up in R&D and quickly go superhuman, but is unlikely to “foom” into a god-like ASI given compute bottlenecks and the irreducibility of high dimensional vector spaces, i.e. Ray Kurzweil is underrated.
- Recursive self-improvement and meta-learning may nonetheless give rise to dangerously powerful AI systems within the bounds of existing hardware.
- Slow take-offs eventually become hard.
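A minimal formal statement of the cross-entropy argument above, in standard notation rather than anything taken from the post: pre-training minimizes the expected negative log-likelihood of human-generated text, which is bounded below by the entropy of that distribution, so driving the loss toward its floor amounts to matching the human data distribution rather than exceeding it.

```latex
% Next-token prediction objective over the human-generated data distribution D.
\mathcal{L}(\theta)
  = \mathbb{E}_{x \sim D}\left[ -\sum_{t} \log p_\theta\!\left(x_t \mid x_{<t}\right) \right]
  \;\geq\; H(D),
% with equality only when p_theta reproduces D itself. This is the formal sense
% in which pure next-token prediction converges toward, rather than beyond,
% human-level behavior, and why training signals like RL and self-play are of a
% different kind.
```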
IV. Open source is mostly a red herring
- The delta between proprietary AI models and open source will grow over time, even as smaller, open models become much more capable.
- Within the next two years, frontier models will cross capability thresholds that even many open source advocates will agree are dangerous to open source ex ante.
- No major open source AI model has been dangerous to date, while the benefits from open sourcing models like Llama 3 and AlphaFold are immense.
- True “open source” means open sourcing training data and code, not just model weights, which is essential for avoiding the spread of models with Sleeper Agents or contaminated data.
- The most dangerous AI models will be expensive to train and only feasible for large companies, at least initially, suggesting our focus should be on monitoring frontier capabilities.
- The open vs. closed source debate is mainly a debate about Meta, not deeper philosophical ideals.
- It is not in Meta’s shareholders’ interest to unleash an unfriendly AI into the world.
- Companies governed by nonprofit boards and CEOs who don’t take compensation face lower-powered incentives against AI x-risk than your typical publicly traded company.
- Lower-tier AI risks, like from the proliferation of deepfakes, are collective action problems that will be primarily mitigated through defensive technologies and institutional adaptation.
- Restrictions on open source risk undermining adaptation by incidentally restricting the diffusion of defensive forms of AI.
- Trying to restrict access to capabilities that are widely available and/or cheap to train from scratch is pointless in a free society, and likely to do more harm than good.
- Nonetheless, releasing an exotic animal into the wild is a felony.
V. Accelerate vs. decelerate is a false dichotomy
- Decisions made in the next decade are more highly levered to shape the future of humanity than at any point in human history.
- You can love technology and be an “accelerationist” across virtually every domain — housing, transportation, healthcare, space commercialization, etc. — and still be concerned about future AI risks.
- “Accelerate vs. decelerate” imagines technology as a linear process when technological innovation is more like a search down branching paths.
- If the AI transition is a civilizational bottleneck (a “Great Filter”), survival likely depends more on which paths we are going down than at what speed, except insofar as speed collapses our window to shift paths.
- Building an AGI carries singular risks that merit being treated as a scientific endeavor, pursued with seriousness and trepidation.
- Tribal mood affiliations undermine epistemic rationality.
- e/acc and EA are two sides of the same rationalist coin: EA is rooted in Christian humanism; e/acc in Nietzschean atheism.
- The de facto lobby for “accelerationism” in Washington, D.C., vastly outstrips the lobby for AI safety.
- It genuinely isn’t obvious whether Trump or Biden is better for AI x-risk.
- EAs have more relationships on the Democratic side, but can work in either administration and are a tiny contingent all things considered.
- Libertarians, e/accs, and Christian conservatives — whatever their faults — have a far more realistic conception of AI and government than your average progressive.
- The more one thinks AI goes badly by default, the more one should favor a second Trump term precisely because he is so much higher variance.
- Steve Bannon believes the singularity is near and a serious existential risk; Janet Haven thinks AI is Web3 all over again.
VI. The AI wave is inevitable, superintelligence isn’t
- Building a unified superintelligence is an ideological goal, not a fait accompli.
- The race to build a superintelligence is driven by two or three U.S. companies with significant degrees of freedom over near-term developments, as distinguished from the inevitability of the AI transition more generally.
- Creating a superintelligence is inherently dangerous and destabilizing, independent of the hardness of alignment.
- We can use advanced AI to accelerate science, cure diseases, solve fusion, etc., without ever building a unified superintelligence.
- Creating an ASI is a direct threat to the sovereign.
- AGI labs led by childless Buddhists with alt accounts are probably more risk tolerant than is optimal.
- Sam Altman and Sam Bankman-Fried are more the same than different.
- High functioning psychopaths demonstrate anti-social behaviors in their youth but learn to compensate in adulthood, becoming adept social manipulators with grandiose visions and a drive to “win” at all costs.
- Corporate malfeasance is mostly driven by bad incentives and “techniques of neutralization” — convenient excuses for overriding normative constraints, such as “If I didn’t, someone else would.”
VII. Technological transitions cause regime changes
- Even under best case scenarios, an intelligence explosion is likely to induce state collapse / regime change and other severe collective action problems that will be hard to adapt to in real time.
- Government bureaucracies are themselves highly exposed to disruption by AI, and will need “firmware-level” reforms to adapt and keep up, i.e. reforms to civil service, procurement, administrative procedure, and agency structure.
- Congress will need to have a degree of legislative productivity not seen since FDR.
- Inhibiting the diffusion of AI in the public sector through additional layers of process and oversight (such as through Biden’s OMB directive) tangibly raises the risk of systemic government failure.
- The rapid diffusion of AI agents with approximately human-level reasoning and planning abilities is likely sufficient to destabilize most existing U.S. institutions.
- The reference class of prior technological transitions (agricultural revolution, printing press, industrialization) all feature regime changes to varying degrees.
- Seemingly minor technological developments can affect large scale social dynamics in equilibrium (see: Social media and the Arab Spring or the Stirrup Thesis).
VIII. Institutional regime changes are packaged deals
- Governments and markets are both kinds of spontaneous orders, making the 19th and 20th century conception of liberal democratic capitalism a technologically-contingent equilibrium.
- Technological transitions are packaged deals, e.g. free markets and the industrial revolution went hand-in-hand with the rise of “big government” (see Tyler Cowen on The Paradox of Libertarianism).
- The AI-native institutions created in the wake of an intelligence explosion are unlikely to have much continuity with liberal democracy as we now know it.
- In steady state, maximally democratized AI could paradoxically hasten the rise of an AI Leviathan by generating irreversible negative externalities that spur demand for ubiquitous surveillance and social control.
- Periods of rapid technological change tend to shuffle existing public choice / political economy constraints, making politics more chaotic and less predictable.
- Periods of rapid technological change tend to disrupt global power balances and make hot wars more likely.
- Periods of rapid technological change tend to be accompanied by utopian political and religious movements that usually end badly.
- Explosive growth scenarios imply massive property rights violations.
- A significant increase in productivity growth will exacerbate Baumol’s Cost Disease and drive mass adoption of AI policing, teachers, nurses, etc.
- Technological unemployment is only possible in the limit where market capitalism collapses, say into a forager-style gift economy.
IX. Dismissing AGI risks as “sci-fi” is a failure of imagination
- If one’s forecast of 2050 doesn’t resemble science fiction, it’s implausible.
- There is a massive difference between something sounding “sci-fi” and being physically unrealizable.
- Terminator analogies are underrated.
- Consciousness evolved because it serves a functional purpose and will be an inevitable feature of certain AI systems.
- Human consciousness is scale-dependent and not guaranteed to exist in minds that are vastly larger or less computationally bounded.
- Joscha Bach’s Cyber Animism is the best candidate for a post-AI metaphysics.
- The creation of artificial minds is more likely to lead to the demotion of humans’ moral status than to the promotion of artificial minds into moral persons.
- Thermodynamics may favor futures where our civilization grows and expands, but that doesn’t preclude futures dominated by unconscious replicators.
- Finite-time singularities are indicators of a phase transition, not of a bona fide singularity (a one-line worked example follows this list).
- It is an open question whether the AI phase transition will be more like the printing press or photosynthesis.
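A one-line worked example of the finite-time-singularity point above, using a textbook hyperbolic-growth model rather than anything from the post: super-exponential growth diverges at a finite time, and the divergence marks where the model's assumptions break down, not a literal infinity.

```latex
% Hyperbolic growth and its finite-time blow-up (textbook example, my own illustration).
\frac{dx}{dt} = k x^{2}
  \quad\Longrightarrow\quad
  x(t) = \frac{x_{0}}{1 - k x_{0} t},
  \qquad
  t^{*} = \frac{1}{k x_{0}},
% so the trajectory diverges as t approaches t*; in practice the assumptions
% driving the growth give out first, i.e. the "singularity" reads as a phase
% transition rather than an actual infinity.
```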
X. Biology is an information technology
- The complexity of biology arises from processes resembling gradient descent and diffusion guided by comparatively simple reward signals and hyperparameters (a toy sketch follows this list).
- Full volitional control over biology is achievable, enabling the creation of arbitrary organisms that wouldn’t normally be “evolvable.”
- Superintelligent humans with IQs on the order of 1,000 may be possible through genetic engineering.
- Indefinite life extension is a tragedy of the anticommons.
- There are more ways for a post-human transition to go poorly than to go well.
- Natural constraints are often better than man-made ones because there’s no one to hold responsible.
- We live in base reality, and in nature there is no such thing as plot armor.
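A toy sketch of the gradient-descent analogy in the first thesis of this section. It is entirely my own illustration and not a model of biology: the point is only that a fixed, simple reward signal plus a couple of hyperparameters can steer an iterative process toward a structured outcome (here written as gradient ascent on a reward, which is equivalent to descent on its negative).

```python
# Toy gradient ascent on a simple, fixed "reward signal" (illustrative only).

def reward(x: float) -> float:
    """A simple reward signal, peaked at x = 3."""
    return -((x - 3.0) ** 2)


def grad_reward(x: float) -> float:
    """Analytic gradient of the reward with respect to x."""
    return -2.0 * (x - 3.0)


learning_rate = 0.1  # hyperparameter
steps = 50           # hyperparameter

x = 0.0
for _ in range(steps):
    x += learning_rate * grad_reward(x)  # climb the reward signal

print(f"x after {steps} steps: {x:.4f}, reward {reward(x):.6f} (optimum at x = 3.0)")
```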
Update: Zvi responds and I add some clarifications in his comment section.