
This piece originally appeared at Hyperdimensional.
Introduction
Every AI policy researcher has their favorite analogy. No AI policy discussion is complete without at least one participant bringing up the way we regulate cars, or airplanes, or nuclear weapons, or electricity, or books, or the internet. Implicit in these analogies is the idea that we should regulate AI like we regulate those things—that we should take some existing regulatory or legal framework off the shelf and apply it, with a fresh coat of paint, to the governance of digital minds.
There’s nothing wrong with reasoning by analogy; I do it myself regularly. Yet I’ve come to believe that these analogies can be dangerous, not necessarily because they mislead us, but because they constrain our imaginations. There’s the obvious fact that mechanized intelligence is not much like any of those earlier technologies, but there’s a deeper point, too: AI is, itself, a governance technology.
The people governing advanced AI will themselves have access to advanced AI, and we do not know exactly what governance capabilities it will enable. But governance is a cognitive activity, and AI is mechanized cognition, so it would be surprising if advanced AI did not enable at least some novel governance capabilities.