
This piece originally appeared at Hyperdimensional.
Introduction
There’s a fundamental problem in AI governance: we don’t know what we are attempting to govern. We know what AI is, at an object level. It’s statistics. It’s “just math.” Of course, an awful lot of things can be reduced to “just math,” so that assertion is not as useful as it may seem.
What kind of technology will advanced AI be? What will it feel like to live in a society that has it?
You cannot properly formulate AI policy without at least some intuitions about these questions. And without making those intuitions explicit, AI policy becomes a disembodied technocratic sub-field, devoid of any justification or motivation for the measures it proposes.