California’s Push to Regulate AI Goes too Far

May 29, 2024


A wise man once said, “when dealing with exponential growth, the time to act is when it feels too early.” This is true of pandemics, and the same is said of exponential technologies like artificial intelligence: either we regulate too early, or we wait for the most severe risks to manifest, at which point it will be too late.

At least that is the argument made by many proponents of SB1047, a controversial bill to regulate frontier AI development that, in the spirit of acting too early, sailed through California’s State Senate last week with near-unanimous approval. The bill now heads to the State Assembly, where it will likely undergo significant amendments, as even its original sponsor has acknowledged its flaws.

The stated goal of SB1047 is to promote the safe development and deployment of powerful AI systems while preserving the innovative capacity of California’s vibrant startup ecosystem. At first blush, the bill appears to strike a reasonable balance, imposing safety-testing and reporting mandates on only the largest, most powerful AI models—models larger than any that have been deployed to date—and targeting only the most severe forms of AI risk, such as models with “hazardous capabilities” that can facilitate “the creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.”

If that were all the bill did, SB1047 would be perfectly sensible, yet the devil is in the details.

The first problem is with the bill’s vague scope. While the bill states that it applies only to models trained with “10^26 integer or floating-point operations” and up, it further states that this includes models “that could reasonably be expected to have similar performance” on common benchmarks, or that are worse on any specific benchmark but of “otherwise similar general capability.” What “similar” means in this context is left up to regulators to define, negating the clarity of the FLOP-based threshold, and stoking fears that the range of models under California’s regulatory purview could be interpreted to be far broader than advertised.

For context, 10^26 FLOPs is a measure of the computational resources used in an AI model’s initial “pre-training” stage. It is at least ten times more compute than has gone into any AI model deployed to date, though the largest frontier models will likely cross the threshold this year or next. Training at this scale requires hundreds of millions of dollars in capital expenditures and is thus not likely to burden smaller AI companies or startups anytime soon. Nonetheless, smaller models could conceivably achieve a “similar general capability” to a frontier model while performing strictly worse on any given benchmark, and thereby become exposed to regulation despite falling well under the threshold. Striking such vague and discretionary language from the bill’s definitions is thus needed to assuage the startup community’s quite valid concerns about mission creep.
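For a rough sense of the scale involved, here is a minimal back-of-envelope sketch in Python. It leans on the common C ≈ 6·N·D approximation for dense-transformer pre-training compute; the parameter and token counts, GPU throughput, utilization, and dollar figures are illustrative assumptions, not numbers taken from the bill.

```python
# Rough back-of-envelope: what does a 10^26-FLOP training run look like?
# Uses the common approximation C ~= 6 * N * D for dense-transformer
# pre-training compute (N = parameters, D = training tokens). All specific
# numbers below are illustrative assumptions, not figures from SB1047.

THRESHOLD_FLOPS = 1e26  # SB1047's compute threshold for covered models


def training_flops(params: float, tokens: float) -> float:
    """Approximate pre-training compute via C ~= 6 * N * D."""
    return 6 * params * tokens


# A hypothetical frontier-scale run: 1 trillion parameters on 15 trillion tokens.
run = training_flops(params=1e12, tokens=15e12)
print(f"Hypothetical run: {run:.1e} FLOPs "
      f"({'above' if run >= THRESHOLD_FLOPS else 'just below'} the 1e26 threshold)")

# Rough cost of a threshold-sized run, assuming H100-class GPUs at ~1e15
# FLOP/s peak, ~40% realized utilization, and an illustrative $2/GPU-hour rate.
effective_flops_per_gpu_sec = 1e15 * 0.40
gpu_hours = THRESHOLD_FLOPS / effective_flops_per_gpu_sec / 3600
print(f"~{gpu_hours:.1e} GPU-hours, roughly ${gpu_hours * 2:,.0f} in compute")
```

Under these assumptions, a threshold-sized run works out to tens of millions of GPU-hours and a compute bill on the order of $100 million or more, consistent with the claim that only the very largest developers are in scope for now.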

A second problem is the bill’s potential chilling effect on open-source AI. Under the bill, model makers must reserve “the capability to promptly enact a full shutdown of the covered model,” including the ability to force “the cessation of operation of a covered model, including all copies and derivative models, on all computers and storage devices within custody, control, or possession of a person, including any computer or storage device remotely provided by agreement.” Of course, complying with this requirement is only possible if a model remains closed-source. That holds even for models released with built-in safeguards, since such safeguards can be stripped out through post-training. Developers seeking to open-source their models thus run the risk of being held liable for derivative models that add or subtract capabilities, discouraging covered models from being open-sourced in the first place.

Model developers may apply for a “limited duty exemption” from these requirements if they certify the safety of their model under penalty of perjury. However, this determination must be made before training even begins. Beyond adding a regulatory burden to the mere training of a model, it is simply not always possible to know a model’s capabilities in advance. These prerequisite duties should be limited to provisions dealing with operational security and cybersecurity, which should arguably be in place prior to any large training run. Certification of the safety properties of a specific model, by contrast, should be required, at most, prior to the model’s deployment. Otherwise, the Frontier Model Division created by SB1047 to oversee frontier AI companies risks becoming a bureaucratic bottleneck to AI research and development more broadly.

A third problem with the bill is in how it attributes liability to model makers for potential damages caused by end users, including users of derivative models. The bill sets a seemingly high bar by defining hazards in terms of “mass casualty events” or AI-enabled cybercrime resulting in “at least five hundred million dollars of damage.” Yet, as others have pointed out, cyberattacks resulting in $500 million in damages are not unheard of, particularly when hard-to-assess indirect costs are included. The infamous MafiaBoy incident in 2000, for example, saw a 15-year-old boy use a simple DDoS-style attack to take down a number of popular websites for upwards of five days, including Yahoo, eBay, and Amazon. Estimates of the damage caused by the attack range widely, from as low as $7.5 million to as high as $1.2 billion when one includes the indirect cost of forgone economic activity.

But beyond the inherent challenge of assessing monetary damages from cyberattacks, the deeper issue with SB1047 lies in how it makes model developers presumptively responsible for preventing others from misusing their model. MafiaBoy carried out his attack with the aid of a personal computer, but it would have been ludicrous for prosecutors to go after Microsoft on that basis. Nor should Microsoft have had to certify that their PCs will never be used to assist in cybercrime before being sold on the market. PCs are general-purpose computers, which means they can be used for good or ill. The same is true for generalist forms of AI. If a hacker fine-tunes an open-source AI model like Llama3 to generate malicious code or to be the brains of a malware bot, any damages caused by the model should be the fault of the hacker—not Meta.

There is no reason to abandon this principle for the largest AI models. Generality implies that not every misuse can be prevented in advance; whether a capability is harmful is use- and context-dependent. The same multimodal capabilities that let a visually impaired person see their surroundings, for instance, could also be used by a killer drone to identify its next target. The potential for the latter thus conceivably makes AI models with basic vision capabilities “hazardous” in the wrong hands. That is not to say that there aren’t genuinely dangerous AI capabilities coming down the pike, or that there should never be any restrictions on the kinds of capabilities that developers are allowed to deploy or open-source. It does, however, suggest that SB1047’s approach to enforcement is both lacking in specificity and ethically dubious in how it assigns legal responsibility, particularly given the inherently dual-use nature of general AI.

The bill has assorted other problems as well. The Frontier Model Division would be financed with a fee applied to large AI companies. This is presumably due to California’s large budget deficit, but is a terrible policy practice, as agencies funded by the industries they regulate are both less democratically accountable and more prone to regulatory capture. Additionally, SB1047 would ban what it describes as compute “price discrimination,” requiring compute providers to offer a clear and uniform price schedule for their users. This is a kind of “net neutrality” for GPUs and cloud services that fails to account for the role of dynamic pricing in compute provisioning, and is likely redundant given the rapidly falling cost of compute more broadly.

On the positive side, the bill would create strong whistleblower protections for AI researchers at the leading labs. These are good and sorely needed, as demonstrated by recent events, but are insufficient to offset the negatives of the bill overall. While there may be a kernel of a good policy hidden within SB1047, the bill encases ostensibly reasonable measures within a chillingly vague enforcement regime that risks jeopardizing America’s global AI leadership outright. Instead of hastily implemented state-level regulation, the best parts of SB1047—even if salvageable through the amendment process—would be better coming from Congress and the federal government.
