How Should AI Liability Work? (Part I)


February 20, 2025


This piece originally appeared at Hyperdimensional.

During the SB 1047 debate, I noticed that there was a great deal of confusion—my own included—about liability. Why is it precisely that software seems, for the most part, to evade America’s famously capacious notions of liability? Why does America have such an expansive liability system in the first place? What is “reasonable care,” after all? Is AI, being software, free from liability exposure today unless an intrusive legislator decides to change the status quo (preview: the answer to this one is “no”)? How does liability for AI work today, and how should it work? It turned out that to answer those questions I had to trace the history of American liability from the late 19th century to the present day.

Answering the questions above has been a journey. This week and next, I’d like to tell you what I’ve found so far. This week’s essay will tell the story of how we got to where we are, a story that has fascinating parallels to current discussions about the need for liability in AI. Next week’s essay will deal with how the American liability system, unchecked, could subsume AI, and what I believe should be done.

Continue reading at Hyperdimensional.
