Common Sense on AI

This piece was originally published in American Affairs.

Following successive releases of advanced AIs since ChatGPT, a group of leading technologists has called for an immediate, six-month moratorium on training large AI models. Their open letter, signed by Elon Musk and Apple cofounder Steve Wozniak, raises alarms that “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

The response to the letter among AI experts could not have been more polarized. Many of the world’s leading papers have featured articles written by notables in the field claiming that sufficiently advanced AI systems run the risk of destroying civilization as we know it. These experts, and many engineers at the forefront of AI, argue that humanity is careening toward a future in which “superintelligent” machines may be more powerful than humans and ultimately uncontrollable—a combination that does not bode well for a world ruled by natural selection. For this existential risk camp, “God-like AI” wiping out humanity is an all-too-real possibility.

Other veterans of the field have vehemently opposed a pause—and roundly mocked the existential concerns behind it. Yann LeCun, recipient of the prestigious Turing Award for his pioneering AI work, quipped that “Before we can get to ‘God-like AI’ we’ll need to get through ‘Dog-like AI.’” In this view, halting AI development over misguided, unproven sci-fi fears is an impediment to useful progress and a very poor precedent for future scientific advancement in a field with vast potential to promote human flourishing.

Continue reading in American Affairs.
