Should we regulate AI development significantly?

This debate tackles how far governments should go in constraining how AI is researched, built, and deployed. Pro-regulation arguments stress risks such as mass surveillance, discrimination, labor disruption, concentration of power, and even extreme safety scenarios, holding that voluntary standards are not enough. Opponents emphasize the dangers of overregulation: freezing innovation, entrenching incumbents, and pushing development into less accountable jurisdictions. The key tension is between precaution and the freedom to experiment.

Below you can follow the full exchange of arguments for and against the thesis.

Poll: Yes (15 votes) | No (3 votes)

YES: Oumou Ly

NO: Dean Ball

Oumou Ly

Oumou Ly is a nonresident fellow at the Digital Forensic Research Lab at the Atlantic Council. She was Senior Advisor for Technology and Ecosystem Security at the White House and a resident fellow at Harvard Law School's Berkman Klein Center for Internet and Society. She hosts a web series titled The Breakdown, regularly provides media commentary on these issues, and has published in The Hill.

Dean Ball

Dean Ball is a Senior Fellow at the Foundation for American Innovation and Fathom, and the author of the Hyperdimensional newsletter. Previously, he served as a Senior Policy Advisor at the White House OSTP, where he drafted America's AI Action Plan, and advised the National Science Foundation. He began his career at the Mercatus Center, and his scholarship on AI and governance has been published by major institutions including the Hoover Institution and the Carnegie Endowment.
