Helen Toner, a former OpenAI board member and director of strategy at Georgetown University's Center for Security and Emerging Technology, fears that Congress could react in a "knee-jerk" way on AI policymaking if the status quo doesn't change.
"Congress right now – I don't know if anyone's noticed – isn't very functional and isn't very good at passing laws unless there's a major crisis," Toner said Tuesday at TechCrunch's StrictlyVC event in Washington, D.C. "AI is going to be a big, powerful technology – at some point, something is going to go wrong. And if the only laws we get are passed reflexively in response to a major crisis, is that going to be productive?"
Toner's comments, which came ahead of a White House-sponsored summit on Thursday on using artificial intelligence to boost American innovation, underscore the long-standing deadlock in U.S. AI policy.
In 2023, President Joe Biden signed an executive order implementing certain consumer protections related to AI and requiring developers of AI systems to share the results of safety tests with relevant government agencies. Earlier that same year, the National Institute of Standards and Technology, which sets federal technology standards, released a roadmap for identifying and mitigating the emerging risks of AI.
But Congress has yet to pass any AI legislation, let alone anything as comprehensive as the EU's recently enacted AI Act. And with 2024 being a key election year, that is unlikely to change any time soon.
As a report by the Brookings Institution notes, the vacuum in federal legislation has led state and local governments to rush to fill the gap. In 2023, state legislatures introduced over 440% more AI-related bills than in 2022; according to the lobbying group TechNet, nearly 400 new AI laws have been proposed at the state level in recent months.
Last month, California lawmakers passed roughly 30 new artificial intelligence bills designed to protect consumers and jobs. Colorado recently passed a measure requiring AI companies to exercise "reasonable care" in developing the technology to avoid discrimination. And in March, Tennessee Governor Bill Lee signed the ELVIS Act, which bans the artificial cloning of musicians' voices or likenesses without their explicit consent.
This patchwork of rules threatens to create uncertainty for industry and consumers alike.
Consider this example: many state laws regulating AI define "automated decision-making" – a term that broadly refers to AI algorithms making decisions about, say, whether a business gets a loan – differently. Some laws don't consider a decision to be "automated" as long as it is made with some level of human involvement. Others are stricter.
Toner believes that even a high-level federal mandate would be preferable to the current state of affairs.
"Some of the smarter and more thoughtful actors I've seen in this space are trying to say, OK, what relatively simple, relatively common-sense guardrails can we put in place now to make future crises – future big problems – less severe, and basically make it less likely that you end up in a situation later where you have to make rapid and ill-conceived responses," she said.