Can the U.S. meaningfully regulate AI? It's not at all clear yet. Policymakers have made progress in recent months, but they've also faced setbacks, highlighting the difficulty of passing laws that impose restrictions on the technology.
In March, Tennessee became the first state to protect voice artists from unauthorized AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI policy. And in September, California Gov. Gavin Newsom signed dozens of AI-related safety bills, a few of which require companies to disclose details about their AI training.
However, the U.S. still lacks a federal AI policy comparable to the EU's AI Act. Even at the state level, regulation continues to face major obstacles.
After a protracted battle with special interests, Governor Newsom vetoed SB 1047, a bill that would have imposed sweeping safety and transparency requirements on companies developing AI. Another California bill, targeting distributors of AI deepfakes on social media, was put on hold this fall pending the outcome of a lawsuit.
But there may be reason for optimism, according to Jessica Newman, co-director of the AI Policy Hub at UC Berkeley. In a panel discussion on AI governance at TechCrunch Disrupt 2024, Newman noted that many federal laws may not have been written with AI in mind but still apply to AI, such as anti-discrimination and consumer protection laws.
“We often hear that the U.S. is kind of a 'Wild West' compared to what's happening in the EU,” Newman said, “but I think that's an exaggeration, and the reality is more nuanced.”
To Newman's point, the Federal Trade Commission has forced companies that secretly collected data to delete their AI models, and it is investigating whether sales of AI startups to big tech companies violate antitrust rules. Meanwhile, the Federal Communications Commission has declared AI-voiced robocalls illegal and has proposed a rule requiring the disclosure of AI-generated content in political advertising.
President Joe Biden has also tried to put certain AI rules in place. About a year ago, Biden signed the AI Executive Order, which supports the voluntary reporting and benchmarking practices that many AI companies had already chosen to implement.
One outcome of the executive order was the U.S. AI Safety Institute (AISI), a federal body that studies risks in AI systems. Operating within the National Institute of Standards and Technology, the AISI has research partnerships with major AI labs such as OpenAI and Anthropic.
Still, the AISI could be disbanded by a simple repeal of Biden's executive order. In October, a coalition of more than 60 organizations called on Congress to pass legislation codifying the AISI before the end of the year.
“I think all of us as Americans have a common interest in ensuring that we mitigate the potential downsides of the technology,” said AISI director Elizabeth Kelly, who also took part in the panel.
So is there hope for comprehensive AI regulation in the states? The failure of SB 1047, which Newman described as a “light-touch” bill with industry input, isn't exactly encouraging. Authored by California State Senator Scott Wiener, SB 1047 was opposed by many in Silicon Valley, including high-profile technologists like Meta's chief AI scientist Yann LeCun.
That said, Wiener, another Disrupt panelist, said he wouldn't have drafted the bill any differently, and he's confident that comprehensive AI regulation will ultimately prevail.
“I think it laid the groundwork for future efforts,” he said. “Hopefully we can do something that brings more people together, because the reality that all the big labs have already acknowledged is that the risks [of AI] are real, and we want to test for them.”
Indeed, Anthropic last week warned of AI catastrophe if governments fail to implement regulation in the next 18 months.
Opponents have only doubled down on their rhetoric. Last Monday, Khosla Ventures founder Vinod Khosla called Wiener “totally clueless” and “unqualified” to regulate the real dangers of AI. And Microsoft and Andreessen Horowitz released a statement opposing AI regulations that might harm their financial interests.
But Newman expects that pressure to unify the growing patchwork of state-by-state AI rules will ultimately lead to a stronger legislative solution. In lieu of consensus on a model of regulation, state policymakers have introduced nearly 700 pieces of AI legislation this year alone.
“I feel like companies don't want an environment of a patchwork regulatory system where every state is different,” she said, “and I think there will be increasing pressure to have something at the federal level that provides more clarity and reduces some of that uncertainty.”