AI companies in California breathed a collective sigh of relief when Gov. Gavin Newsom vetoed the AI safety bill SB 1047, which the state Senate passed earlier this month.
Had it been signed into law, the controversial bill would have required additional safety checks for AI models exceeding a training compute or cost threshold. These models would have needed a “kill switch,” and their developers would have faced hefty fines if the models were used to cause “critical harm.”
In his letter to the California State Senate, Newsom explained his reasons for vetoing the bill.
He noted that one of the reasons California is home to 32 of the world's top 50 AI companies is the state's “free-spirited cultivation of intellectual freedom.” He didn’t mention the possibility that some of these companies might leave California, but he hinted at the impact the bill would have on them.
Newsom said the main reason for vetoing the bill was that it was too broad and that its threshold for regulation didn’t take into account the actual risks.
He said: “By focusing only on the most expensive and large-scale models, SB 1047 creates a regulatory framework that could give the public a false sense of security about controlling this rapidly evolving technology. Smaller, specialized models could prove just as dangerous, or even more dangerous, than the models targeted by SB 1047, potentially at the cost of curtailing the very innovation that drives progress for the public good.”
Newsom said that regulating AI risks is necessary, but that focusing on dangerous applications rather than the blanket approach of SB 1047 is a better option.
“SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data. Instead, the bill applies strict standards to even the most basic functions, so long as they are provided by a large system. I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom said.
While Newsom declined to sign SB 1047, he pointed to other AI bills he signed this month as evidence that he takes the risks associated with AI seriously.
He summarized his commitment to safety and AI advancement by saying, “Given the challenges – protecting against real threats without unnecessarily thwarting this technology's promise to advance the public good – we must get it right.”
Senator Scott Wiener was understandably unhappy that Newsom refused to sign the bill he authored.
Wiener said: “This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and well-being of the public and the future of the planet… This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress's continued paralysis on meaningful regulation of the tech industry.”
While Wiener lamented the bill's failure, Meta's Yann LeCun and venture capitalist Marc Andreessen publicly thanked Newsom for the veto.
We'll have to wait and see whether Newsom's decision is an example of forward-looking leadership or a cause for regret.