
OpenAI joins resistance to California AI safety law


OpenAI has sharply criticized a California bill aimed at ensuring the safe use of high-performance artificial intelligence, arguing that the new controls would jeopardize the technology's growth in the state and joining a last-minute lobbying campaign by investors and AI groups seeking to block the bill.

The bill, SB 1047, threatens "California's unique status as a world leader in artificial intelligence," wrote Jason Kwon, the company's chief strategy officer, in a letter to Scott Wiener, the California state senator behind the bill.

The bill could "slow the pace of innovation and lead California's top engineers and entrepreneurs to leave the state in search of better opportunities elsewhere," he added.

SB 1047 has divided Silicon Valley. While there is widespread agreement that the risks posed by super-powerful new AI models must be contained, critics argue that Wiener's proposals would stifle startups, benefit America's rivals and undermine California's position at the epicenter of the AI boom.

OpenAI is the latest startup to oppose parts of the bill, and also the most prominent, thanks largely to the popularity of its chatbot ChatGPT and a $13 billion commitment from its partner Microsoft.

OpenAI says it supports regulation to ensure the safe development and deployment of AI systems, but argues in the letter, first reported by Bloomberg, that such rules should come from the federal government rather than from individual states.

In a response on Wednesday, Wiener said he agreed that the federal government should take the lead but was "skeptical" that Congress would act. He also criticized the "hackneyed argument" that technology startups would relocate if the bill passed, noting that out-of-state companies would still have to comply with the law in order to do business locally.

The California State Assembly is scheduled to vote on the bill by the end of the month. If it passes, Governor Gavin Newsom will then decide whether to sign or veto it.

Silicon Valley technology groups and investors, including Anthropic, Andreessen Horowitz and Y Combinator, have mounted a lobbying campaign against Wiener's proposals for a stricter safety framework. Nancy Pelosi, the former Speaker of the House and a representative for California, also released a statement against the bill last week, calling it "well-intentioned but ill-informed."

Among the most controversial elements of the senator's original proposal was a requirement that AI companies guarantee to a new state body that they would not develop models with "dangerous capabilities," and that they create a "kill switch" to turn off their most powerful models.

Opponents claimed the bill focused on hypothetical risks and exposed founders to "extreme" liability.

Some of those requirements were softened last week by amendments to the bill. For example, AI developers' civil liability was limited and the set of companies required to comply with the rules was narrowed.

But critics argue that the bill still burdens startups with onerous and sometimes unrealistic requirements. On Monday, U.S. Representatives Anna Eshoo and Zoe Lofgren wrote in a letter to California Assembly Speaker Robert Rivas that there are "still significant problems with the underlying construction of the bill" and instead called for "focusing on federal regulations to govern the physical tools needed to create these physical threats."

Despite criticism from leading AI scientists such as Fei-Fei Li of Stanford University and Andrew Ng, who led AI projects at Alphabet's Google and China's Baidu, the bill has found support from some of the "godfathers of AI," such as Geoffrey Hinton of the University of Toronto and Yoshua Bengio, a computer science professor at the University of Montreal.

"The bottom line is that SB 1047 is a highly reasonable bill that asks large AI labs to do what they've already committed to doing: testing their large models for catastrophic safety risks," Wiener wrote on Wednesday.
