Silicon Valley in uproar over California AI safety law

Artificial intelligence heavyweights in California are protesting against a proposed state bill that would force technology companies to adhere to strict safety rules, including creating a “kill switch” to shut down their powerful AI models, in a growing dispute over regulatory control of the cutting-edge technology.

California lawmakers are weighing proposals that would impose new restrictions on technology companies operating in the state, including the three largest AI start-ups OpenAI, Anthropic and Cohere, as well as large language models run by major technology companies such as Meta.

The bill, which passed the state Senate last month and is expected to be voted on by the state Assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “dangerous capabilities,” such as creating biological or nuclear weapons or aiding cyber security attacks.

Under the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, developers would be required to report on their safety testing and introduce a so-called kill switch to shut down their models.

But the bill has become the subject of a fierce backlash in many Silicon Valley circles over claims that it would force AI start-ups to leave the state and prevent platforms such as Meta from operating open-source models.

“If someone wanted to come up with regulations to stifle innovation, one could hardly do better,” said Andrew Ng, a renowned computer scientist who led AI projects at Alphabet's Google and China's Baidu and sits on Amazon's board. “It creates massive liabilities for science-fiction risks, and thereby stokes fear in anyone who dares to innovate.”

AI's rapid growth and huge potential have raised concerns about the technology's safety. Billionaire Elon Musk, an early investor in ChatGPT maker OpenAI, last year called it an “existential threat” to humanity. This week, a group of current and former OpenAI employees published an open letter warning that “frontier AI companies” are not adequately overseen by governments and pose “serious risks” to humanity.

The California bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, who serves as safety adviser to Musk's AI start-up xAI. CAIS has close ties to the effective altruism movement, made famous by jailed cryptocurrency executive Sam Bankman-Fried.

Democratic Senator Scott Wiener, who introduced the bill, said: “Fundamentally, I want AI to succeed and innovation to continue, but we should try to get ahead of any safety risks.”

He added that it was a “light-touch bill … that only requires developers training large models to conduct basic safety evaluations to identify major risks and take reasonable steps to mitigate those risks.”

However, critics have accused Wiener of being overly prescriptive and of imposing a costly compliance burden on developers, especially at smaller AI companies. Opponents also claim the bill focuses on hypothetical risks that add an “extreme” liability risk for founders.

Among the most intense criticisms is that the bill would harm open-source AI models – where developers make the source code freely available to the public so others can build on it – such as Meta's flagship LLM, Llama. The bill would make developers of open models potentially liable for bad actors who manipulate their models to cause harm.

Arun Rao, senior product manager for generative AI at Meta, said in a post on X last week that the bill was “unworkable” and would “end open source in [California].”

“The net fiscal effect of destroying the AI industry and driving companies out could run into the billions, as both companies and highly paid workers leave,” he added.

Wiener said of the criticism: “This is the technology sector, which doesn't like regulation, so I'm not at all surprised that there is pushback.”

Some of the responses were “not entirely accurate,” he said, adding that he planned to make amendments to the bill to clarify its scope.

The proposed amendments state that open-source developers will not be held liable for models “that undergo heavy fine-tuning.” This means that if an open-source model is subsequently customised enough by third parties, the group that created the original model is no longer responsible for it. The amendments also state that the “kill switch” requirement does not apply to open-source models, he said.

Another amendment states that the bill will only apply to large models “that cost at least $100 million to train” and would therefore not affect most smaller start-ups.

“These competitive pressures are acting on these AI organisations and essentially give them an incentive to compromise on safety,” said Hendrycks of CAIS, adding that the bill was “realistic and reasonable” and that most people wanted “some basic oversight.”

But one leading Silicon Valley venture capitalist said they had already received inquiries from company founders asking whether the potential law would require them to leave the state.

“My advice to anyone who asks is that we stay and fight,” the person said. “But this will weaken open source and the start-up ecosystem. I think some founders will choose to leave.”

Given the technology's huge surge in popularity, governments around the world have moved to regulate AI over the past year.

US President Joe Biden issued an executive order in October that aims to set new standards for AI safety and national security, protect citizens from AI privacy risks and combat algorithmic discrimination. The UK government outlined plans in April to draft new legislation to regulate AI.

Critics have questioned the speed with which the California AI bill was drafted under the auspices of CAIS and passed through the state Senate.

The majority of CAIS's funding comes from Open Philanthropy, a San Francisco-based charity with roots in the effective altruism movement. It awarded about $9 million in grants to CAIS between 2022 and 2023, in line with its “focus area of potential risks from advanced artificial intelligence.” The CAIS Action Fund, an arm of the non-profit created last year, registered its first lobbyists in Washington DC in 2023 and has spent about $30,000 on lobbying this year.

Wiener has received campaign funding over multiple terms from wealthy venture capitalist Ron Conway, managing partner of SV Angel and an investor in AI start-ups.

Rayid Ghani, a professor of artificial intelligence at Carnegie Mellon University's Heinz College, said there had been “some overreaction” to the bill, adding that any legislation should focus on specific use cases of the technology rather than regulating the development of models.
