Whistleblowers criticize OpenAI's resistance to AI security law

Two former OpenAI researchers wrote a letter in response to OpenAI's opposition to California's controversial AI safety bill SB 1047.

The bill is currently moving through the state legislative process and, if it passes the full Senate by the end of the month, will be sent to Governor Gavin Newsom for his signature.

The bill would add safety controls for AI models that cost more than $100 million to train, as well as a “kill switch” in case a model misbehaves. Former OpenAI employees and whistleblowers William Saunders and Daniel Kokotajlo say they are “disappointed but not surprised” by OpenAI's opposition to the bill.

OpenAI's letter to the bill's author, Senator Scott Wiener, stated that while the company supports the intent behind the law, federal regulation of AI development would be a better option.

According to OpenAI, national security implications, such as potential chemical, biological, radiological, and nuclear harms, are “best managed by the federal government and its agencies.”

The letter states: “If states attempt to compete with the federal government for the scarce talent and resources available, it will dilute the already limited expertise of each agency, leading to less effective and more fragmented policies for guarding against national security risks and significant harm.”

The letter also cited Representative Zoe Lofgren's concerns that if the bill passes, “there is a real risk that companies will choose to establish themselves in other jurisdictions or simply not release their models in California.”

OpenAI whistleblowers’ response

The former OpenAI employees don’t accept OpenAI's reasoning. They stated: “We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company develops. But we left OpenAI because we lost trust that it would develop its AI systems safely, honestly, and responsibly.”

The authors of the letter were also behind the “Right to Warn” letter published earlier this year.

In their letter, they justify their support for SB 1047 by saying: “Developing groundbreaking AI models without adequate safeguards poses foreseeable risks of catastrophic harm to the public.”

OpenAI has seen an exodus of AI safety researchers, but the company's models haven’t yet produced any of the doomsday scenarios many feared. The whistleblowers say, “This is only because truly dangerous systems have not been built yet, not because companies have safety processes in place that could handle truly dangerous systems.”

They also don't believe that OpenAI CEO Sam Altman is committed to AI safety. “Sam Altman, our former boss, has repeatedly called for AI regulation. Now that actual regulation is on the table, he opposes it,” they explained.

OpenAI is not the only company to have opposed the bill. Anthropic also had concerns, but now appears to support it following recent amendments.

Anthropic CEO Dario Amodei said in his letter to California Governor Gavin Newsom on August 21: “In our assessment, the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs.

“However, we are not certain of this, and there are still some aspects of the bill that seem concerning or ambiguous to us… Our initial concerns that the bill might hinder innovation due to the fast-moving nature of the field have been greatly reduced in the amended version.”

If SB 1047 becomes law, it could force companies like OpenAI to devote significantly more resources to AI safety, but it could also lead to an exodus of technology companies from Silicon Valley.
