Senate investigates OpenAI security and governance after whistleblower allegations

OpenAI is at the center of a Senate investigation following allegations that safety testing was rushed.

Five senators, led by Brian Schatz (Democrat, Hawaii), called on the company to provide detailed information about its safety practices and employee agreements.

The investigation follows a Washington Post report suggesting that OpenAI may have compromised on safety protocols in its rush to release GPT-4 Omni, its latest AI model.

Meanwhile, whistleblowers, including senior researchers from OpenAI's disbanded Superalignment team, have raised concerns about restrictive non-disclosure agreements (NDAs) imposed on employees.

Letter from Senators to OpenAI

In a strongly worded letter to OpenAI CEO Sam Altman, the five senators demanded detailed information about the company's safety practices and its treatment of employees.

The letter raises concerns about OpenAI’s commitment to responsible AI development and its internal policies.

“Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety of its systems,” the senators write.

They go on to question the “integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies.”

The senators have set an August 13 deadline for OpenAI to respond to a series of targeted questions, including whether the company will honor its commitment to dedicate 20 percent of its computing resources to AI safety research and whether it will allow independent experts to test its systems before release.

On the topic of restrictive employment contracts, the letter asks OpenAI to confirm that it “will not enforce permanent non-disparagement agreements for current and former employees” and to commit to “removing any other provisions from employment contracts that could be used to punish employees who publicly raise concerns about the company's practices.”

OpenAI later took to X to reassure the public of its commitment to safety.

“For AI to benefit everyone, we must first develop AI that is helpful and safe. We want to share some updates on how we prioritize safety in our work,” the company said in a recent post.

OpenAI highlighted its Preparedness Framework, which is designed to evaluate and protect against the risks posed by increasingly powerful AI models.

“We will not release a new model if it crosses a medium risk threshold until we are confident we can do so safely,” the company said.

OpenAI also addressed the allegations of restrictive employee agreements, stating: “Our whistleblower policy protects employees' rights to disclose protected information. We also believe that rigorous debate about this technology is important, and we have changed our exit process to remove non-disparagement terms.”

The company also cited recent steps to strengthen its security measures.

In May, OpenAI’s board established a new safety committee, which includes retired US Army General Paul Nakasone, a leading cybersecurity expert.

OpenAI remains firm on the benefits of AI. “We believe that frontier AI models can bring great benefits to society,” the company said, while acknowledging that vigilance and safety measures are still needed.

Despite some progress, the chances of passing comprehensive AI legislation this year are slim as attention shifts to the 2024 elections.

With no new legislation from Congress, the White House is relying largely on voluntary commitments from AI companies to ensure they develop safe and trustworthy AI systems.
