
OpenAI guarantees the US AI Safety Institute early access to its next model

OpenAI is working with the US AI Safety Institute, a federal body whose goal is to evaluate and manage risks in AI platforms, on an agreement that would provide early access to its next major generative AI model for safety testing, according to CEO Sam Altman.

The announcement, which Altman made late Thursday evening in a post on X, was light on detail. But it, together with a similar deal struck in June with the UK's AI Safety Institute, appears intended to counter the narrative that OpenAI has deprioritized AI safety work in favor of developing more capable and powerful generative AI technologies.

In May, OpenAI disbanded a division dedicated to developing controls to prevent "superintelligent" AI systems from going rogue. Reports, including ours, suggested that OpenAI had neglected the team's safety research in favor of launching new products, ultimately prompting the resignations of the team's two co-leads: Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who started his own safety-focused AI company, Safe Superintelligence Inc.).

In response to a growing chorus of critics, OpenAI announced that it would eliminate the restrictive non-disparagement clauses that implicitly discouraged whistleblowing, establish a safety commission, and dedicate 20% of its computing power to safety research. (The disbanded safety team had been promised 20% of OpenAI's computing power for its work, but never received that amount.) Altman reaffirmed the 20% pledge and confirmed that OpenAI had voided the non-disparagement provisions for new and existing employees in May.

However, these measures have done little to appease some observers, especially after OpenAI staffed the safety commission entirely with company insiders, Altman among them, and more recently reassigned a senior AI safety executive to another part of the organization.

Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI's policies in a recent letter to Altman. Jason Kwon, OpenAI's chief strategy officer, replied to the letter, writing that OpenAI is "committed to implementing rigorous safety protocols at every stage of our process."

The timing of the agreement between OpenAI and the US AI Safety Institute seems slightly suspicious given that earlier this week the company endorsed the Future of AI Innovation Act, a Senate bill that would authorize the Safety Institute as the executive body that sets standards and guidelines for AI models. The combined moves could be perceived as an attempt at regulatory capture, or at least as OpenAI trying to exert influence over federal AI policymaking.

Not for nothing, Altman sits on the Department of Homeland Security's Artificial Intelligence Safety and Security Board, which makes recommendations for the "secure development and deployment of AI" in the United States' critical infrastructure. And OpenAI has dramatically ramped up its federal lobbying spending this year, laying out $800,000 in the first six months of 2024, up from $260,000 in all of 2023.

The US AI Safety Institute, housed within the Commerce Department's National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic as well as major technology firms such as Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on the measures outlined in President Joe Biden's October AI executive order, including developing guidelines for AI red teaming, capability evaluations, risk management, safety and security, and watermarking of synthetic content.
