
The OpenAI board establishes a Safety and Security Committee

The OpenAI Board of Directors announced the creation of a Safety and Security Committee tasked with making recommendations on critical safety and security decisions for all OpenAI projects.

The committee is led by directors Bret Taylor (Chair), Adam D'Angelo, Nicole Seligman and OpenAI CEO Sam Altman.

Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also serve on the committee.

OpenAI's approach to AI safety has drawn criticism both internally and externally. Altman's firing last year was supported by then-board member Ilya Sutskever and others, ostensibly on safety grounds.

Last week, Sutskever and Jan Leike of OpenAI's Superalignment team left the company. Leike specifically cited safety concerns as the reason for his departure, saying the company lets safety “take a back seat to shiny products.”

Yesterday, Leike announced that he would be joining Anthropic to work on oversight and alignment research.

Altman is now not only back as CEO, but also sits on the committee responsible for uncovering safety issues. Former board member Helen Toner's statements about the reasons for Altman's firing raise questions about how transparent he will be about the safety issues the committee uncovers.

According to Toner, the OpenAI board learned about the release of ChatGPT via Twitter.

The committee will use the next 90 days to evaluate and further develop OpenAI's processes and safeguards.

The recommendations will be submitted to OpenAI's board of directors for approval, and the company has committed to publishing the recommendations it adopts.

This push for additional guardrails comes as OpenAI says it has begun training its next frontier model, which will “take us to the next level of capability on our journey to AGI.”

No expected release date for the new model has been given, but training alone will likely take weeks, if not months.

In an update on its safety approach released after the AI Seoul Summit, OpenAI stated: “We won't release a new model if it exceeds a medium risk threshold in our Preparedness Framework until we have implemented sufficient safety interventions to bring the post-mitigation score back to medium.”

The company also said that more than 70 external experts were involved in red teaming GPT-4o before its release.

With 90 days until the committee reports its findings to the board, training only recently begun, and a commitment to extensive red teaming, it looks like we'll have a long wait before we finally get GPT-5.

Or could it be that OpenAI has just started training GPT-6?
