OpenAI CEO Sam Altman is leaving the internal committee that OpenAI created in May to oversee "critical" safety and security decisions related to the company's projects and operations.
In a blog post today, OpenAI announced that the committee, the Safety and Security Committee, will become an "independent" oversight group chaired by Carnegie Mellon professor Zico Kolter. Its members include Quora CEO Adam D'Angelo, retired U.S. General Paul Nakasone, and former Sony executive Nicole Seligman, all of whom already sit on OpenAI's board of directors.
OpenAI noted in its post that the committee conducted a safety review of o1, OpenAI's latest AI model, after Altman stepped down. The group will continue to receive regular briefings from OpenAI's safety and security teams, the company said, and retains the authority to delay releases until safety concerns are addressed.
"As part of its work, the Safety and Security Committee … will continue to receive regular reports on technical assessments for current and future models, as well as reports on ongoing post-release monitoring," OpenAI wrote in the post. "We are building on our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches."
Altman's departure from the Safety and Security Committee comes after five U.S. senators raised questions about OpenAI's policies in a letter to Altman this summer. Nearly half of the OpenAI staff that once focused on AI's long-term risks have left, and former OpenAI researchers have accused Altman of opposing "real" AI regulation and instead championing policies that advance OpenAI's corporate goals.
To that point, OpenAI has dramatically increased its federal lobbying spending, budgeting $800,000 for the first six months of 2024, up from $260,000 for all of last year. Altman also joined the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board this spring, which makes recommendations on the development and deployment of AI across the country's critical infrastructure.
Even with Altman gone, there is little to suggest that the Safety and Security Committee would make difficult decisions that seriously impact OpenAI's commercial roadmap. Tellingly, OpenAI said in May that it would look to address "justified criticism" of its work through the committee, although "justified criticism" is, of course, in the eye of the beholder.
In an op-ed for The Economist in May, former OpenAI board members Helen Toner and Tasha McCauley argued that OpenAI in its current form cannot be relied upon to hold itself accountable. "Based on our experience, we believe that self-governance cannot reliably withstand the pressures of profit incentives," they wrote.
And OpenAI’s profit incentives are growing.
The company is rumored to be in the midst of raising more than $6.5 billion in a funding round that would value OpenAI at over $150 billion. To close the deal, OpenAI may reportedly abandon its hybrid nonprofit corporate structure, which was designed to cap investor returns, in part to ensure OpenAI stays true to its founding mission: developing artificial general intelligence that "benefits all of humanity."