OpenAI's Superalignment team, responsible for developing ways to govern and control "superintelligent" AI systems, was promised 20% of the company's compute resources, according to a person on the team. But requests for even a fraction of that compute were often denied, blocking the team from doing its work.
That issue, among others, pushed several team members to resign this week, including co-lead Jan Leike, a former DeepMind researcher who, while at OpenAI, helped develop ChatGPT, GPT-4, and ChatGPT's predecessor, InstructGPT.
Leike went public with some of the reasons for his resignation on Friday morning. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Leike wrote in a series of posts on X. Much more of the company's bandwidth, he argued, "must be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on the right trajectory to get there."
OpenAI did not immediately respond to a request for comment about the resources promised and allocated to the team.
OpenAI formed the Superalignment team last July, led by Leike and OpenAI co-founder Ilya Sutskever, who also left the company this week. It had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within the next four years. Joined by scientists and engineers from OpenAI's previous alignment division, as well as researchers from other organizations across the company, the team was expected to contribute research informing the safety of both in-house and non-OpenAI models, including through initiatives such as a research grant program that would solicit work from the broader AI industry and share results with it.
The Superalignment team did manage to publish a body of safety research and funnel millions of dollars in grants to outside researchers. But as product launches increasingly consumed OpenAI leadership's bandwidth, the team found itself having to fight for more upfront investment, investment it believed was crucial to the company's stated mission of developing superintelligent AI for the benefit of all humanity.
“Building machines more intelligent than humans is an inherently dangerous endeavor,” Leike continued. “But over the past years, safety culture and processes have taken a back seat to shiny products.”
Sutskever's dispute with OpenAI CEO Sam Altman added a further distraction.
Sutskever and OpenAI's previous board moved to abruptly fire Altman late last year over concerns that Altman had not been “consistently candid” with board members. Under pressure from OpenAI's investors, including Microsoft, and many of the company's own employees, Altman was eventually reinstated, much of the board resigned, and Sutskever reportedly never returned to work.
According to the source, Sutskever was instrumental to the Superalignment team, not only contributing research but also acting as a bridge to other divisions within OpenAI. He served as an ambassador of sorts as well, impressing the importance of the team's work on key OpenAI decision makers.
Following Leike's and Sutskever's departures, John Schulman, another OpenAI co-founder, has moved to lead the kind of work the Superalignment team was doing. But there will no longer be a dedicated team; instead, it will be a loosely associated group of researchers embedded in divisions throughout the company. An OpenAI spokesperson described it as "deeper integration [of the team]."
The fear is that, as a result, OpenAI's AI development will not be as safety-focused as it could have been.