After the coup attempt in November, internal disagreements persist at OpenAI

OpenAI is struggling to contain internal disputes over its leadership and safety, as the divisions that led to last year's attempted coup against CEO Sam Altman resurface.

Six months after Altman's aborted firing, a series of prominent resignations suggests that divisions remain inside OpenAI between those who want to advance AI quickly and those who prefer a more cautious approach, current and former employees say.

Helen Toner, one of the former OpenAI board members who tried to oust Altman in November, spoke publicly for the first time this week, saying he misled the board “on multiple occasions” about his safety procedures.

“For years, Sam made it really difficult for the board to do its job by withholding information, misrepresenting things that were happening at the company and, in some cases, outright lying to the board,” she said on the TED AI Show podcast.

The most high-profile of several departures in recent weeks was that of OpenAI co-founder Ilya Sutskever. A person familiar with his resignation described him as caught up in Altman's “conflicting promises” before last year's leadership upheaval.

In November, OpenAI's directors – who at the time included Toner and Sutskever – abruptly ousted Altman as CEO, shocking investors and employees. Days later, he returned under a new board that no longer included Toner and Sutskever.

“We take our role as the board of a nonprofit organization incredibly seriously,” Toner told the Financial Times. The decision to fire Altman “took an enormous amount of time and consideration,” she added.

Sutskever said at the time of his departure that he was “confident” that under its current leadership, which includes Altman, OpenAI would develop artificial general intelligence – AI as intelligent as humans – “that is both safe and beneficial.”

However, the November affair does not appear to have resolved the underlying tensions inside OpenAI that led to Altman's ouster.

Another recent departure, Jan Leike, who led OpenAI's efforts to govern and steer super-powerful AI tools and worked closely with Sutskever, announced his resignation this month, saying his differences with company leadership had “reached a breaking point” as “safety culture and processes have taken a back seat to shiny products.” He has since joined OpenAI competitor Anthropic.

The turmoil at OpenAI – which has bubbled back to the surface despite calls from the overwhelming majority of employees for Altman to be reinstated as CEO in November – comes as the company prepares to launch a new generation of its AI software. It is also discussing raising capital to fund its expansion, people familiar with the talks said.

Altman's focus on shipping products rather than publishing research led to the groundbreaking chatbot ChatGPT and sparked a wave of investment in AI across Silicon Valley. After receiving more than $13 billion in backing from Microsoft, OpenAI is expected to surpass $2 billion in revenue this year.

However, this focus on commercialization is at odds with those within the company who would prefer to prioritize safety, fearing that OpenAI could rush into creating a “superintelligence” that it cannot properly control.

Gretchen Krueger, an AI policy researcher who also left the company this month, raised several concerns about OpenAI's handling of a technology that could have far-reaching economic and public consequences.

“We (at OpenAI) need to do more to improve foundational things,” she said in a post on X, “like decision-making processes, accountability, transparency, documentation, policy enforcement, the care with which we use our own technology, and mitigating impacts on inequality, rights, and the environment.”

Altman responded to Leike's departure by saying his former employee was “right, we have a lot of work to do; we are committed to doing it.” This week, OpenAI announced the creation of a new safety committee to oversee its AI systems. Altman will serve on the committee alongside other board members.

“(Even) with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives,” Toner wrote with Tasha McCauley, who also served on OpenAI's board until November 2023, in an opinion piece for The Economist published days before OpenAI's new board was announced.

Responding to Toner's comments, OpenAI Chairman Bret Taylor said the board had worked with an outside law firm to review the events of last November and concluded that “the prior board's decision was not based on concerns about OpenAI's product safety, pace of development, finances, or its statements to investors, customers or business partners.”

“Our focus remains on moving forward and pursuing OpenAI's mission to ensure that AGI benefits all of humanity,” he said.

A person familiar with the company said that since the November turmoil, Microsoft, OpenAI's biggest backer, has put more pressure on the company to prioritize commercial products, heightening tensions with those who would rather focus on scientific research.

Many within the company still wanted to focus on the long-term goal of AGI, but internal divisions and an unclear strategy from OpenAI's leadership had demotivated employees, the source said.

“We pride ourselves on developing and launching models that are industry-leading in both performance and safety,” OpenAI said. “We work hard to maintain this balance and believe it is critical to engage in robust debate as the technology advances.”

Despite the criticism sparked by the recent internal turmoil, OpenAI continues to work on developing more advanced systems. This week, the company announced that it had recently begun training the successor to GPT-4, the large AI model that powers ChatGPT.

Anna Makanju, vice-president of global affairs at OpenAI, said policymakers had contacted her team about the recent departures to find out whether the company was “serious” about safety.

She said safety is “something that is the responsibility of many teams at OpenAI.”

“It is quite likely that (AI) will bring even greater change in the future,” she said. “Certainly there will be a lot of disagreement about exactly what is the right way to prepare society (and) how to regulate it.”
