OpenAI CEO Sam Altman says an international agency should be established to monitor and ensure the safety of powerful future AI models.
In an interview on the All-In podcast, Altman said we will soon see frontier AI models that are significantly more powerful, and potentially more dangerous.
Altman said: “I believe there will come a time in the not-too-distant future when cutting-edge AI systems will be capable of causing significant global harm.”
Authorities in the US and EU have each passed laws regulating AI, but Altman doesn’t believe that rigid legislation can keep up with the pace of AI advancement. He is also critical of individual US states attempting to regulate AI independently.
Speaking of the advanced AI systems he expects, Altman said: “And for those systems we have the same sort of global oversight as nuclear weapons or synthetic bio or things that can have really very negative impacts, well beyond the scope of one country.

“I would like to see some sort of international agency that looks at the highest-performing systems and ensures adequate safety testing.”
Altman said such international oversight is crucial to prevent a superintelligent AI from “escaping and recursively improving.”
Altman acknowledged that while oversight of powerful AI models is crucial, over-regulation could hinder progress.
His proposed approach is analogous to international nuclear regulation: the International Atomic Energy Agency has oversight of member states that have access to significant quantities of nuclear material.
“If we just look at models trained on computers that cost more than 10 billion or more than 100 billion or whatever, I’d be fine with that. There could be a line that would be fine. And I don’t think this puts a regulatory burden on startups,” he explained.
Altman explained why he thought the agency approach was better than attempting to legislate AI.
“The reason I pushed for an agency-based approach was for the big-picture stuff, rather than… write it down in law,… in 12 months it’s all going to be written wrong… And I don’t think, even if these people were true world experts, that they could get it right looking at 12 or 24 months,” he said.
When will GPT-5 be released?
When asked about a GPT-5 release date, Altman was predictably noncommittal, but suggested the release won’t happen the way we imagine.
“We take our time when releasing large models… Also, I don't know if we'll call it GPT-5,” he said.
Altman pointed to the iterative improvements OpenAI has made to GPT-4, saying this better illustrates how the company will introduce future improvements.
So it seems less likely that a “GPT-5” will be released, and more likely that GPT-4 will be expanded with additional features.
We will have to wait for OpenAI’s update announcements later today to see if we get any further clues about what changes to ChatGPT we can expect.
If you would like to hear the full interview, you can listen to it here.