On the second day of the AI Safety Summit, British Prime Minister Rishi Sunak sought to make clear that AI developers will allow governments to evaluate their tools before they are launched.
Sunak said the outcomes of the summit “will tip the scales in favor of humanity” and revealed that industry leaders including Meta, Google DeepMind and OpenAI have agreed to have their AI models tested before release – something they had already committed to in a voluntary framework recently developed by the US.
Among the commitments made on the second day of the summit was the establishment of a new body, the AI Safety Institute.
LIVE: Statement from Prime Minister Rishi Sunak from the AI Safety Summit at Bletchley Park https://t.co/YLn6O8FO7f
Another headline of the second day was the announcement of an upcoming “State of AI Science” report led by “AI godfather” Yoshua Bengio.
“This idea is inspired by the way the Intergovernmental Panel on Climate Change was set up to reach international scientific consensus,” Sunak said.
The agreed framework calls for comprehensive safety assessments of AI models both before and after deployment, emphasizing collaborative testing efforts involving governments, particularly in areas that affect national security and societal well-being.
The industry's commitment to this cause was underscored by remarks from Demis Hassabis, CEO of Google DeepMind, who stated: “AI will help solve some of the most important challenges of our time, from curing diseases to tackling the climate crisis. But it will also bring new challenges for the world, and we must make sure the technology is built and deployed safely. Achieving this requires a concerted effort from governments, industry and civil society to inform and develop robust safety testing and evaluations. I’m pleased that the UK is launching the AI Safety Institute to accelerate progress on this essential work.”
China is not taking part in these multilateral initiatives. However, the pact has the support of the EU and leading countries such as the US, UK, Japan, France and Germany, and is backed by tech giants such as Google, Amazon, Microsoft and Meta.
While the Prime Minister faced questions about the voluntary nature of the agreements and the lack of binding legislation, he insisted that AI required rapid action and suggested that “mandatory requirements” for AI firms may eventually be inevitable.
Together with yesterday's Bletchley Declaration, the summit sought to spur action around AI, although critics have largely dismissed this as symbolic rather than actionable.
Before going live with Sunak, Musk posted a mocking cartoon on X featuring characters representing world powers discussing AI risks while rubbing their hands over the potential for dominance.
It is certainly debatable, but it has to be said that it is a witty jab at a summit inherently built on promises. Still, it would be reductive to claim the summit achieves nothing, even though the real substance of the conversations remains hypothetical.
Sigh pic.twitter.com/jDDTkewbDL
At the conclusion of the summit, the UK government proudly released the Bletchley Declaration, signed by 28 governments including the UK, US and EU, promising a collaborative approach to AI safety standards reminiscent of climate crisis agreements.
Overall, Rishi Sunak's diplomatic efforts at the AI Summit were hailed as a success, establishing the UK as a leader in the pursuit of global AI safety and regulation and setting the stage for France to host the next summit in 2024.
Second day of the summit
The second day of the summit was rounded off with a conversation between Sunak and Musk. Here are some of the key events, listed from the most recent (first) to the earliest (last).
Watch the 50-minute stream here.
- Musk argued that a physical “off switch” could shut down AI in the event of catastrophic problems. “What if someday they get a software update and suddenly they’re not so friendly anymore?” Musk told Sunak.
- When Sunak asks why Musk recently changed Twitter's content moderation system, Musk argues that all content moderators are biased. He asks, “How do we achieve a consensus approach to truth?” and says his goal is to arrive at a “purer truth.” Musk claims his new moderation system simply provides more context and transparency, explaining: “Everything is open source. They can look at all the data and see if the system has been tampered with and suggest improvements… The truth pays.”
- Musk predicts that AI robots could become real friends to humans in the future. He argues that by reading extensively they will have detailed memories and knowledge, saying, “You could talk to it every day; you would actually have a great friend.” That, he says, will actually be a real thing.
- Musk makes the bold prediction that AI will become so advanced that there will be “no need for work” for humans. He says: “You can have a job if you want one for personal satisfaction, but the AI can do everything.” Musk says this could be positive or negative and could make the search for meaning and purpose harder. But he also argues that it could deliver a “universal high income” and produce the best tutors. Overall, he sees many advantages for education, productivity and the automation of dangerous jobs.
- As Sunak notes he has been criticized for inviting China, Musk praises the decision as “courageous.” Musk argues that cooperation with China on AI safety is essential and says its participation is a very positive sign.
- Musk tells Sunak he believes governments must act as “referees” to ensure public safety with AI while enabling innovation. He reiterates his view that AI will be “a force for good” overall, a softer tone than some of his previous statements that reflects his shifting views on AI.
- Ahead of their conversation, Musk expressed optimism about AI's potential but warned that it could pose risks, invoking the analogy of the “magic genie problem,” in which wishes often go awry.
- During his press conference, Sunak defended the steps governments are taking to address the safety risks of AI, saying they are doing the “right and responsible thing” to protect the public, even if the risks are still uncertain.
- A new survey shows that only 15% of people trust the UK government's ability to regulate AI effectively, while 29% express no trust at all.
- Asked whether AI could pose an existential threat, Sunak says there is a plausible case that it could pose risks on the scale of a nuclear war or a pandemic. He argues that leaders therefore have a duty to take protective measures.
- Science Minister Donelan says the AI risk she is most worried about is a “Terminator scenario” in which machines become uncontrollable. She sees this as lower probability but the highest impact.
Further analysis of the summit will follow in the next few days. Overall, the impression is of a symbolically significant event with enormous potential.
But that potential cannot easily be translated into legislation and, ultimately, into action. Only time will tell.