Last year, the first global AI Safety Summit was held at the historic Bletchley Park in the UK, attracting worldwide attention.
But as the six-month follow-up summit approaches, scheduled for May 21 and 22 and hosted largely virtually by the UK and South Korea, signs suggest a reality check is at hand.
Summit organizers hope to continue the momentum created at Bletchley Park, where officials from more than 25 governments signed a joint declaration committing to shared oversight of AI.
While nobody expected this smaller interim event to match the scale of the first summit, key participants, including DeepMind and Mozilla, are skipping the meeting.
Although the EU has not ruled out its presence at the event, a spokesperson confirmed that its top technology regulators, including Margrethe Vestager, Thierry Breton and Vera Jourova, won't be attending.
The U.S. State Department has confirmed it will send representatives to the gathering in Seoul but didn't specify who.
Meanwhile, the governments of Canada, Brazil and the Netherlands have announced that they will not participate in the event.
The French government has also reportedly postponed the larger annual safety summit until 2025, though this remains unconfirmed.
The challenges of AI have become more complex
Saying that we should protect humanity from catastrophic, extinction-level events is quite easy, considering that the technology is still in its infancy and the actual risk remains low.
On the other hand, meaningful action against deepfakes, environmental damage and copyright violations requires real work that goes beyond pomp and rhetoric.
While we have witnessed the emergence of a patchwork of laws and regulations to govern AI, particularly the EU AI Act, many key questions remain unresolved.
Francine Bennett, interim director of the Ada Lovelace Institute, told Reuters: "The policy discourse around AI has expanded to include other important concerns such as market concentration and environmental impact."
The broader scope of AI safety requires extensive and highly subjective considerations that are not easily addressed in a virtual environment.
Another factor is that geopolitical tensions between the Western powers and China continue to be a thorn in the side of the negotiations.
While the US and China have discussed AI safety in private meetings, other major events like the World Economic Forum have seen frosty interactions between the two powers, including a walkout by the US delegation during a Chinese lecture.
This six-month virtual safety summit will likely reflect on the moderate progress made so far, but strong practical action on key issues remains to be taken.