At the international conference “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation” in Vienna, calls were made to regulate the use of AI in autonomous weapons systems (AWS) while the technology is still in its infancy.
Over the centuries, advancements in technology have driven dramatic changes in how wars are fought. Developments like steel, gunpowder, and eventually the atomic bomb all found their initial applications in warfare before making their way into civilian use.
AI has bucked this trend. Its initial applications have largely been industrial, but defense forces have quickly seen the potential for AI to remodel the battlefield once more.
Referencing the first atomic bomb, Austrian Foreign Minister Alexander Schallenberg told the attendees from 143 countries that the world is facing an “Oppenheimer Moment” in deciding whether and how AI should be used in autonomous weapons.
Humanity at the crossroads: Autonomous weapons systems will soon fill the world’s battlefields. We need to take action & agree on int‘l rules to ensure that decisions over life or death are never taken by machines!
➡️ Kicking off #AWS2024Vienna with 900+ guests from 142 states pic.twitter.com/AtwumLu4OP
The impetus behind the urgent need for regulations was not only the anticipation of potential future threats, but also AI’s use in current conflicts.
Autonomous drones are being used by both sides in the war in Ukraine. Israeli forces are using AI in multiple defense applications, including allegedly using AI to identify human targets in the war in Gaza.
Schallenberg said “Autonomous weapons systems will soon fill the world’s battlefields,” warning that now was the “time to agree on international rules and norms to ensure human control”.
He urged limits on the autonomy of AI weapons, saying, “At least let us make sure that the most profound and far-reaching decision — who lives and who dies — remains in the hands of humans and not of machines.”
A statement from the Austrian government said, “Autonomous weapons systems (AWS) raise profound questions from a legal, ethical, humanitarian and security perspective. Humanity is at a crossroads and must come together to address the fundamental challenge of regulating these weapons.”
Defense dollars vs humanity
Ongoing conflicts have seen defense budgets increase globally, with share prices of several AI-powered defense tech firms surging in response. AWS technologies may simply be too lucrative to ban.
Jaan Tallinn, an early investor in Google’s DeepMind Technologies, said that “Silicon Valley’s incentives might not be aligned with the rest of humanity.”
In his keynote address at the conference, Tallinn said, “I implore you to be wary of those who promise precision and predictability in systems using AI. We have already seen AI making selection errors in ways both large and small – from misrecognizing a referee’s bald head as a football, to pedestrian deaths caused by self-driving cars unable to recognize jaywalking.”
“We must be extremely cautious about relying on the accuracy of these systems, whether in the military or civilian sectors. Accidental errors caused by autonomous weapons have the potential to spark the sorts of wars that should never be waged.”
Tallinn pointed out that designing more reliable AI weapons isn’t the answer. He explained that, “even if autonomous weapons become able to perfectly distinguish between humans, they will make it significantly easier to carry out genocides and targeted killings that seek specific human characteristics.”
“Stepping out of an arms race requires courage and foresight. We have done it before, and we can do it again.”
From the opening of proceedings at the historic Vienna Conference on Autonomous Weapons #AWS2024 yesterday, here’s FLI co-founder Jaan Tallinn’s full keynote speech ⬇️ pic.twitter.com/wFYxWpDl1S
In a final statement to be sent to the UN Secretary General, the group affirmed its “strong commitment to work with urgency and with all interested stakeholders for an international legal instrument to regulate autonomous weapons systems”.
The statement added, “We have a responsibility to act and to put in place the rules that we need to protect humanity… Human control must prevail in the use of force.”
More than 115 UN member states agree on the need for binding regulations governing AWS, but avoiding a veto from Russia, China, or the US seems unlikely.
Anthony Aguirre, cosmologist and co-founder of the Future of Life Institute, summed up the situation by saying, “The future of slaughterbots is here.”