The significant risks that AI poses to global security are becoming increasingly clear. That's partly why British Prime Minister Rishi Sunak is hosting other world leaders on November 1 and 2 for the AI Safety Summit at Bletchley Park, the famous site of World War II code breaking. But as AI technology advances at an alarming rate, the real threat may come from governments themselves.
The track record of AI development over the past 20 years provides plenty of evidence of governments around the world misusing the technology. This includes excessive surveillance practices and the use of AI to spread disinformation.
Although the focus of late has been on private companies developing AI products, governments are not the impartial arbiters they might appear to be at this summit. Instead, they have played, and will continue to play, an equally integral role in the very development of AI.
Militarizing AI
There have been repeated reports that the leading technological nations are entering into an AI arms race. In reality, no single state started this race. Its development has been complex, and many groups, inside and outside of government, have played a role.
During the Cold War, U.S. intelligence agencies became interested in using artificial intelligence for surveillance, nuclear defense, and the automated interrogation of spies. It is therefore not surprising that the integration of AI into military capabilities has been progressing rapidly in recent years in other countries, such as the United Kingdom.
Automated technologies developed for use in the war on terror have contributed to the development of powerful AI-based military capabilities, including AI-powered drones (unmanned aerial vehicles) deployed in current conflict zones.
Russian President Vladimir Putin has declared that the country that leads in AI technology will rule the world. China has likewise declared its own intention to become an AI superpower.
Surveillance states
The other big concern here is the use of AI by governments to monitor their own societies. As governments perceive growing domestic security threats, including from terrorism, they are increasingly deploying AI domestically to bolster state security.
In China, this has been taken to extreme levels, with facial recognition technologies, social media algorithms, and internet censorship used to control and monitor the population, including in Xinjiang, where AI is an integral part of the oppression of the Uyghur population.
But the West's track record isn't particularly good either. In 2013, it was revealed that the US government had developed autonomous tools to collect and search vast amounts of data about people's internet usage, ostensibly to counter terrorism. It was also reported that the British government had access to these tools. As AI continues to develop, its use for surveillance by governments is a major concern for privacy campaigners.
Borders are now policed by algorithms and facial recognition technologies, which are increasingly also being used by domestic police forces. There are broader concerns, too, about "predictive policing": the use of algorithms to predict crime hotspots (often in ethnic minority communities), which are then subjected to additional policing effort.
These recent and current trends suggest that governments may be unable to resist the temptation to use increasingly sophisticated AI in ways that raise surveillance concerns.
Governing AI?
Whatever the UK government's good intentions in convening its safety summit and seeking to become a world leader in the safe and responsible use of AI, the technology will require serious and sustained effort at the international level for any kind of regulation to be effective.
Governance mechanisms are beginning to emerge, with the US and EU recently introducing significant new regulation of AI.
But governing AI at a global level is fraught with difficulties. There will, of course, be states that sign up to AI regulation and then ignore it in practice.
Western governments also face the argument that overly strict regulation of AI would allow authoritarian states to realize their ambition of leading the technology. But when companies rush to bring new products to market, they risk unleashing systems that could have huge unforeseen consequences for society. Just consider how advanced text-generating AI such as ChatGPT could amplify misinformation and propaganda.
And not even the developers themselves fully understand how advanced algorithms work. Piercing this "black box" of AI technology will require sophisticated and sustained investment in testing and verification capabilities by national authorities. But the skills and the powers to do this do not currently exist.
The politics of fear
We have become used to hearing in the news that a superintelligent form of AI threatens human civilization. But there are reasons to be wary of such thinking.
As my own research highlights, the "securitization" of AI, that is, portraying the technology as an existential threat, could be used as an excuse by governments to seize power, to abuse it themselves, or to adopt narrow, self-serving approaches to AI that neglect the potential benefits it could bring to all people.
Rishi Sunak's AI summit would be a good opportunity to emphasize that governments should keep the politics of fear out of their efforts to bring AI under control.