Artificial intelligence (AI) is a label that can cover a huge range of activities related to machines undertaking tasks with or without human intervention. Our understanding of AI technologies is largely shaped by where we encounter them, from facial recognition tools and chatbots to photo editing software and self-driving cars.
If you think of AI, you might think of tech firms, from existing giants such as Google, Meta, Alibaba and Baidu, to new players such as OpenAI, Anthropic and others. Less visible are the world’s governments, which are shaping the landscape of rules in which AI systems will operate.
Since 2016, tech-savvy regions and nations across Europe, Asia-Pacific and North America have been establishing regulations targeting AI technologies. (Australia is lagging behind, still investigating the possibility of such rules.)
Currently, there are more than 1,600 AI policies and strategies globally. The European Union, China, the United States and the United Kingdom have emerged as pivotal figures in shaping the development and governance of AI in the global landscape.
Ramping up AI regulations
AI regulation efforts began to speed up in April 2021, when the EU proposed an initial framework for regulations called the AI Act. These rules aim to set obligations for providers and users, based on the various risks associated with different AI technologies.
As the EU AI Act was pending, China moved forward with proposing its own AI regulations. In Chinese media, policymakers have discussed a desire to be first movers and offer global leadership in both AI development and governance.
Where the EU has taken a comprehensive approach, China has been regulating specific aspects of AI one after another. These have ranged from algorithmic recommendations to deep synthesis or “deepfake” technology and generative AI.
China’s full framework for AI governance will be made up of these policies and others yet to come. The iterative process lets regulators build up their bureaucratic know-how and regulatory capacity, and leaves flexibility to implement new legislation in the face of emerging risks.
A ‘wake-up call’
China’s AI regulation may have been a wake-up call to the US. In April, influential lawmaker Chuck Schumer said his country should “not allow China to lead on innovation or write the rules of the road” for AI.
On October 30 2023, the White House issued an executive order on safe, secure and trustworthy AI. The order attempts to address broader issues of equity and civil rights, while also focusing on specific applications of technology.
Alongside the dominant actors, countries with growing IT sectors, including Japan, Taiwan, Brazil, Italy, Sri Lanka and India, have also sought to implement defensive strategies to mitigate potential risks associated with the pervasive integration of AI.
AI regulations worldwide reflect a race against foreign influence. At the geopolitical scale, the US competes with China economically and militarily. The EU emphasises establishing its own digital sovereignty and striving for independence from the US.
On a domestic level, these regulations may be seen as favouring large incumbent tech firms over emerging challengers. This is because it is often expensive to comply with regulations, requiring resources smaller firms may lack.
Alphabet, Meta and Tesla have supported calls for AI regulation. At the same time, the Alphabet-owned Google has joined Amazon in investing billions in OpenAI’s competitor Anthropic, and Tesla boss Elon Musk’s xAI has just launched its first product, a chatbot called Grok.
Shared vision
The EU’s AI Act, China’s AI regulations, and the White House executive order show shared interests between the nations involved. Together, they set the stage for last week’s “Bletchley declaration”, in which 28 countries including the US, UK, China, Australia and several EU members pledged cooperation on AI safety.
Countries or regions see AI as a contributor to their economic development, national security, and international leadership. Despite the recognised risks, all jurisdictions are attempting to support AI development and innovation.
By 2026, worldwide spending on AI-centric systems may pass US$300 billion, by one estimate. By 2032, according to a Bloomberg report, the generative AI market alone may be worth US$1.3 trillion.
Numbers like these, and talk of perceived benefits from tech firms, national governments, and consultancy firms, tend to dominate media coverage of AI. Critical voices are often sidelined.
Competing interests
Beyond economic advantages, countries also look to AI systems for defence, cybersecurity, and military applications.
At the UK’s AI safety summit, international tensions were apparent. While China agreed with the Bletchley declaration made on the summit’s first day, it was excluded from public events on the second day.
One point of disagreement is China’s social credit system, which operates with little transparency. The EU’s AI Act regards social scoring systems of this kind as creating unacceptable risk.
The US perceives China’s investments in AI as a threat to US national and economic security, particularly in terms of cyberattacks and disinformation campaigns.
These tensions are likely to hinder global collaboration on binding AI regulations.
The limitations of current rules
Existing AI regulations also have significant limitations. For instance, there is no clear, common set of definitions of the different kinds of AI technology in current regulations across jurisdictions.
Current legal definitions of AI tend to be very broad, raising concern over how practical they are. This broad scope means regulations cover a wide range of systems which present different risks and may deserve different treatments. Many regulations lack clear definitions for risk, safety, transparency, fairness, and non-discrimination, posing challenges for ensuring precise legal compliance.
We are also seeing local jurisdictions launch their own regulations within national frameworks. These may address specific concerns and help to balance AI regulation and development.
California has introduced two bills to regulate AI in employment. Shanghai has proposed a system for grading, management and supervision of AI development at the municipal level.
However, defining AI technologies narrowly, as China has done, poses a risk that firms will find ways to work around the rules.
Moving forward
Sets of “best practices” for AI governance are emerging from local and national jurisdictions and transnational organisations, with oversight from groups such as the UN’s AI advisory board and the US’s National Institute of Standards and Technology. The existing AI governance frameworks from the UK, the US, the EU, and – to a limited extent – China are likely to be seen as guidance.
Global collaboration will be underpinned by both ethical consensus and, more importantly, national and geopolitical interests.