Most common applications of artificial intelligence (AI) reap the benefits of its ability to process large amounts of information and discover patterns and trends in it. The results can also help predict the long-term behaviour of financial markets and road traffic, and even help doctors diagnose illnesses before symptoms appear.
But AI can also be used to compromise the privacy of our online data, automate jobs, and undermine democratic elections by flooding social media with disinformation. Algorithms can inherit biases from the real-world data used to train them, which may lead to discrimination in hiring, for instance.
AI regulation is a comprehensive set of rules prescribing how this technology should be developed and used to address its potential harms. Here are some of the key efforts to achieve this, and how they differ.
The EU AI Act and the Bletchley Declaration
The European Commission's AI Act aims to mitigate potential threats while promoting entrepreneurship and innovation in AI. The UK's AI Safety Institute, announced at the recent government summit at Bletchley Park, also aims to strike this balance.
The EU Act bans AI tools deemed to pose unacceptable risks. This category includes products for social scoring, which classify people based on their behaviour, and real-time facial recognition.
The Act also severely restricts high-risk AI, the next category down. This label covers applications that could negatively affect fundamental rights, including security.
Examples include autonomous driving and AI recommendation systems used in hiring, law enforcement, and education. Many of these tools must be registered in an EU database. The limited-risk category includes chatbots such as ChatGPT and image generators such as Dall-E.
In general, AI developers must ensure the confidentiality of any personal data used to “train” – or improve – their algorithms, and be transparent about how their technology works. However, one of the Act's biggest drawbacks is that it was developed primarily by technocrats, without extensive public participation.
Unlike the AI Act, the recent Bletchley Declaration is not a regulatory framework per se, but a call to develop one through international cooperation. The 2023 AI Safety Summit that produced the declaration was hailed as a diplomatic breakthrough because it brought together the world's political, business and scientific communities to agree on a common plan that echoes the EU Act.
The USA and China
Companies from North America (particularly the US) and China dominate the commercial AI landscape. Most of their European headquarters are in the United Kingdom.
The US and China are vying for a foothold in the regulatory space. US President Joe Biden recently issued an executive order requiring AI developers to provide the federal government with an assessment of their applications' vulnerability to cyberattacks, the data used to train and test the AI, and their performance measurements.
The US executive order creates incentives to promote innovation and competition by attracting international talent. It mandates educational programs to develop AI skills across the US workforce. In addition, it makes government funding available for partnerships between government and private companies.
Risks such as discrimination arising from the use of AI in hiring, mortgage applications and court rulings will be addressed by requiring the heads of US executive departments to publish guidance. This will determine how federal authorities should oversee the use of AI in these areas.
Chinese AI regulations show significant interest in generative AI and in protections against deepfakes (synthetically produced images and videos that mimic the appearance and voice of real people but depict events that never happened).
There is also a strong focus on regulating AI recommendation systems. These are algorithms that analyse people's online activity to determine which content, including advertising, to place at the top of their feeds.
To protect the public from recommendations deemed unreliable or emotionally harmful, Chinese regulations ban fake news and prevent companies from applying dynamic pricing (setting higher premiums for essential services by exploiting personal data). They also stipulate that all automated decision-making must be transparent to those it affects.
The way forward
Regulatory efforts are shaped by national contexts, such as US concerns about cyber defence, China's strong hand in the private sector, and EU and UK attempts to balance support for innovation with risk reduction. Global frameworks face similar challenges in their attempts to promote ethical, safe and trustworthy AI.
Some definitions of key terms are vague and reflect the input of a small group of influential stakeholders. The general public is underrepresented.
Policymakers must be wary of the significant political capital of technology companies. It is important to include them in regulatory discussions, but it would be naive to trust these powerful lobbyists to police themselves.
AI is permeating the economic fabric, informing financial investments, supporting national health and social services, and influencing our entertainment preferences. So whoever sets the prevailing regulatory framework also has the opportunity to shift the global balance of power.
Important topics remain unaddressed. In the case of job automation, for instance, conventional wisdom assumes that digital apprenticeships and other forms of reskilling will transform the workforce into data scientists and AI programmers. But many highly skilled people may not be interested in software development.
As the world grapples with the risks and opportunities of AI, there are positive steps we can take to ensure this technology is developed and used responsibly. To support innovation, newly developed AI systems could start in the high-risk category – as defined in the EU AI Act – and be downgraded to lower-risk categories as their impact is assessed.
Policymakers could also learn from highly regulated industries such as pharmaceuticals and nuclear power. They are not directly comparable to AI, but many of the quality standards and operating procedures that govern these safety-critical areas of the economy could offer useful insights.
Ultimately, the cooperation of everyone affected by AI is essential. The design of the rules should not be left to technocrats alone. The public needs a say in a technology that can have a profound impact on their personal and professional lives.