Governments worldwide are grappling with how best to regulate the increasingly unruly beast of artificial intelligence (AI).
This fast-growing technology promises to boost economies and make everyday tasks easier to complete, but it also poses serious risks, such as AI-enabled crime and fraud, an increased spread of misinformation and disinformation, increased public surveillance, and further discrimination against already disadvantaged groups.
The European Union has taken a leading role in the world in tackling these risks. In recent weeks, its Artificial Intelligence Act came into force.
This is the first law in the world aimed at comprehensively managing AI risks, and there is much to be learned from it for Australia and other countries as they too seek to ensure that AI is safe and beneficial for all.
AI: a double-edged sword
AI is already widespread in human society. It forms the basis of the algorithms that recommend music, movies and TV shows on applications such as Spotify or Netflix. It is in cameras that identify people in airports and shopping malls. And it is increasingly used in recruitment, education and healthcare.
But AI is also being used for more disturbing purposes. It can create deepfake images and videos, facilitate online fraud, enable mass surveillance, and violate our privacy and human rights.
For example, in November 2021 Australian Privacy Commissioner Angelene Falk ruled that a facial recognition tool, Clearview AI, violated data protection laws by copying photos of individuals from social media sites for training purposes. However, a Crikey investigation earlier this year revealed that the company was still collecting photos of Australians for its AI database.
Cases like this underscore the urgent need for better regulation of AI technologies. Indeed, AI developers themselves are calling for laws to mitigate AI risks.
The EU's law on artificial intelligence
The European Union's new AI law came into force on 1 August.
Crucially, the requirements for different AI systems are set according to the level of risk they pose. The more risk an AI system poses to people's health, safety or human rights, the more stringent the requirements it must meet.
The law includes a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted, real-time facial recognition systems used by law enforcement agencies, similar to those currently used in China.
Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although they are not banned, they must meet many requirements.
For example, these systems must have their own risk management plan, be trained on high-quality data, meet accuracy, robustness and cybersecurity requirements, and provide a certain level of human oversight.
Lower-risk AI systems, such as various chatbots, only need to meet certain transparency requirements. For example, users must be informed that they are interacting with an AI bot and not a real person. AI-generated images and text must also include a statement that they were generated by AI and not by a human.
Designated EU and national authorities will monitor whether AI systems deployed on the EU market meet these requirements, and will impose fines for non-compliance.
Other countries follow suit
The EU is not the only one taking measures to curb the AI revolution.
At the start of this year, the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law.
Canada is also debating the AI and Data Act. Similar to the EU law, it sets rules for different AI systems depending on their risks.
Instead of a single law, the U.S. government recently proposed a series of different laws addressing different AI systems in different sectors.
Australia can learn – and lead
There is great concern about AI in Australia, and steps are being taken to impose the necessary safeguards on the new technology.
Last year, the Federal Government conducted a public consultation on Safe and responsible AI in Australia. It subsequently established an AI expert group, which is currently working on a first legislative proposal on AI.
The government is also planning legislative reforms to address the challenges of AI in healthcare, consumer protection and the creative industries.
The risk-based approach to AI regulation used by the EU and other countries is a good starting point when thinking about how to regulate different AI technologies.
However, a single AI law will never be able to address the complexity of the technology in specific industries. For example, the use of AI in healthcare will raise complex ethical and legal questions that will need to be addressed in dedicated healthcare laws. A general AI law will not be enough.
Regulating the many applications of AI across different sectors is no easy task, and there is still a long way to go before comprehensive and enforceable laws are in place in all jurisdictions. Policymakers must join forces with industry and communities across Australia to ensure AI delivers the promised benefits to Australian society – without the associated harms.