
Australia's national plan says existing laws are sufficient to control AI. This is a false hope

Earlier this month, Australia's long-awaited national AI plan was released to a mixed response.

The plan departs from the federal government’s previously promised mandatory AI protections. Instead, it’s positioned as a whole-of-government roadmap for building an “AI-powered economy.”

The plan has set off alarm bells among experts because it lacks specificity, measurable goals and clarity.

Cases of AI harm are increasing worldwide. From serious cybercrime breaches using deepfakes to disinformation campaigns driven by generative AI, the lack of accountability is appalling. In Australia, AI-generated child sexual abuse material is spreading rapidly, and existing laws fail to protect victims.

Without specific AI regulation in Australia, the most vulnerable people risk being harmed. But there are also examples elsewhere in the world from which we can learn.

There are no specific AI laws in Australia

The new plan doesn’t provide for a dedicated AI law. There are also no specific recommendations for reforms to existing laws. Instead, an AI Safety Institute and other measures, including voluntary codes of conduct, will be established.

According to Andrew Charlton, Assistant Minister for Science, Technology and the Digital Economy, “The Institute […] will work directly with regulators to make sure we’re able to harness the benefits of AI safely and with confidence.” However, the institute is only granted monitoring and advisory powers.

Australia also has a long history of legal failures caused by algorithms, such as the Robodebt scandal. Current legal protections are not sufficient to address existing and potential harm from AI. As a result, the new AI plan risks increasing inequities.

Legal whack-a-mole

Holding technology companies legally liable is no easy task.

Big Tech is constantly searching for loopholes in existing legal systems. Tech giants Google and OpenAI claim that “fair use” provisions in US copyright law legalize data scraping.

Social media companies Meta and TikTok exploit existing laws – such as the broad immunity under the US Communications Decency Act – to avoid liability for harmful content.

Many also use special purpose entities (essentially shell companies) to bypass antitrust laws that target anti-competitive behavior.

Under the new national plan, Australia’s “technology-neutral” approach assumes that existing laws and regulations are sufficient to address potential harms from AI.

Under this line of thinking, concerns such as data breaches, consumer fraud, discrimination, copyright and workplace safety can be addressed with a light touch – regulating only where necessary. And the AI Safety Institute would “monitor and advise.”

Existing laws identified as sufficient include the Privacy Act, Australian Consumer Law, current anti-discrimination, copyright and intellectual property laws, and industry-specific laws and standards, for example in the medical field.

This might look like extensive legal oversight. However, legal gaps remain, including those related to generative AI, deepfakes and synthetic data used for AI training.

There are also more fundamental concerns about systemic algorithmic bias, autonomous decision-making and environmental risks. The lack of transparency and accountability is also significant.

Big Tech often uses legal uncertainty, lobbying and technical complexity to delay compliance and evade responsibility. Companies adapt while the legal system tries to catch up – like a game of whack-a-mole.

A call to action for Australia

Just like the moles in the game, big tech companies often engage in “regulatory arbitrage” to get around the law. This means shifting operations to jurisdictions with less stringent laws. Under the current plan, that is now Australia.

The solution? Global consistency and harmonization of relevant laws, to reduce the number of places that large technology companies can exploit.

Two frameworks in particular offer lessons. Harmonizing Australia’s National AI Plan with the EU AI Act and Aotearoa New Zealand’s Māori AI Governance Framework would improve the protection of all Australians.

The EU AI Act was the world’s first AI-specific legislation. It sets clear rules about what is and isn’t allowed. AI systems are assigned legal obligations and responsibilities based on the level of potential societal risk they pose.

The act provides for various enforcement mechanisms. These include specific fines for non-compliance, as well as governance and monitoring authorities at the EU and national levels.

Meanwhile, the Māori AI Governance Framework describes the principles of Indigenous data sovereignty. It highlights the importance of Māori data sovereignty in the face of inadequate AI regulation.

The framework includes four pillars that provide comprehensive measures to support Māori data sovereignty, national health and community safety.

The EU AI Act and the Māori framework both articulate clear values and translate them into specific protections: the former through enforceable, risk-based rules, the latter through culturally grounded principles.

Meanwhile, Australia’s AI plan claims to reflect “Australian values” but provides neither the regulatory underpinnings nor the cultural specifics to sustain them. As legal experts have argued, Australia needs AI accountability structures that don’t depend on individuals successfully suing well-resourced companies under outdated laws.

The choice is clear. We can either pursue an “AI-powered economy” at any cost, or build a society where community safety, not money, comes first.
