
Do we need a new law for AI? Yes – but first we could try to enforce the laws we already have

Regulation was once a dirty word in tech companies around the world. They argued that if people wanted better smartphones and flying cars, we’d have to look beyond the dusty old laws invented in the pre-internet era.

But something profound is afoot. First a whisper, then a roar: the law is back.

Ed Husic, the federal minister responsible for technology policy, is leading a first-of-its-kind review of Australian law, asking Australians how our law should change for the AI age. He recently told the ABC: “I think the era of self-regulation is over.”

Yes, there were caveats. Husic made it clear that AI regulation should focus on “high-risk elements” and strike “the right balance.” But the rhetorical shift was unmistakable: if we had allowed a kind of digital Wild West to emerge, it would have to end.

Tech companies demand regulation – but why?

One moment could sum up the dawn of this new era. On May 16, Sam Altman – CEO of OpenAI, the company responsible for ChatGPT – told the US Congress: “Regulation of AI is essential.”

At first glance, this looks like a surprising transformation. Less than a decade ago, Facebook’s motto was “Move fast and break things.” When its founder, Mark Zuckerberg, uttered those words, he spoke for a generation of Silicon Valley tech bros who saw the law as a handbrake on innovation.

Reform is urgently needed, which is why we must seize this moment. But first we should ask why the tech world is suddenly so interested in regulation.

One explanation is that technology leaders may recognize that without more effective regulation, the threats associated with AI could overshadow its positive potential.

We have been tragically reminded of the value of regulation recently. Consider OceanGate, the company behind the submersible that imploded earlier this year on a dive to the Titanic, killing everyone on board. OceanGate opted out of safety certification because “keeping an outside company apprised of every innovation before it is put into practice is anathema to rapid innovation.”

Perhaps there was a genuine change of heart: tech companies certainly know that their products can do both harm and good. But something else is at play. When tech companies call on governments to legislate for AI, there is an unspoken premise: there are currently no laws that apply to AI.

But that is simply wrong.

Existing laws already apply to AI

Our current laws make it clear that you must not engage in fraudulent or negligent behavior, regardless of the technology used.

Say you advise people on choosing the best health insurance, for example. Whether you base your advice on an abacus or the most sophisticated form of AI, it is equally illegal to accept secret commissions or to give negligent advice.

A key part of the problem in the AI age is not the content of our laws, but the fact that they are not consistently enforced in the development and use of AI. This means that regulators, courts, lawyers and the community sector must do their best to ensure that human rights and consumer protections for AI are effectively enforced.

This will be an enormous task. In the University of Technology Sydney Human Technology Institute’s submission to the federal government’s AI review, we call for the creation of an AI Commissioner – an independent expert adviser to government and the private sector. This body would cut through the hype and white noise and provide clear advice to regulators and companies on how to use AI within the letter and spirit of the law.

Australia needs to keep up with the world

Australia has experienced a period of extreme policy lethargy on the AI front. While the European Union, North America and several countries in Asia (including China) have put legal protections in place, Australia has been slow to respond.

In this context, the government’s review of AI regulation is crucial. We shouldn’t mindlessly copy other jurisdictions, but our law should ensure the same protections for Australians.

This means the Australian Parliament should adopt a legal framework that fits our political and legal system. If that means diverging from the EU’s draft AI law, well and good, but our law must protect Australians from the risks of AI at least as effectively as it protects people in Europe.

Personal data is the fuel of AI, so the starting point should be updating our data protection laws. The Attorney-General has published a review that would modernize those laws, but we have not yet seen a commitment to change.

Reform is especially urgent for dangerous applications of AI, such as facial recognition technology. A series of investigations by CHOICE has shown that companies are increasingly using this technology in shopping malls, sports stadiums and workplaces – without adequate protection against unfairness or mass surveillance.

There are clear reform options that would enable the safe use of facial recognition, but we need political leadership.

The government must get AI right

The government must also lead by example. The Robodebt Royal Commission showed in harrowing detail how the federal government’s automated welfare debt-collection system went horribly wrong, causing enormous and widespread harm to the community.

The lesson from this experience is not that we should throw away all computers. But it shows that we need clear, strong guardrails to ensure the government leads the way in the safe and responsible use of AI.
