The latest wave of artificial intelligence brings with it both promises and threats.
By empowering workers, it could boost productivity and lift real wages. By harnessing large, untapped data sets, it could improve outcomes in service sectors such as retail, health and education.
The risks include deepfakes, data breaches, unappealable algorithmic decisions, intellectual property violations and large-scale job losses.
Both the risks and the potential benefits seem to be growing by the day. On Thursday, OpenAI released new models that it said could reason, perform complex calculations and draw conclusions.
But as a specialist in competition and consumer protection, I believe calls for new AI-specific regulations are largely misguided.
Most applications of AI are already regulated
A Senate committee is due to report on the opportunities and impacts of the uptake of AI, and I have been involved in developing the Productivity Commission's submission.
Separately, the federal government is consulting on mandatory guardrails for AI in high-risk settings, which would act as a kind of checklist for what developers should do, alongside a voluntary safety standard.
My thinking is this: most potential uses of AI are already covered by existing rules and regulations designed to do things such as protect consumers, safeguard data privacy and outlaw discrimination.
These laws are far from perfect. But where they fall short, the best approach is to amend or extend them, rather than introducing extra rules specific to AI.
AI can certainly pose challenges to our existing laws. For example, it may make it easier to mislead consumers, or to deploy algorithms that help firms fix prices.
The key point, however, is that laws already exist to regulate these things, and regulators have experience enforcing them.
The best approach is to enforce existing rules
One of Australia's great advantages is the strength and expertise of its regulators, including the Competition and Consumer Commission, the Communications and Media Authority, the Australian Information Commissioner, the Australian Securities and Investments Commission and the Australian Energy Regulator.
Their task should be to show which areas of AI are covered by existing rules, to assess the ways AI might breach those rules, and to run test cases that demonstrate the rules' applicability.
This approach will help build trust in AI, as consumers see they are already protected, while also providing clarity for businesses.
AI may be new, but the established consensus about which behaviours are acceptable and which are not has not changed much.
Some rules must be adjusted
In some situations, existing regulations will need to be amended or extended to ensure behaviours enabled by AI are covered. Approval processes for vehicles, machinery and medical devices are among those that will increasingly need to take account of AI.
And in some cases, new regulations will be necessary. But that should be the end point, not the starting point. Trying to regulate AI because it is AI will, at best, be ineffective. At worst, it will stifle the development of socially desirable AI applications.
Many applications of AI carry little or no risk. Where there is potential harm, it needs to be weighed against the potential benefits of the application. Risks and benefits should be assessed against realistic, human-based alternatives, which are themselves far from risk-free.
New regulations will only be necessary where existing regulations, even once clarified, amended or extended, prove insufficient.
Where they are needed, they should be as technology-neutral as possible. Rules written for specific technologies are likely to become outdated quickly.
Last-mover advantage
Finally, there is much to be said for adopting international regulations. Other jurisdictions, such as the European Union, are leading the way in developing AI-specific rules.
Product developers around the world, including those in Australia, will need to comply with those new rules if they want access to the EU and other major markets.
If Australia developed its own idiosyncratic AI-specific rules, developers might ignore our relatively small market and look elsewhere.
This means that in the limited situations where AI-specific regulation is required, the rules already in place internationally should serve as a starting point.
There are benefits to moving last. That does not mean Australia should not be at the forefront of developing international standards. It simply means we should develop those standards in international forums alongside other countries, rather than going it alone.
The landscape is still evolving. Our goal should be to give ourselves the best chance of maximising the benefits of AI, while providing safety nets to protect us from harm. Our existing rules, not new AI-specific rules, should be our starting point.