
Australia plans to regulate “high risk” AI. Here's the right way to do that successfully

This week, Federal Minister for Industry and Science Ed Husic announced the Australian Government's response to the consultation on safe and responsible AI in Australia.

The response draws on feedback from last year's consultation on artificial intelligence (AI). More than 500 submissions were received, expressing “excitement about the possibilities” of AI tools, but also raising concerns about potential risks and Australians’ expectations of “regulatory safeguards to prevent harm”.

Rather than enacting a single AI regulatory law, as the European Union has done, the Australian government plans to focus on high-risk areas of AI implementation – those with the greatest potential for harm. This could include examples such as discrimination in the workplace, the justice system, surveillance or self-driving cars.

The government also plans to establish a temporary expert advisory group to support the development of these guidelines.

How will we define “high-risk AI”?

While this proportionate response may be welcomed by some, the focus on high-risk areas with only a temporary advisory body raises significant questions:

  • How are high-risk areas defined – and who makes this decision?

  • Should low-risk AI applications be subject to similar regulation if some measures (e.g. requiring watermarks for AI-generated content) could largely combat misinformation?

  • Without a standing advisory board, how can organizations anticipate risks from new AI technologies and new applications of AI tools in the future?

Assessing the “risk” of using new technologies is not new. We have many existing policies, guidelines and regulations that can be adapted to address concerns about AI tools.

For example, many Australian sectors are already heavily regulated to address safety concerns, such as vehicles and medical devices.

In all research involving human subjects, Australian researchers must adhere to national guidelines which clearly define risk assessment practices:

  • identifying the risks and the people who could be at risk of harm;

  • assessing the likelihood, severity and extent of the risk;

  • considering strategies to minimize, mitigate and/or manage risks;

  • identifying potential benefits and who might benefit; and

  • weighing up the risks and determining whether they are justified by the potential benefits.

This risk assessment is completed before research is conducted, and is subject to comprehensive review and oversight by human research ethics committees. The same approach could be used for AI risk assessment.

AI has already arrived in our lives

A key problem with AI regulation is that many tools are already in use in Australian homes and workplaces, but without regulatory guardrails to manage risk.

A recent YouGov report found that 90% of Australian workers use AI tools for their daily tasks, despite serious limitations and shortcomings. AI tools can “hallucinate” and present fake information to users. The lack of transparency about training data raises concerns about bias and copyright infringement.

Consumers and organizations need guidance on appropriately adopting AI tools to manage risk, but many uses fall outside of “high-risk” areas.

Defining “high risk” settings is difficult. Risk lies on a spectrum and is not absolute. Risk is not determined by a tool itself or the environment in which it is used. Risks arise from contextual factors that create the potential for harm.

For example, while knitting needles pose little risk in everyday life, knitters are warned against carrying metal needles on airplanes. Airport security considers these to be “dangerous” tools and restricts them in this environment to prevent harm.

To identify “high risk” settings, we need to understand how AI tools work. Knowing that AI tools can lead to gender discrimination in hiring practices means that all companies must manage recruitment risks. Failure to understand the limitations of AI, like the American lawyer who relied on fake case law generated by ChatGPT, highlights the risk of human error when using AI tools.

The risks posed by people and organizations using AI tools must be managed, as well as the risks posed by the technology itself.

Who advises the federal government?

The government states in its response that the expert panel on AI risks will need “diverse members and expertise from industry, academia, civil society and the legal profession”.

Within industry, membership should span diverse sectors (e.g., healthcare, banking, law enforcement), with representation from both large organizations and small to medium-sized businesses.

Members from academia should include not only AI computing experts but also social scientists with expertise in consumer and organizational behavior. They can advise on risk assessment, ethics and people's concerns about adopting new technologies, including misinformation, trust and privacy concerns.

The government also needs to decide how to deal with potential future AI risks. A standing advisory board could manage risks from future technologies and from new uses of existing tools.

Such a body could also advise consumers and workplaces on lower-risk AI applications, particularly where there is limited or no regulation.

Misinformation is a key area where the limitations of AI tools are known, and one that requires people to have strong critical thinking and information literacy skills. For example, requiring transparency about the use of AI-generated images can ensure that consumers are not misled.

However, the federal government's current focus on transparency is limited to “high-risk” settings. This is a start, but more advice – and more regulation – will be needed.
