
80% of Australians believe AI risk is a global priority. The government must step up

A new nationally representative survey has found Australians are deeply concerned about the risks posed by artificial intelligence (AI). They want the government to take stronger action to ensure its safe development and use.

We conducted the survey in early 2024 and found that 80% of Australians believe preventing catastrophic risks from advanced AI systems should be a global priority on par with pandemics and nuclear war.

As AI systems become more powerful, decisions about how we develop, deploy and use AI become critical. The promise of powerful technology may tempt companies – and countries – to race ahead without heeding the risks.

Our findings also reveal a gap between the AI risks that media and government tend to focus on and the risks Australians consider most important.



Public concern about AI risks is growing

The development and use of increasingly powerful AI continues apace. Recent releases such as Google's Gemini and Anthropic's Claude 3 show seemingly near-human capabilities in professional, medical and legal domains.

However, the hype has been tempered by rising public and expert concern. Last year, more than 500 people and organizations made submissions to the Australian government's discussion paper on safe and responsible AI.

They described AI-related risks such as biased decision-making, loss of trust in democratic institutions due to misinformation, and increasing inequality from AI-driven unemployment.

Some even fear extremely powerful AI could lead to global catastrophe or human extinction. Although this idea is hotly contested, in three large surveys most AI researchers believed there is at least a 5% chance of superhuman AI being "extremely bad (e.g. human extinction)".

The potential benefits of AI are significant. AI is already driving breakthroughs in biology and medicine, and is being used to control fusion reactors, which could one day provide carbon-free energy. Generative AI improves productivity, particularly for learners and students.

However, the speed of progress is raising alarm. People fear we are unprepared to deal with powerful AI systems that could be misused or behave in unintended and harmful ways.

In response to such concerns, governments around the world are attempting to regulate. The European Union has agreed on a draft AI Act, the United Kingdom has established an AI Safety Institute, and US President Joe Biden recently signed an executive order to promote the safer development and governance of advanced AI.



Australians want action to prevent dangerous outcomes from AI

To understand how Australians think about AI risks and how they should be addressed, we surveyed a nationally representative sample of 1,141 Australians in January and February 2024.

We found that preventing "dangerous and catastrophic consequences of AI" is Australians' top priority for government action.

Australians are most concerned about AI systems that are unsafe, untrustworthy and not aligned with human values.

Other major concerns include the use of AI in cyberattacks and autonomous weapons, AI-driven unemployment, and AI failures causing damage to critical infrastructure.

Strong public support for a new AI regulator

Australians expect the government to take decisive action on their behalf. An overwhelming majority (86%) want a new government body dedicated to AI regulation and governance, similar to the Therapeutic Goods Administration for medicines.

Nine in ten Australians also believe the country should play a leading role in international efforts to regulate AI development.

Perhaps most strikingly, two-thirds of Australians would support a six-month pause in AI development to allow regulators to catch up.



Government plans should reflect public expectations

In January 2024, the Australian government published an interim plan for addressing AI risks. This includes strengthening existing laws on privacy, online safety and misinformation. It also acknowledges that our current regulatory frameworks are not sufficient.

The interim plan proposes developing voluntary AI safety standards, voluntary labelling of AI-generated materials, and establishing an advisory body.

Our survey shows Australians support a more safety-focused, regulation-first approach. This contrasts with the targeted and voluntary approach of the interim plan.

Striking a balance between encouraging innovation and preventing accidents or misuse is a challenge. But Australians would prefer the government to prioritize preventing dangerous and catastrophic outcomes over "delivering the benefits of AI to everyone".

Ways to do this include:

  • Establishing an AI safety lab with the technical capability to test and/or monitor the most advanced AI systems

  • Establishing a dedicated AI regulator

  • Defining robust standards and guidelines for responsible AI development

  • Requiring independent auditing of high-risk AI systems

  • Ensuring corporate liability and redress for AI harms

  • Increasing public investment in AI safety research

  • Actively engaging the public in shaping the future of AI governance.

Working out how to govern AI effectively is one of the great challenges facing humanity. Australians are keenly aware of the risks of failing, and want our government to address this challenge without delay.
