
The government says more people need to use AI. Here's why that's wrong

This week, the Australian government released voluntary safety standards for artificial intelligence (AI), alongside a proposals paper calling for stronger regulation of the use of this fast-growing technology in high-risk situations.

The key message from Federal Minister for Industry and Science Ed Husic was:

We need more people to use AI, and to do that we need to build trust.

But why exactly do people need to trust this technology? And why exactly do more people need to use it?

AI systems are trained on incredibly large data sets using advanced mathematics that most people don't understand, and they produce results we have no way of verifying. Even state-of-the-art flagship systems produce output riddled with errors.

ChatGPT appears to be getting less accurate over time. Even at its best, it can't tell you which letters are in the word "strawberry". Google's chatbot Gemini recommends putting glue on pizza, among other comical failures.

Against this backdrop, public mistrust of AI seems entirely justified. The arguments for using it more widely look rather weak, and potentially dangerous.

Federal Minister for Industry and Science Ed Husic wants more people to use AI.
Mick Tsikas/AAP

AI risks

Much has been said about the "existential threat" of AI and how it will lead to job losses. The harms AI brings range from the obvious – such as autonomous vehicles that hit pedestrians – to the more subtle, such as AI recruitment systems that are biased against women or AI legal tools that are biased against people of colour.

Other harms include fraud through deepfakes of co-workers and of loved ones.

Never mind that the latest reporting by the federal government showed humans are more effective, efficient and productive than AI.

But if all you have is a hammer, everything looks like a nail.

The adoption of technology still falls prey to this well-known cliché. AI is not always the best tool for the job. But when presented with an exciting new technology, we often use it without considering whether we should.

Instead of encouraging more people to use AI, we should all learn what is, and what is not, a good use of AI.

Should we trust the technology – or the government?

What does the Australian government gain from more people using AI?

One of the biggest risks is the sharing of private data. These tools collect our private information, our intellectual property and our thoughts on a scale never seen before.

Much of this data – in the case of ChatGPT, Google Gemini, Otter.ai and other AI models – is not processed onshore in Australia.

These companies preach transparency, privacy and security. But it is often difficult to find out whether your data is used to train their newer models, how it is secured, or which other organisations or governments have access to it.

Recently, Federal Minister for Government Services Bill Shorten unveiled the government's proposed Trust Exchange program, which raised concerns about even more data being collected on Australian citizens. In his speech to the National Press Club, Shorten openly mentioned the support of big technology companies, including Google.

If data about Australians were pooled across different technology platforms, including AI, it could pave the way for widespread mass surveillance.

Even more worrying, however, is that we are witnessing technology's power to influence politics and behaviour.

Automation bias is the term for users' tendency to believe technology is "smarter" than they are. Over-reliance on AI poses even greater risks for Australians: by encouraging the use of the technology without adequate education, we could expose our population to a comprehensive system of automated surveillance and control.

And while it might be possible to escape such a system, it would undermine social trust and cohesion and influence people without them realising it.

These factors are further reason to regulate the use of AI, as the Australian government now proposes to do. But that regulation need not be accompanied by vigorous encouragement to use it.

Let's curb the blind hype

The issue of AI regulation is important.

The International Organization for Standardization has published a standard for the management of AI systems. Its implementation in Australia would lead to better, more considered and better-regulated use of AI.

This and other standards underpin the government's proposed Voluntary AI Safety Standard.

The problem with this week's government announcement was not the call for stronger regulation, but the blind hype surrounding the use of AI.

Let's focus on protecting Australians – not on mandating their use of, and trust in, AI.
