
Most Australian government agencies are not transparent about how they use AI

A year ago, the Commonwealth government introduced a policy requiring most federal agencies to publish “AI transparency statements” on their websites by February 2025. These statements are supposed to explain how agencies use artificial intelligence (AI), in which areas and with what safeguards.

The stated goal was to build public trust in government use of AI – without resorting to legislation. Six months after the deadline, early results from our research (to be published in full later this year) suggest the policy is not working.

We looked at 224 agencies and found that only 29 had easily identifiable AI transparency statements. A deeper search found 101 links to statements.

This gives a compliance rate of around 45% (101 of 224 agencies), although for some agencies (e.g. defence, intelligence and corporate agencies) publishing a statement is recommended rather than required, and it is possible that some agencies share the same statement. Nevertheless, these initial results raise serious questions about the effectiveness of Australia’s “soft-touch” approach to AI governance in the public sector.

Why AI transparency matters

Public trust in AI in Australia is already low. The Commonwealth's reluctance to impose rules and safeguards on the use of automated decision-making in the public sector – identified as a deficiency by the Robodebt Royal Commission – makes transparency all the more important.

The public expects the federal government to be a role model for the responsible use of AI. But it is precisely the policy meant to ensure transparency that many agencies appear to be ignoring.

As the federal government is also reluctant to adopt economy-wide AI rules, best practice in government could encourage action from a disoriented private sector. A recent study found that 78% of companies are “aware” of responsible AI practices, but only 29% have actually “implemented” them.

Transparency statements

The requirement for a transparency statement is the key binding commitment under the Digital Transformation Agency's policy for the responsible use of AI in government.

Agencies must also appoint an “accountable (AI) officer” to be responsible for AI use. The transparency statements must be clear, consistent and easy to find – ideally linked from the agency's homepage.

In our investigation, conducted in collaboration with the Office of the Australian Information Commissioner, we attempted to identify these statements using a mixture of automated website crawling, targeted Google searches and manual review of the Information Commissioner's endorsed list of federal agencies. This included both agencies and departments that were strictly bound by the policy and those that were asked to comply voluntarily.
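For illustration, here is a minimal sketch of the kind of automated homepage check described above, written in Python. The agency URL and keyword patterns are hypothetical placeholders for this example, not the actual dataset or crawler used in our study.

```python
# Minimal sketch: check whether agency homepages reference an AI transparency statement.
# The URL list and keyword patterns below are illustrative assumptions only.
import re
import urllib.request

AGENCY_HOMEPAGES = [
    "https://www.example.gov.au",  # hypothetical agency homepage
]

# Phrases that would typically signal a link to a transparency statement.
KEYWORDS = re.compile(
    r"AI transparency|artificial intelligence.{0,40}transparency",
    re.IGNORECASE,
)

def has_transparency_link(url: str) -> bool:
    """Fetch a homepage and report whether it mentions an AI transparency statement."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
    except OSError:
        return False  # unreachable pages count as "not easily identifiable"
    return bool(KEYWORDS.search(html))

if __name__ == "__main__":
    found = [u for u in AGENCY_HOMEPAGES if has_transparency_link(u)]
    print(f"{len(found)}/{len(AGENCY_HOMEPAGES)} homepages reference a statement")
```

A real pipeline would also need to follow internal links and handle JavaScript-heavy pages, which is partly why targeted searches and manual review remained necessary.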

However, we found that few statements were accessible via agency landing pages. Many were buried deep in subdomains or required complex manual searching. Among agencies for which publication of a statement was recommended rather than required, we struggled to find any.

Even more worrying, we could not find statements for many agencies even where one was required. This may be a technical failure, but given the effort we made, it points to a policy failure.

A toothless requirement

The requirement for a transparency statement is binding in theory, but ineffective in practice. There are no penalties for agencies that do not comply. There is also no open central register to track who has or has not published a statement.

The result is a fragmented, inconsistent landscape that undermines the very trust the policy is supposed to build. And the public has no way to understand – or question – how AI is used in decisions that affect their lives.

How other countries do it

In the UK, the government has introduced a mandatory AI register. But as the Guardian reported in late 2024, many government departments failed to list their AI use, even though they were legally required to do so.

The situation appears to have improved slightly this year, but many high-risk AI systems identified by civil society groups in the UK are still not published on the UK government's own register.

The United States has taken a more decisive stance. Despite anti-regulation rhetoric from the White House, the administration has so far stuck to its binding commitments on AI transparency and risk reduction.

Federal agencies are required to evaluate and publicly register their AI systems. If they do not, they must stop using those systems, according to the rules.

Towards responsible use of AI

In the next phase of our research, we will analyze the content of the transparency statements we found.

Do they make sense? Do they disclose risks, safeguards and governance structures? Or are they vague and superficial? Initial signs point to major differences in quality.

If governments are serious about responsible AI, they must enforce their own policies. If determined university researchers can't easily find the statements – even when they are somewhere deep on a website – that can hardly be called transparency.
