The Australian government has announced plans to ban “nudify” tools and hold tech platforms accountable for failing to stop users from accessing them.
This is part of the federal government’s broader strategy of moving towards a “digital duty of care” approach to online safety. This approach places the onus on tech companies to take proactive steps to identify and prevent online harms on their platforms and services.
So how will the nudify ban work in practice? And will it be effective?
How are nudify tools being used?
Nudify or “undress” tools are available on app stores and websites. They use artificial intelligence (AI) techniques to create realistic but fake sexually explicit images of people.
Users can upload a clothed, everyday photo which the tool analyses and then digitally removes the person’s clothing by placing their face onto a nude body (or what the AI “thinks” the person would look like naked).
The problem is that nudify tools are easy to use and access. The images they create can also look highly realistic and can cause significant harms, including bullying, harassment, distress, anxiety, reputational damage and self-harm.
These apps – and other AI tools used to generate image-based abuse material – are an increasing problem.
In June this year, Australia’s eSafety Commissioner revealed that reports of deepfakes and other digitally altered images of people under 18 have more than doubled in the past 18 months.
In the first half of 2024, 16 nudify websites named in a lawsuit filed by San Francisco City Attorney David Chiu were visited more than 200 million times.
In a July 2025 study, 85 nudify websites had a combined average of 18.5 million visitors over the preceding six months. Some 18 of the websites – which rely on tech services such as Google’s sign-on system, or Amazon and Cloudflare’s hosting or content delivery services – made between US$2.6 million and $18.4 million in the previous six months.
Aren’t nudify tools already illegal?
For adults, sharing (or threatening to share) non-consensual deepfake sexualised images is a criminal offence under most Australian state, federal and territory laws. But outside Victoria and New South Wales, it is not currently a criminal offence to create digitally generated intimate images of adults.
For children and adolescents under 18, the situation is slightly different. It is a criminal offence not only to share child sexual abuse material (including fictional, cartoon or fake images generated using AI), but also to create, access, possess and solicit this material.
Developing, hosting and promoting the use of these tools to create either adult or child content is not currently illegal in Australia.
Last month, independent federal MP Kate Chaney introduced a bill that would make it a criminal offence to download, access, supply or offer access to nudify apps and other tools whose dominant or sole purpose is the creation of child sexual abuse material.
The government has not taken up this bill. Instead, it wants to focus on placing the onus on technology companies.
Minister for Communications Anika Wells said the government will work closely with industry to restrict nudify tools. Mick Tsikas/AAP
How will the nudify ban actually work?
Minister for Communications, Anika Wells, said the federal government will work closely with industry to work out the best way to proactively restrict access to nudify tools.
At this point, it is unclear what the time frames are or how the ban will work in practice. It could involve the federal government “geoblocking” access to nudify sites, or directing platforms to remove access (including advertising links) to the tools.
It may also involve transparency reporting from platforms on what they are doing to address the problem, including risk assessments for illegal and harmful activity.
But government bans and industry collaboration won’t completely solve the issue.
Users can get around geographic restrictions with VPNs or proxy servers. The tools can also be used “off the radar” via file-sharing platforms, private forums or messaging apps that already host nudify chatbots.
Open-source AI models can also be fine-tuned to create new nudify tools.
What are tech companies already doing?
Some tech companies have already taken action against nudify tools.
Discord and Apple have removed nudify apps and developer accounts associated with nudify apps and websites.
Meta also bans adult content, including AI-generated nudes. However, Meta came under fire for inadvertently promoting nudify apps through advertisements – even though those ads violate the company’s standards. Meta recently filed a lawsuit against Hong Kong nudify company CrushAI, after the company ran more than 87,000 ads across Meta platforms in violation of Meta’s rules on non-consensual intimate imagery.
Tech companies can do much more to mitigate harms from nudify and other deepfake tools. For example, they can ensure guardrails are in place for deepfake generators, remove content more quickly, and ban or suspend user accounts.
They can restrict search results and block keywords such as “undress” or “nudify”, issue “nudges” or warnings to people using related search terms, and use watermarking and provenance indicators to identify the origins of images.
They can also work together to share signals of suspicious activity (for example, advertising attempts) and to share digital hashes (a unique code, like a fingerprint) of known image-based abuse or child sexual abuse content with other platforms to prevent recirculation.
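To illustrate the idea of hash sharing, here is a minimal Python sketch (not any platform’s actual system): a platform computes a digital “fingerprint” of an uploaded image and checks it against a shared list of hashes of known abuse material before the file is published. The hash list and function names are hypothetical, and real hash-sharing schemes typically use perceptual hashes that still match after resizing or re-compression, whereas the exact-match cryptographic hash below only catches identical copies of a known file.

```python
# Minimal sketch of hash-based matching against a shared list of known-abuse hashes.
# Assumptions: KNOWN_ABUSE_HASHES would be populated from a trusted hash-sharing feed;
# a plain SHA-256 only detects exact copies, unlike the perceptual hashes used in practice.
import hashlib
from pathlib import Path

# Hypothetical set of hex digests shared between platforms.
KNOWN_ABUSE_HASHES: set[str] = set()

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_abuse_image(path: Path) -> bool:
    """Check an uploaded file against the shared hash list before it is published."""
    return file_hash(path) in KNOWN_ABUSE_HASHES
```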
Education can be key
Placing the onus on tech companies and ensuring they are held accountable for reducing the harms from nudify tools is important. But it is not going to stop the problem.
Education must also be a key focus. Young people need comprehensive education on how to critically examine and discuss digital information and content, including digital data privacy, digital rights and respectful digital relationships.
Digital literacy and respectful relationships education shouldn’t be based on shame and fear-based messaging, but rather on affirmative consent. That means giving young people the skills to recognise and negotiate consent to receive, request and share intimate images, including deepfake images.
We need effective bystander interventions. This means teaching bystanders how to effectively and safely challenge harmful behaviours, and how to support victim-survivors of deepfake abuse.
We also need well-resourced online and offline support systems so victim-survivors, perpetrators, bystanders and support people can get the help they need.

