Elon Musk finally responded last week to widespread outrage over his social media platform X, which allows users to create sexualized deepfakes using Grok, the platform's artificial intelligence (AI) chatbot.
Musk has now assured the British government that he will block Grok from creating deepfakes in order to comply with the law. However, the change is likely to apply only to users in the UK.
These recent complaints aren't new, however. Last year, Grok users were able to "undress" posted pictures to produce images of women in underwear, swimwear or sexually suggestive positions. X's "spicy" option let them create topless images without detailed prompting.
And such cases may be a sign of things to come if governments don't become more forceful in regulating AI.
Despite public outcry and growing scrutiny from regulators, X initially made little effort to address the problem, simply restricting access to Grok on X to paying subscribers.
Several governments have taken action. The United Kingdom announced plans to legislate against deepfake tools, joining Denmark and Australia in attempting to criminalize such sexual material. The British regulator Ofcom has opened an investigation into X, which apparently triggered Musk's about-face.
To date, the New Zealand government has remained silent on the issue, despite domestic law doing an inadequate job of preventing or criminalizing non-consensual sexualized deepfakes.
Hold platforms accountable
While the Harmful Digital Communications Act 2015 offers some paths to justice, it is far from perfect. Victims are required to show that they have suffered "serious emotional distress", shifting the focus to their reaction rather than the inherent wrong of non-consensual sexualization.
If images are entirely synthetic and not "real" (for example, created with no reference photo), legal protection becomes even more uncertain.
A members' bill criminalizing the creation, possession and distribution of sexualized deepfakes without consent is expected to be introduced later this year.
This reform is needed and welcome. But it only solves part of the problem.
Criminalization holds individuals accountable after harm has already occurred. It doesn't hold companies accountable for designing and deploying the AI tools that produce these images in the first place.
We already expect social media providers to remove child sexual abuse material, so why not deepfakes of women? While users are responsible for their actions, platforms like X provide easy access that removes the technical hurdle to creating deepfakes.
The Grok case has been in the news for many months, so the resulting harm was easy to predict. Treating such incidents as isolated abuse distracts from the platform's responsibility.
Light-touch regulation doesn't work
Social media companies (including X) have signed the voluntary Aotearoa New Zealand Code of Practice for Online Safety and Harms, but it is already outdated.
The code doesn't set standards for generative AI, require risk assessments before an AI tool is deployed, or establish meaningful consequences for failing to prevent foreseeable forms of misuse.
This means X can get away with allowing Grok to produce deepfakes while still technically complying with the code.
Victims could also hold X accountable by complaining to the Privacy Commissioner under the Privacy Act 2020.
The Commissioner's guidance on AI notes that both the use of a person's image as a prompt and the deepfake it generates can be considered personal information.
However, these investigations can take years, and compensation is often low. Responsibility is frequently divided between the user, the platform and the AI developer. This does little to make platforms or AI tools like Grok any safer.
New Zealand's approach reflects a broader political preference for loose AI regulation, one that assumes technological development will be accompanied by appropriate self-restraint and good-faith governance.
Clearly, that isn't working. Competitive pressure to release new features quickly leads to novelty and engagement being prioritized over safety, with gendered harms often treated as an acceptable by-product.
A sign of things to come
Technologies are shaped by the social conditions under which they’re developed and used. Generative AI systems trained on reams of human data inevitably absorb misogynistic norms.
Integrating these systems into platforms without robust safeguards enables sexualized deepfakes that reinforce existing patterns of gender-based violence.
These harms go beyond individual humiliation. The knowledge that a convincing sexualized image can be generated at any time, by anyone, creates a persistent threat that changes the way women engage online.
For politicians and other public figures, this threat can discourage participation in public debate altogether. The cumulative effect is a narrowing of the digital public sphere.
Criminalizing deepfakes alone will not solve the problem. New Zealand needs a regulatory framework that recognizes AI-related gender-based harms as foreseeable and systemic.
This means imposing clear obligations on companies deploying these AI tools, including duties to assess risk, implement effective guardrails and prevent foreseeable abuse before it occurs.
Grok gives an early signal of the challenges ahead. As AI is embedded into digital platforms, the gap between technological capability and legal safeguards will only widen unless those in power act.
At the same time, Elon Musk's response to regulatory action in the UK shows how effective political will and robust regulation can be.

