Since the end of December 2025, X's artificial intelligence chatbot Grok has been responding to users' requests to "undress" real people by turning photos of individuals into sexually explicit material. After people began using the feature, the company faced a global backlash for enabling users to create non-consensual, sexually explicit depictions of real people.
The Grok account posted hundreds of "nudified" and sexually suggestive images per hour. Even more disturbing, Grok created sexually explicit material depicting minors.
X's answer: blame the users, not the platform. The company issued a press release on January 3, 2026, saying: "Anyone who uses Grok or encourages the creation of illegal content will face the same consequences as if they had uploaded illegal content." It is not clear what actions, if any, X took against users.
As a legal scholar who studies the intersection of law and new technologies, I see this flood of non-consensual images as a predictable result of the combination of X's lax content moderation policies and the accessibility of powerful generative AI tools.
Targeting users
The rapid rise of generative AI has led to countless websites, apps and chatbots that allow users to produce sexually explicit material, including "nudified" images of real children. But these apps and websites are not as well known or as widely used as major social media platforms like X.
State legislatures and Congress responded fairly quickly. In May 2025, Congress enacted the Take It Down Act, which makes publishing non-consensual, sexually explicit material depicting real people a crime. The act criminalizes both the non-consensual publication of "intimate visual depictions" of identifiable individuals and AI- or otherwise computer-generated depictions of identifiable individuals.
These penalties apply only to the people who post sexually explicit content, not to the platforms that distribute it, such as social media sites.
However, other provisions of the Take It Down Act require platforms to establish a process for the people depicted to request removal of the images. Once a takedown request is submitted, a platform must remove the sexually explicit image within 48 hours. But these requirements do not take effect until May 19, 2026.
Platform issues
Meanwhile, user requests to remove the sexually explicit images produced by Grok appear to have gone unanswered. Even Ashley St. Clair, the mother of one of Elon Musk's children, could not get X to remove the fake sexualized images of her that Musk's fans produced with Grok. The Guardian reported that St. Clair's complaints to X employees went nowhere.
That doesn't surprise me, because Musk gutted then-Twitter's Trust and Safety Council shortly after he acquired the platform and laid off 80% of the company's engineers devoted to trust and safety. Trust and safety teams are typically responsible for content moderation and abuse prevention at tech companies.
Publicly, Musk appears to have dismissed the seriousness of the situation. He reportedly posted laughing emojis in response to some of the images, and X answered a Reuters reporter's inquiry with the automated reply "Legacy Media Lies."
Limits of lawsuits
Civil lawsuits, like the one filed by the parents of Adam Raine, a teenager who died by suicide in April 2025 after interacting with OpenAI's ChatGPT, are one way to hold platforms accountable. But lawsuits in the United States face an uphill battle given Section 230 of the Communications Decency Act, which generally shields social media platforms from legal liability for the content that users post on them.
However, Supreme Court Justice Clarence Thomas and many legal scholars have argued that courts have interpreted Section 230 too broadly. I generally agree that Section 230 immunity should be limited, because technology companies' conscious design decisions (how their software is created, how it functions and what it produces) do not fall within the scope of Section 230.
In this case, X either knowingly or negligently failed to implement safeguards and controls in Grok to prevent users from creating sexually explicit images of identifiable individuals. While Musk and X believe that users should be able to create sexually explicit images of adults with Grok, I believe that doing so without the depicted person's consent is indefensible.
Regulatory avenues
If lawsuits fail, government agencies such as the Federal Trade Commission or the Department of Justice, or Congress, could investigate X over Grok's generation of non-consensual, sexually explicit material. But with Musk's renewed political ties to President Donald Trump, I don't expect any serious investigation or accountability in the near future.
Meanwhile, international regulators have opened investigations into X and Grok. French authorities are investigating the spread of sexually explicit deepfakes by Grok, and the Irish Council for Civil Liberties and Digital Rights Ireland have called on Irish police to investigate the mass "undressing" spree. Britain's communications regulator, the Office of Communications, has announced that it will investigate the matter, and regulators at the European Commission and in India and Malaysia are reportedly investigating X as well.
In the United States, perhaps the best course of action until the Take It Down Act's takedown requirements take effect in May is for people to demand action from their elected officials.

