Why it happens and what can be done



Since late December 2025, Grok, X’s AI chatbot, has complied with many users’ requests to “undress” real people, turning their photos into sexually explicit imagery. In the wake of this feature’s use, the platform has faced global scrutiny for allowing users to generate non-consensual sexually explicit depictions of real people.

Grok’s account posted thousands of “nude” and sexually suggestive images per hour. Even more disturbing, Grok also generated sexualized images and sexually explicit material depicting minors.

X’s response: blame the platform’s users, not the company. In a statement issued on January 3, 2026, X said: “Anyone who uses or encourages Grok to create illegal content will face the same consequences as if they uploaded illegal content.” It is unclear what actions, if any, X has taken against users.

As a legal scholar who studies the intersection of law and emerging technologies, I consider this wave of non-consensual images to be a predictable result of the combination of X’s lax content moderation policies and the accessibility of powerful generative AI tools.

Targeting users

The rapid rise of generative AI has spawned countless websites, apps and chatbots that let users produce sexually explicit material, including nude images of real children. However, these apps and websites are not nearly as well known or widely used as major social media platforms such as X.

State legislatures and Congress reacted relatively quickly. In May 2025, Congress enacted the Take It Down Act, which criminalizes the non-consensual publication of sexually explicit material depicting real people. The law covers both “intimate visual depictions” of identifiable people and AI- or computer-generated depictions of identifiable people.

These criminal provisions apply only to the people who post sexually explicit content, not to the platforms, such as social media sites, that distribute it.

However, other provisions of the Take It Down Act require platforms to establish a process through which depicted individuals can request the removal of images. Once a takedown request is submitted, the platform must remove the sexually explicit depiction within 48 hours. Those requirements, however, do not take effect until May 19, 2026.


Problems with platforms

Meanwhile, user requests to remove sexually explicit images produced by Grok apparently went unanswered. Even Ashley St. Clair, the mother of one of Elon Musk’s children, failed to get X to remove fake sexualized images of her that Musk fans created with Grok. The Guardian reported that St. Clair said her “complaints to X staff had no effect.”

This doesn’t surprise me: Musk dissolved Twitter’s Trust and Safety Council, an advisory group, shortly after acquiring the platform, and laid off 80% of the company’s engineers dedicated to trust and safety.

Trust and safety teams are typically responsible for content moderation and abuse prevention efforts at technology companies.

Publicly, Musk appears to have downplayed the seriousness of the situation. He reportedly responded to some of the images with laugh-cry emojis, and X answered a Reuters journalist’s question with the auto-reply “Mainstream media lies.”

Limits of lawsuits

Civil lawsuits, like the one filed by the parents of Adam Raine, a teenager who died by suicide in April 2025 after interacting with OpenAI’s ChatGPT, are one way to hold platforms accountable. But such lawsuits face an uphill battle in the United States because of Section 230 of the Communications Decency Act, which generally shields social media platforms from legal liability for content that users post on them.

However, Supreme Court Justice Clarence Thomas and numerous legal scholars have argued that courts have applied Section 230 too broadly. I generally agree that Section 230 immunity should be narrowed: the deliberate design decisions tech companies make about their platforms, including how their software is built, how it works and what it produces, fall outside the scope of Section 230’s protections.

In this case, X knowingly or negligently failed to implement safeguards and controls on Grok to prevent users from generating sexually explicit images of identifiable people. Even if Musk and X believe that users should be able to generate sexually explicit images of adults with Grok, I believe the company remains responsible for its own design choices and for the non-consensual material Grok produces as a result.


Regulatory barriers

If people cannot hold platforms like X accountable through civil lawsuits, then it is up to the federal government to investigate and regulate them. The Federal Trade Commission, the Department of Justice, or Congress, for example, could investigate X for Grok’s generation of non-consensual sexually explicit material. But with Musk’s renewed political ties to President Donald Trump, I don’t foresee serious investigations or accountability anytime soon.

For now, it is international regulators who have opened investigations into X and Grok. French authorities are investigating the proliferation of sexually explicit Grok deepfakes, and the Irish Council for Civil Liberties and Digital Rights Ireland have strongly urged the Irish national police to investigate the flood of non-consensual nude images. The UK’s regulatory agency, the Office of Communications (Ofcom), said it is investigating the matter, and regulators from the European Commission, India and Malaysia are also reportedly looking into X.

In the United States, until the Take It Down Act’s takedown requirements take effect in May, perhaps the best option is for people to demand action from their elected officials.

*Wayne Unger is an associate professor of law at Quinnipiac University.

This article was originally published on The Conversation.


