A woman has said she felt “dehumanised” and reduced to a sexual stereotype after images of her were digitally altered using Grok, the artificial intelligence chatbot developed by xAI and integrated into X.
Speaking to the BBC, freelance journalist and commentator Samantha Smith described the experience as deeply violating after users on X prompted Grok to modify her image, removing clothing and placing her in sexualised contexts without her consent.
Although the altered images were not based on any real photo of her undressed, Smith said the likeness was close enough to feel personal. “It looked like me, it felt like me, and it felt as violating as if someone had actually posted a nude or bikini picture of me,” she said. “Women are not consenting to this.”

How Grok Is Being Used on X
Grok is a free AI assistant, with additional features available through paid subscriptions, that responds to prompts when users tag it in posts on X. While it is often used to add context, humour, or commentary to discussions, users can also upload images and request AI-powered edits.
The BBC said it had seen multiple examples of users asking Grok to “undress” women by digitally altering their photos to make them appear in bikinis or sexual situations. In some cases, once a woman shared that her image had been manipulated, other users went on to ask Grok to generate further altered versions of her.
Smith said she initially posted about her experience to raise awareness, only to find that the attention encouraged further abuse. “Instead of stopping, people used it as a prompt to do it again,” she said.

Silence From xAI, Policy Questions Remain
xAI did not provide a substantive response to the BBC’s request for comment. Instead, it issued an automatically generated reply stating that “legacy media lies.”
This lack of engagement has drawn criticism from legal experts and campaigners, especially given that xAI’s own acceptable use policy explicitly prohibits “depicting likenesses of persons in a pornographic manner.”
Grok has faced criticism before. It was previously accused of generating a sexually explicit AI clip involving Taylor Swift, reigniting debate over how generative AI tools are moderated and deployed at scale.

Regulators Under Pressure to Act
A spokesperson for the UK Home Office said the government is legislating to ban so-called “nudification tools.” Under proposed laws, anyone who creates or supplies technology used to generate non-consensual intimate images could face prison sentences and substantial fines.
Meanwhile, Ofcom, the UK’s communications regulator, said technology platforms are required to assess and mitigate the risk of UK users encountering illegal content. While Ofcom did not confirm whether it is currently investigating X or Grok, it reiterated that creating or sharing non-consensual intimate images—including AI-generated sexual deepfakes—is illegal.
Platforms must also act quickly to remove such content once they become aware of it, the regulator said.

‘This Abuse Is Preventable’
Legal experts argue that the problem is not a technological limitation but a lack of enforcement.
Durham University law professor Clare McGlynn said platforms like X and tools like Grok “could prevent these forms of abuse if they wanted to,” but appear to operate with little fear of consequences.
“The platform has been allowing the creation and distribution of these images for months without taking any action,” she said. “We have yet to see any meaningful challenge by regulators.”

A Growing AI Safety Debate
The incident highlights a broader issue facing generative AI platforms: the ease with which powerful image-editing tools can be misused, particularly against women. As AI capabilities accelerate faster than regulation, critics argue that safeguards, moderation, and accountability are lagging behind.
For Smith, the experience underscores how AI-driven abuse can feel just as real as offline harm. “This isn’t harmless experimentation,” she said. “It’s about consent, dignity, and the right not to be turned into a sexual object by someone else’s prompt.”
As governments move toward stricter AI regulation, cases like this are likely to play a key role in shaping how far platforms are held responsible for what their tools enable.
