
Warning: This article contains discussion of rape and sexual assault which some readers may find distressing.
Grok has come under fire in recent weeks after it was caught generating nude images of underage girls.
It is not the first time the chatbot has received widespread criticism for creating content that seemingly violates its own guidelines; in early 2025 it obliged when users asked for pictures of women on the app with 'glue over their faces'.
Several women were targeted by the inappropriate prompts, which produced a number of disturbing AI-generated images.
When the problem rose to prominence in the summer of last year, xAI, the Elon Musk-founded company behind Grok, apologised.
The chatbot also came under fire for posting rape fantasies and antisemitic material, including praise of Nazi ideology.
A $200m contract signed with the US Department of Defense was thought to be a step in the right direction but things are still heading south for Grok.

The X-integrated AI chatbot apologised on Friday after a lapse in safeguards led to 'images depicting minors in minimal clothing' circulating on the platform.
The latest wave of disturbing sexual images filled Grok's public media tab, while xAI said it was working to improve its systems and prevent this from happening again.
Writing on the @Grok account on the social media site, xAI said of child sexual abuse material: "As noted, we've identified lapses in safeguards and are urgently fixing them - CSAM is illegal and prohibited."
Another post responded to worries around the 'extremely inappropriate' images, saying: "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing."
"xAI has safeguards, but improvements are ongoing to block such requests entirely," the Grok account stated.
Users on X have been successful in generating sexualised, nonconsensual AI-altered versions of existing images, with Musk reposting an AI photo of himself in a bikini in a tone-deaf response to the worrying trend.
When contacted for comment by The Guardian, xAI said: "Legacy Media Lies."
One woman, Samantha Smith, told the BBC she felt she had been 'reduced into a sexual stereotype': "Women are not consenting to this.
"While it wasn't me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me," she said.
A Home Office spokesperson said the government was working to ban 'nudification' tools and make their use a new criminal offence.

The issue has prompted authorities in France to investigate the rise in sexually explicit deepfakes generated by Grok, according to the Paris prosecutor's office.
"These facts have been added to the existing investigation into X," the prosecutor's office said to Politico, claiming that this would be punishable by two years' imprisonment and a €60,000 (£52,200) fine.
Grok is a free AI assistant on X which responds to users' prompts when tagged in a post, answering questions or editing uploaded images.
According to xAI's policy, 'depicting likenesses of persons in a pornographic manner' is prohibited.
Ofcom further stated that it is illegal to 'create or share non-consensual intimate images or child sexual abuse material', and highlighted the need for social media platforms to stop such illegal content from being created.
LADbible has reached out to xAI for comment.
Topics: Elon Musk, Artificial Intelligence, Technology, Crime, Twitter