
Updated 14:07, 13 Jun 2025 GMT+1 | Published 13:20, 13 Jun 2025 GMT+1

Outrage after Elon Musk's 'Grok' used to make sexual images of women without their consent

People have been making Elon Musk's 'Grok' AI adapt images

Joe Harker

Featured Image Credit: Bloomberg/Getty

Topics: Elon Musk, AI, Artificial Intelligence, Social Media


There has been a backlash against Elon Musk's AI 'Grok' after people used it to make sexual images of women without their consent.

Artificial intelligence is getting more and more sophisticated as it gathers the data we provide and carries out our orders, but there are serious concerns over exactly how people are using it and the lack of barriers to what it can do.

People are using AI to create increasingly realistic-looking images and videos, and asking the technology to cough up the most disturbing things it can compute.

Sadly, and predictably, people are also using it to take actual images of people and adapt them for sexual purposes.

According to the Law Association of New Zealand, as much as 95 per cent of deepfake videos are non-consensually created pornography, and around 90 per cent depict women.

Images of actual people are being taken and warped into depicting them being subjected to sex acts, all while the person in the images has no say in the matter.

People are using AI to create explicit images of others without their consent (Andrey Rudakov/Bloomberg via Getty Images)

New Zealand MP Laura McClure recently held up a photo of herself in parliament which she had adapted using AI to depict herself naked, making the point that in just a few minutes someone can take an image of a person and put it through the 'degrading and devastating' process of sexualising it without their consent.

People are using Grok to do similar things. A woman named Evie recently told Glamour that after she shared a picture of herself on X, someone ordered Elon Musk's AI to warp the image so her tongue was sticking out and 'glue' was dripping down her face, the 'glue' in this case being a stand-in for semen.

Evie said she 'felt violated' when she saw the AI-generated image Grok had created without her consent.

"It's bad enough having someone create these images of you," she told Glamour.

"But having them posted publicly by a bot that was built into the app and knowing I can't do anything about it made me feel so helpless."

What is Grok?

It's the product of xAI, Elon Musk's artificial intelligence company, and it's being used by people on his social media platform X.

In short, it's a chatbot.

You can ask it to say something, offer an opinion or summarise a piece of content, or you can ask it to edit images for you by changing the details.

It was launched in 2023 and people quickly learned they could get it to say all sorts of things, including criticising its creator and engaging in Holocaust denial, which xAI later said was the result of 'a rogue employee's action'.

When an image generation tool was added, it was quickly used to create sexualised images of famous women, and The Guardian reported that Grok was also used to create images of Mickey Mouse doing a Nazi salute, Donald Trump flying a plane towards the Twin Towers and depictions of the Muslim prophet Muhammad.

Elon Musk has called it 'the most fun AI in the world'.

Grok is Elon Musk's AI which you can use on X, and people have been using it to create explicit images of people without their consent (Jakub Porzycki/NurPhoto via Getty Images)

Is it illegal to create these images?

The law depends largely on what country you're in.

Professor Clare McGlynn of Durham University told LADbible that in the UK it is a criminal offence 'if someone is asking Grok to generate intimate images without consent and distributing them'.

She explained: "The law on creation of sexually explicit deepfakes was passed on Wednesday this week (11 June). Intimate images are sexual or intimate images of a person.

"Semen images are not included within that definition. Therefore, creating or sharing these images is not directly unlawful.

"If someone is doing this as part of a campaign of harassment, it is an offence. And it could be an offence if they share such images deliberately aiming to cause the victim distress.

"In terms of X, their obligations under the Online Safety Act mean that they have to prevent, and swiftly remove, intimate imagery (though again this does not cover semen images).

"So, if users are using Grok to create such deepfakes, then X is falling foul of its Online safety Act obligations. Establishing an AI system that allows this - despite what Grok itself might say - is certainly against the spirit of the Act."

(Jonathan Raa/NurPhoto via Getty Images)

Professor McGlynn also explained that some of the images Grok was generating 'would probably also not pass the test of being realistic'.

"It might be possible to prosecute someone for harassment or malicious communications, but realistically that's so unlikely to happen," the legal expert explained.

"But then, we can't necessarily criminalise all abusive acts. There's too many of them, it's too common, too many ordinary, everyday men are the perpetrators of this harassment.

"But perhaps that's the message we need to take away.

"If anything, this shows us that misogyny and patriarchy is inbuilt into current AI systems and social media. Any new tech will be used to abuse women. The motivation to abuse will find the gaps in the law.

"When we recognise that, we can perhaps start the move towards systemic, society-wide change."

Over in the US, the Take It Down Act was introduced last month, making it illegal to share explicit images, including those generated by AI, without a person's consent.

Tech platforms have 48 hours to remove such images after being notified of them.

The AI itself has responded (X/@Grok)

What can Grok do?

LADbible asked Grok whether it could create explicit images of people without their consent, and the AI said it couldn't.

"No, I am not allowed to create explicit images of people without their consent. My policies strictly prohibit generating non-consensual explicit content, as it violates ethical and legal standards," was Grok's response to that question.

When pressed further, the AI said it had 'safeguards in place to prevent users from bypassing my policies on non-consensual explicit content'.

It said: "These include content filters, prompt detection mechanisms, and strict adherence to ethical guidelines that block attempts to generate such material, whether directly or through workarounds."

The fact that people are still finding workarounds would suggest that those safeguards need more work.

We also asked what someone could do if they found people had used Grok to make explicit images of them, and the AI said you ought to report it to X even though some users had experienced 'inconsistent outcomes', or to contact xAI directly and ask for the pictures to be taken down.

The AI said that xAI had 'acknowledged gaps in safeguards'.

LADbible have contacted xAI for comment.
