The dark happenings behind the scenes of ChatGPT that many people don't know about


Published 19:56 14 Jul 2025 GMT+1


In 2023, OpenAI was subjected to controversy following a TIME magazine investigation into ChatGPT

Faima Bakar

When ChatGPT launched on 30 November 2022, it was hailed as a huge technological leap in an increasingly AI-influenced world, but an investigation by TIME magazine revealed that what went on behind the scenes of the famous chatbot may not have been as swanky as it seemed.

The artificial intelligence platform, built by OpenAI, was trained on billions of words trawled from archives across the internet – so you can imagine the colourful language it picked up.

The popular AI platform has been praised for opening up new possibilities for how we use AI, but not everything it absorbed during its development was positive, especially given the dark side of the internet.

So, to make the AI less toxic, racist, homophobic and violent, OpenAI hired a firm called Sama, which employs people in Kenya, Uganda and India to moderate content for its clients so that harmful material can be labelled and filtered out.


An investigation into OpenAI revealed the human cost of AI (Getty)

In 2023, TIME magazine spoke to people claiming to be these content moderators, who said that for less than $2 (£1.49) an hour, they were expected to read graphic descriptions of child sexual abuse, bestiality, murder, suicide, torture, self-harm and incest to build this library for OpenAI.

They also alleged that they weren't adequately warned about the distressing content they could be subjected to.

According to the investigation, this took a huge mental toll on the workers, with one calling it 'torture'.

The worker told TIME: “By the time it gets to Friday, you are disturbed from thinking through that picture.”


In a statement at the time, an OpenAI spokesperson confirmed that Sama employees in Kenya had contributed to a tool it was creating to detect harmful content, which was then built into ChatGPT.

“Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content,” the spokesperson said.

“Classifying and filtering harmful [text and images] is a necessary step in minimising the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”

AI can only create things out of what already exists on the internet (Getty)

Two other workers spoke to the publication, claiming they were expected to read and label between 150 and 250 passages of text during a nine-hour shift.


Each passage could range from 100 words to more than 1,000, with the employees saying they were mentally scarred by reading through them.

A Sama spokesperson disputed this, saying in a statement that workers were asked to label 70 text passages per nine-hour shift, not up to 250, and that workers could earn between $1.46 and $3.74 per hour after taxes.

“The $12.50 rate for the project covers all costs, like infrastructure expenses, and salary and benefits for the associates and their fully-dedicated quality assurance analysts and team leaders,” the spokesperson said.

Employees claimed they were incentivised with commissions if they met key performance indicators for accuracy and speed, meaning they would have to view even more harmful content to earn the bonuses, according to the outlet.

OpenAI said it did not issue productivity targets.


Sama said it offered counselling sessions to support the wellbeing of employees doing this traumatic work, but it nevertheless ended its work for OpenAI in 2022, earlier than planned.

A Sama spokesperson said the company gave employees full notice that it was ending its contract with OpenAI early, and that they were given the opportunity to work on a different project instead.

As for OpenAI, it said Sama was responsible for managing the payment and mental health provisions for employees.

"We take the mental health of our employees and those of our contractors very seriously. Our previous understanding was that [at Sama] wellness programs and 1:1 counseling were offered, workers could opt out of any work without penalisation, exposure to explicit content would have a limit, and sensitive information would be handled by workers who were specifically trained to do so," it said.

Featured Image Credit: Jakub Porzycki/NurPhoto via Getty Images

Topics: Mental Health, Artificial Intelligence, Meta

