Back on 30 November 2022, the release of ChatGPT was hailed as a huge technological leap in an increasingly AI-influenced world, but an investigation by TIME magazine revealed that what went on behind the scenes of the famous chatbot may not have been as swanky as it seemed.
The artificial intelligence platform, built by OpenAI, was trained by trawling through online archives consisting of billions of words from the internet – so you can imagine the colourful language it picked up.
The chatbot has been praised for opening up new possibilities for how we use AI, but while some of what it learned during its development was positive, plenty of it was not, given the darker corners of the internet.
So to make the AI less toxic, racist, homophobic and violent, OpenAI hired a firm called Sama, which employs people in Kenya, Uganda and India to moderate content for clients so it can be labelled as harmful and filtered out.

In 2023, TIME spoke to workers who said they were these content moderators, and who claimed that for less than $2 (£1.49) an hour they were expected to read graphic material depicting child sexual abuse, bestiality, murder, suicide, torture, self-harm and incest in order to build this library for OpenAI.
They also alleged that they weren't adequately warned about the distressing content they could be subjected to.
According to the investigation, this took a huge mental toll on the workers, with one describing it as 'torture'.
The worker told TIME: “By the time it gets to Friday, you are disturbed from thinking through that picture.”
In a statement at the time, an OpenAI spokesperson confirmed that Sama employees in Kenya had helped build the technology it was creating to detect harmful content, which was then built into ChatGPT.
“Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content,” the spokesperson said.
“Classifying and filtering harmful [text and images] is a necessary step in minimising the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”

Two other workers spoke to the publication, claiming they were expected to read and label between 150 and 250 passages of text during a nine-hour shift.
Each passage could range from 100 words to more than 1,000, with the employees saying they were mentally scarred by reading through them.
A Sama spokesperson disputed this, saying in a statement that workers were asked to label 70 text passages per nine-hour shift, not up to 250, and that they could earn between $1.46 and $3.74 per hour after taxes.
“The $12.50 rate for the project covers all costs, like infrastructure expenses, and salary and benefits for the associates and their fully-dedicated quality assurance analysts and team leaders,” the spokesperson said.
Employees claimed they were incentivised with commissions for meeting key performance indicators on accuracy and speed, meaning they would have to view even more harmful content to earn the bonuses, according to the outlet.
OpenAI said it did not issue productivity targets.
While Sama said it offered counselling sessions to support employees' wellbeing given the traumatic nature of the job, it ended its work for OpenAI in 2022, earlier than planned.
A Sama spokesperson said the company gave employees full notice that it was ending its contract with OpenAI early, and that they were given the opportunity to work on a different project instead.
As for OpenAI, it said Sama was responsible for managing payment and mental health provisions for its own employees.
"We take the mental health of our employees and those of our contractors very seriously. Our previous understanding was that [at Sama] wellness programs and 1:1 counseling were offered, workers could opt out of any work without penalisation, exposure to explicit content would have a limit, and sensitive information would be handled by workers who were specifically trained to do so," it said.