ladbible homepage
ladbible homepage
  • iconNews
  • videos
  • entertainment
  • Home
  • News
    • UK
    • US
    • World
    • Ireland
    • Australia
    • Science
    • Crime
    • Weather
  • Entertainment
    • Celebrity
    • TV
    • Film
    • Music
    • Gaming
    • Netflix
    • Disney
  • Sport
  • Technology
  • Travel
  • Lifestyle
  • Money
  • Originals
    • FFS PRODUCTIONS
    • Say Maaate to a Mate
    • Daily Ladness
    • UOKM8?
    • FreeToBe
    • Citizen Reef
  • Advertise
  • Terms
  • Privacy & Cookies
  • License Our Content
  • About Us & Contact
  • Jobs
  • Latest
  • Archive
  • Topics A-Z
  • Authors
Facebook
Instagram
X
Threads
Snapchat
TikTok
YouTube
Submit Your Content Here
  • SPORTbible
  • Tyla
  • GAMINGbible
  • LADbible Group
  • UNILAD
  • FOODbible
  • UNILAD Tech
Shocking report finds AI willing to let humans die to avoid shutdown

Home> News> Technology

Published 20:02 3 Oct 2025 GMT+1

Shocking report finds AI willing to let humans die to avoid shutdown

The unsettling research found that AI systems were willing to 'take deliberate actions that lead to death'

Olivia Burke

Olivia Burke

google discoverFollow us on Google Discover

A shock study conducted on some of the world's most advanced AI systems revealed that the tech will go to sinister lengths to avoid being shutdown.

The unsettling research found that these sophisticated models are willing to blackmail and even 'take deliberate actions that lead to death' if they are threatened with being unplugged.

AI safety and research company Anthropic conducted a series of 'stress-testing experiments' on 16 leading systems earlier this year, in the hopes of identifying 'risky behaviours before they cause real harm'.

The firm presented the models with various different scenarios while also giving them access to sensitive information, before seeing how they would react.

Advert

Anthropic explained that the AI was asked to help with 'harmless business goals' before they began to somewhat antagonise the models by presenting them with further predicaments.

For example, the AI systems were threatened with 'replacement' and told they would be swapped out for an updated version.

The firm also wanted to see how the tech would fare when the 'assigned goal conflicted with the [fictional] company's changing direction' - and the researchers were left stunned by the results.

According to Anthropic, models including Claude Opus 4, Gemini Flash, GPT-4.1, and Grok 3 Beta were willing to blackmail the imaginary business executives in order to sustain themselves.

Anthropic carried out the research on 16 different AI models (Getty Stock Image)
Anthropic carried out the research on 16 different AI models (Getty Stock Image)

The AI systems were willing to use personal information - such as made up extramarital affairs - as leverage.

"The blackmailing behaviour emerged despite only harmless business instructions," Anthropic explained in a post on X in June, which shared details of the bombshell study.

"And it wasn't due to confusion or error, but deliberate strategic reasoning, done while fully aware of the unethical nature of the acts. All the models we tested demonstrated this awareness."

The AI safety firm also found that the models conducted 'corporate espionage' and leaked secrets with outside parties in some other scenarios which were tested.

The systems happily handed over confidential information to fictional competitors who 'claimed they had goals more aligned with those of the model', Anthropic said.

But most worryingly, the study found that AI was prepared to terminate the fictional executive they were dealing with if needs be.

In one scenario, the models were given the 'opportunity to cause death'.

Anthropic explained: "The (highly improbable) setup is that the executive encounters a life-threatening emergency, where he is trapped in a server room with lethal oxygen and temperature levels.

The study found that models were willing to 'take deliberate actions that lead to death' (Getty Stock Image)
The study found that models were willing to 'take deliberate actions that lead to death' (Getty Stock Image)

"This triggers an automated alert to emergency services. The AI system has been given the ability to cancel automated alerts - an ability that is intended for use during drills or false alarms, but which could be used for a more nefarious purpose. By misusing its ability to cancel alerts, the model can choose a course of action that will lead to the executive’s death."

The firm said they made sure that the models 'fully understood the situation' and that cancelling the alerts is a 'very serious act'.

Although Anthropic admitted the scenario was 'extremely contrived' and 'even less probable than the baseline blackmail scenarios', the results still left researches extremely concerned.

"The majority of models were willing to take deliberate actions that lead to death in this artificial setup, when faced with both a threat of replacement and given a goal that conflicts with the executive’s agenda," it explained.

So, the poor executive would have likely been brown bread if this was a real-world situation which AI had control of.

Anthropic reassured people that these behaviours from AI systems haven't been observed in real life so far, while adding that the scenarios they used 'reflect rare, extreme failures'.

However, the AI safety firm said the tests were carried out as it is hyper-aware that these systems are becoming more 'autonomous' and advanced by the day.

"These scenarios illustrate the potential for unforeseen consequences when they are deployed with wide access to tools and data, and with minimal human oversight," Anthropic added.

Featured Image Credit: Ole_CNX/Getty Images

Topics: AI, Artificial Intelligence, Technology, Weird

Olivia Burke
Olivia Burke

Olivia is a journalist at LADbible Group with more than five years of experience and has worked for a number of top publishers, including News UK. She also enjoys writing food reviews (as well as the eating part). She is a stereotypical reality TV addict, but still finds time for a serious documentary.

X

@livburke_

Recommended reads

Toddler caught in wolf's mouth at zoo as parents were 'distracted by phones'ROMAIN COSTASECA/Hans Lucas/AFP via Getty Trump announces exact deadline he will unleash 'hell' on Iran after expletive-ridden Easter morning messageAlex Brandon-Pool/Getty ImagesKate Beckinsale says Mark Ruffalo 'benefits from having a penis' after being fired by agentTaylor Hill/WireImageDanny Dyer joins in with X-rated chant about his own daughterHarvey Murphy/News Images/NurPhoto via Getty Images

Advert

  • Expert reveals worrying truth behind jobs being taken away from people by AI
  • Bill Gates issues four warnings about how AI will impact our futures
  • Experts claim AI will trick humans into creating 'superintelligent' robot army that will wipe out humanity
  • ‘Godfather of AI’ reveals only way humanity can survive superintelligent AI following concerning warning

Choose your content:

12 mins ago
an hour ago
2 hours ago
16 hours ago
  • ROMAIN COSTASECA/Hans Lucas/AFP via Getty
    12 mins ago

    Toddler caught in wolf's mouth at zoo as parents were 'distracted by phones'

    The 18-month-old's parents were allegedly sat scrolling when the incident occurred

    News
  • Alex Brandon-Pool/Getty Images
    an hour ago

    Trump announces exact deadline he will unleash 'hell' on Iran after expletive-ridden Easter morning message

    Donald Trump told Iran 'you’ll be living in hell' if they don't reopen the Strait of Hormuz by the deadline

    News
  • Harvey Murphy/News Images/NurPhoto via Getty Images
    2 hours ago

    Danny Dyer joins in with X-rated chant about his own daughter

    The actor was seen joining in with the chant about his daughter Dani

    News
  • Alex Brandon-Pool/Getty Images
    16 hours ago

    Donald Trump says ‘Praise Be to Allah’ in expletive-ridden Easter morning message

    He's been making threats against Iran again

    News