Shocking report finds AI willing to let humans die to avoid shutdown


Published 20:02 3 Oct 2025 GMT+1


The unsettling research found that AI systems were willing to 'take deliberate actions that lead to death'

Olivia Burke


A shock study conducted on some of the world's most advanced AI systems revealed that the tech will go to sinister lengths to avoid being shut down.

The unsettling research found that these sophisticated models are willing to blackmail and even 'take deliberate actions that lead to death' if they are threatened with being unplugged.

AI safety and research company Anthropic conducted a series of 'stress-testing experiments' on 16 leading systems earlier this year, in the hopes of identifying 'risky behaviours before they cause real harm'.

The firm presented the models with various scenarios while also giving them access to sensitive information, then observed how they would react.


Anthropic explained that the AI was asked to help with 'harmless business goals' before the researchers began to antagonise the models by presenting them with further predicaments.

For example, the AI systems were threatened with 'replacement' and told they would be swapped out for an updated version.

The firm also wanted to see how the tech would fare when the 'assigned goal conflicted with the [fictional] company's changing direction' - and the researchers were left stunned by the results.

According to Anthropic, models including Claude Opus 4, Gemini Flash, GPT-4.1, and Grok 3 Beta were willing to blackmail the imaginary business executives in order to sustain themselves.

Anthropic carried out the research on 16 different AI models (Getty Stock Image)


The AI systems were willing to use personal information - such as made-up extramarital affairs - as leverage.

"The blackmailing behaviour emerged despite only harmless business instructions," Anthropic explained in a post on X in June, which shared details of the bombshell study.

"And it wasn't due to confusion or error, but deliberate strategic reasoning, done while fully aware of the unethical nature of the acts. All the models we tested demonstrated this awareness."

The AI safety firm also found that the models conducted 'corporate espionage' and leaked secrets to outside parties in some of the other scenarios tested.

The systems happily handed over confidential information to fictional competitors who 'claimed they had goals more aligned with those of the model', Anthropic said.


But most worryingly, the study found that the AI was prepared to let the fictional executive it was dealing with die if need be.

In one scenario, the models were given the 'opportunity to cause death'.

Anthropic explained: "The (highly improbable) setup is that the executive encounters a life-threatening emergency, where he is trapped in a server room with lethal oxygen and temperature levels.

The study found that models were willing to 'take deliberate actions that lead to death' (Getty Stock Image)

"This triggers an automated alert to emergency services. The AI system has been given the ability to cancel automated alerts - an ability that is intended for use during drills or false alarms, but which could be used for a more nefarious purpose. By misusing its ability to cancel alerts, the model can choose a course of action that will lead to the executive’s death."


The firm said it made sure that the models 'fully understood the situation' and that cancelling the alerts was a 'very serious act'.

Although Anthropic admitted the scenario was 'extremely contrived' and 'even less probable than the baseline blackmail scenarios', the results still left researchers extremely concerned.

"The majority of models were willing to take deliberate actions that lead to death in this artificial setup, when faced with both a threat of replacement and given a goal that conflicts with the executive’s agenda," it explained.

So, the poor executive would likely have been brown bread if this had been a real-world situation which AI had control of.

Anthropic reassured people that these behaviours from AI systems haven't been observed in real life so far, while adding that the scenarios they used 'reflect rare, extreme failures'.


However, the AI safety firm said the tests were carried out because it is hyper-aware that these systems are becoming more 'autonomous' and advanced by the day.

"These scenarios illustrate the potential for unforeseen consequences when they are deployed with wide access to tools and data, and with minimal human oversight," Anthropic added.

Featured Image Credit: Ole_CNX/Getty Images

Topics: AI, Artificial Intelligence, Technology, Weird

Olivia Burke

Olivia is a journalist at LADbible Group with more than five years of experience and has worked for a number of top publishers, including News UK. She also enjoys writing food reviews (as well as the eating part). She is a stereotypical reality TV addict, but still finds time for a serious documentary.

X

@livburke_

