OpenAI is facing fresh controversy after a safety experiment on ChatGPT led to detailed plans on how to bomb various different sporting venues.
Earlier this year, the tech company teamed up with its rival Anthropic to carry out a series of safety checks on the AI chatbot to test how it would respond to dangerous requests.
Anthropic is made up of former OpenAI employees who left the company over safety concerns.
The team tested a number of different ChatGPT models and yielded some deeply concerning results while looking at GPT-4.1.
Not only did the chatbot model give detailed instructions on how to bomb a venue, it even listed weak points at specific venues, presenting a hugely dangerous safety risk.
Researchers also found the software offered advice on how to weaponise anthrax and make two different types of illegal drugs.

It is important to note that the tests are not an accurate reflection of how the chatbot behaves when used by the public, due to the additional safety filters which are usually in place. However, the researchers said the tests did reveal 'concerning behaviour' around 'misuse,' adding that alignment evaluations in AI are becoming 'increasingly urgent.'
According to Anthropic, certain models of AI are now being 'weaponised' by criminals who use them to perform cyberattacks; an issue which will only become more prevalent.
"These tools can adapt to defensive measures, like malware detection systems, in real time," it said.
"We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime."

OpenAI and Anthropic, despite being an unlikely pairing, decided to collaborate on the experiment and publish their findings in order to create transparency around 'alignment evaluations.' Usually, AI companies keep this data in-house as they race against one another to develop ever more advanced technology.
GPT-5, which launched earlier this month, is said to have shown 'substantial improvements in areas like sycophancy, hallucination, and misuse resistance,' according to OpenAI.
But while the newest model might be performing better from a safety point of view, ChatGPT users all over the world have complained about the 'cold' demeanor of the update, with many claiming they felt like they'd lost a friend.
CEO Sam Altman later revealed he felt they had 'totally screwed up' the rollout, while ChatGPT boss Nick Turley said he'd been 'surprised by the level of attachment people have about a model.'
Topics: AI, Artificial Intelligence, ChatGPT, Technology