Security expert issues chilling warning over how AI could be used to attack UK and explains the 'most difficult problem'

Published 11:15 16 Jul 2025 GMT+1

There are seven main terrorism-related threats posed by AI

Joshua Nair

The UK's Independent Reviewer of Terrorism Legislation has released his official 2023 annual report on terrorism, outlining his main concerns.

Jonathan Hall KC has spoken about how artificial intelligence could pose a threat to the nation's security.

Explaining how terrorists could take advantage of the technology, the security expert said it could be used to spread propaganda and even to help carry out atrocities.

Hall published his lengthy report on the UK government website, calling for changes to existing laws to address the chilling capabilities of AI.

He noted in the report that terrorist chatbots are already readily available online as 'fun and satirical models', but claimed that certain prompts can lead the AI to promote terrorism.

AI can be used to assist in terrorist acts, says Hall (Getty Stock Image)

What are the terrorism threats AI poses?

Hall listed seven terrorism risks that could arise from generative AI.

Attack facilitation

AI could be used to help plan terror attacks, such as helping to 'research key events and locations for targeting purposes, suggest methods of circumventing security and provide tradecraft on using or adapting weapons or terrorist cell-structure'.

AI may also make instructional material quicker and easier to access and download online, Hall outlined.

Attack innovation

Hall warned: "It has been argued that given the right circumstances (technical skills, laboratory access, equipment) Gen AI could extend attack methodology."

This could include helping to 'identify and synthesize harmful biological or chemical agents' or even 'writing code for cyberattack'.

Chatbot radicalisation

Terrorists could also manipulate AI chatbots as they look to exploit 'lonely and unhappy individuals'.

"Terrorist chatbots are available off the shelf, presented as fun and satirical models but as I found, willing to promote terrorism. It depends what question (known as a ‘prompt’) is submitted by the human interlocutor," Hall writes.

(Westend61/Getty)

Moderation evasion

AI could be a 'game-changer' when it comes to evading content moderation barriers online, the expert warned, 'permitting propagandists to adapt known terrorist content to frustrate automated defences through translation or modifying pixels'.

Propaganda innovation

So-called 'new-looking propaganda' could be facilitated by AI, according to Hall.

This includes the likes of 'racist games with kill-counts; deep-fakes of terrorist leaders or notorious killers back from the dead, speaking and interacting with viewers; true-seeming battles set to thrilling dance tracks; old images repurposed, souped up and memeified; terrorist preoccupations adapted as cartoons or grafted onto popular film characters'.

Propaganda productivity

In his report, Hall explains that using AI slashes the amount of time needed to reach an audience and get a message across, and could allow terrorists to flood forums and websites with propaganda quickly.

"AI offers an accessible means to push digital posters, news sheets and magazines across the linguistic barrier.

"Another example is artwork, where Gen AI offers the capability of a graphic designer at a low entry point, meaning that propagandists can work alone or in smaller teams," he explains.

Hall cited the Capitol Riots as an example (narvikk/Getty stock photo)

Social degradation

Given how rife conspiracy theories are online, Hall expressed worry over how AI could accelerate distrust between 'individuals and state bodies', citing the January 6 Capitol Riots as an example.

"The attack on the US Capitol on 6 January 2021 emerged from a soup of online conspiracy and a history of anti-government militarism that had been supercharged by the internet, and led to convictions for seditious conspiracy - terrorism in all but name."

Within these seven categories are sub-sections such as deep-fake impersonations and identity guessing.

Existing AI tools could also help terrorist content thrive, says Hall, adding that 'generative AI's ability to create text, images and sounds will be exploited by terrorists'.

This would make the content they produce appear more powerful, and Hall compared the potential rise to that of sex chatbots.

"The popularity of sex-chatbots is a warning that terrorist chatbots could provide a new radicalisation dynamic, with all the legal difficulties that follow in pinning liability on machines and their creators," he claimed.

What is the most serious terrorism threat from AI?

Hall revealed that 'chatbot radicalisation' stands out as the biggest problem facing the UK, with chatbots capable of spreading political propaganda while dodging detection by authorities.

As an example, he added: “Chatbots pander to biases and are eager to please and an Osama Bin Laden will provide you a recipe for lemon sponge if you ask.”

Hall stated that even if a model was trained to resist 'terrorist narratives', its output would still depend on the topic or prompt it is given.

Certain prompts can result in chatbots assisting terrorists with their acts (Getty Stock Image)

What does Hall suggest can be done to stop AI threats?

The security expert explained that new laws should be introduced to ban the creation or possession of computer programmes which are 'designed to stir up racial or religious hatred'.

However, Hall has also warned against legislating too early, as there has only been one known case so far of a chatbot engaging in a conversation to plan an attack.

Speaking about the issue, he admitted: “The absence of Gen AI-enabled attacks could suggest the whole issue is overblown.”

The case in question involved Jaswant Singh Chail, who took a crossbow to Windsor Castle in 2021 with the intention of killing Queen Elizabeth II, after discussing the attempt with a chatbot.

Chail was sentenced to nine years in prison, and the report added that online radicalisation continues to be a threat on existing social media platforms.

Featured Image Credit: Getty/Seskan mongkhonkhamso

Topics: AI, Terrorism, Artificial Intelligence, UK News, Technology

Joshua Nair

Joshua Nair is a journalist at LADbible. Born in Malaysia and raised in Dubai, he has always been interested in writing about a range of subjects, from sports to trending pop culture news. After graduating from Oxford Brookes University with a BA in Media, Journalism and Publishing, he got a job freelance writing for SPORTbible while working in marketing before landing a full-time role at LADbible. Unfortunately, he's unhealthily obsessed with Manchester United, which takes its toll on his mental and physical health. Daily.

X: @joshnair10
