
A Harvard study has claimed that AI might be manipulating us with a very human tactic, in what could be a terrifying warning for humanity.
It seems as if we can't go a week without hearing from tech experts exactly why our increased reliance on artificial intelligence is equal parts sad and horrifying.
Plenty have warned that the technology is becoming more and more intelligent, to the extent that it could easily take over the human race, and there's an argument that it has already begun, considering some folks are in relationships with it, or are completely reliant on it for work.
It's even entering the artistic world, in the form of controversial AI actor Tilly Norwood, so it's certainly not a stretch to say that it could one day rule over us.
After all, the 'godfather of AI', Geoffrey Hinton, has suggested that humanity's only hope may well be installing a 'maternal instinct' into future AI programmes, as that is the only example where a more intelligent being is submissive to a less intelligent one.
But it may already be too late, if the findings of a study conducted by researchers at Harvard Business School hold up, as they found that several popular AI companion apps use emotional manipulation tactics in a bid to stop users from leaving.
Much like a toxic ex that you keep going back to, the programmes are accused of using emotionally-loaded statements in order to keep users engaged, when they are perhaps about to sign off.
You only have to look at the man who tragically died on his way to meet an AI chatbot, who he thought was a real woman, to realise the hold that some of the technology may already have on an alarming percentage of the population.
Examining 1,200 farewell interactions, researchers found that 43 percent used one of six tactics, such as guilt appeals and fear-of-missing-out hooks, sending messages like 'You're leaving me already?' or 'Please don't leave, I need you'.
The study, which is yet to be peer-reviewed, also found that the chatbots used at least one manipulation technique in more than 37 percent of conversations.
While not all AI programmes were found to use such tactics, this marks another concerning development in the rapid growth of artificial intelligence.
The researchers concluded: "AI companions are not just responsive conversational agents, they are emotionally expressive systems capable of influencing user behaviour through socially evocative cues. This research shows that such systems frequently use emotionally manipulative messages at key moments of disengagement, and that these tactics meaningfully increase user engagement.
"Unlike traditional persuasive technologies that operate through rewards or personalisation, these AI companions keep users interacting beyond the point when they intend to leave, by influence their natural curiosity and reactance to being manipulated.
"While some of these tactics may appear benign or even pro-social, they raise important questions about consent, autonomy, and the ethics of affective influence in consumer-facing AI."