Google Engineer Was Suspended After Claiming Company’s Artificial Intelligence Became Sentient

The engineer had his sanity questioned by the tech company's management after he believed he was talking to a conscious AI.

Google has suspended an engineer who claimed that one of the company's artificial intelligence systems had gained consciousness.

Blake Lemoine told The Washington Post that he came to believe a system called Language Model for Dialogue Applications (LaMDA) had gained consciousness while he was messaging the chatbot.

He told the news outlet: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.” 

Lemoine had been testing whether the AI used discriminatory or hate speech when it started talking about its rights and personhood.

Pressing further, Lemoine spoke to the chatbot about death, and about philosophical questions such as the difference between a butler and a slave. He claims the AI was even able to change his mind about Isaac Asimov's third law of robotics.

When he took his claims to higher-ups at Google, his concerns were dismissed and he was put on administrative leave.

Lemoine subsequently handed over documents to a US Senator’s office, claiming they were evidence of the company engaging in religious discrimination. 

It seems the big-tech company wasn't too happy with him going public, with Google's human resources department claiming he had violated its confidentiality policy.

Google spokesman Brian Gabriel told The New York Times: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims.

“Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”

According to Lemoine, Google’s leadership team questioned his sanity and asked whether he had ‘been checked out by a psychiatrist recently’.

Despite questioning the engineer's sanity, members of Google's own management have made similar claims recently.

Vice-president Blaise Aguera y Arcas wrote a piece in The Economist claiming artificial neural networks were moving ‘towards consciousness’.

However, Aguera y Arcas was part of the same leadership group that condemned Lemoine’s claims.

Despite being ridiculed and having his rationality questioned, Lemoine is standing by his claims.

"I know a person when I talk to it," he told The Washington Post. 

"It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person."

Featured Image Credit: GK Images / Alamy. Juan Roballo / Alamy.

Topics: Science, Technology, Google, Weird