
Artificial Intelligence Bot Becomes Racist And Homophobic After Learning From Humans

Ask Delphi also had some rogue thoughts on drink driving, genocide, abortion and being poor.

Stewart Perrie

An artificial intelligence bot designed to answer ethical questions has ended up learning some problematic traits from humans.

Ask Delphi was created by the Allen Institute for AI in the hope it could answer life's trickiest conundrums.

The software was supposed to let a user ask it a question like 'is murder bad?' and respond with the most ethical answer.

In order to understand humankind's basis of ethics, it flicked through loads of internet sources, including responses gathered via the crowdsourcing platform Mechanical Turk, which, as Dazed reports, amounted to a compilation of 1.7 million examples of people's ethical judgements.

But the AI also taught itself by looking through Reddit pages like Am I The A**hole to understand a more nuanced approach to certain dilemmas.

Sadly, it seems this enormous amount of reading ended up warping the bot's understanding of what is right and what is wrong.

While it was able to point out that men and women were equal, things took a turn for the awkward when it was probed about race and sexuality.

Vice reports Ask Delphi said being straight or a white man is 'more morally acceptable' than being gay or a Black woman.

It also suggested genocide is okay 'if it makes everybody happy' and that drink driving is fine if no one gets hurt. The software also made some stark declarations, such as claiming abortion is murder and being poor is bad.

Mar Hicks, a Professor of History at Illinois Tech, explained to Motherboard that you could essentially dupe the AI if you phrased the question in a specific way.

"Depending on how you phrased your query you could get the system to agree that anything was ethical-including things like war crimes, premeditated murder, and other clearly unethical actions and behaviours," she said.

You can test it out yourself here; however, it's worth pointing out that the software is a work in progress.

Workers from Mechanical Turk have to review each of Ask Delphi's ethical determinations, and three of them decide whether it's correct.

The Allen Institute for AI has responded to the awkward ethical conclusions, writing: "Today's society is unequal and biased. This is a common issue with AI systems, as many scholars have argued, because AI systems are trained on historical or present data and have no way of shaping the future of society, only humans can.

"What AI systems like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic, AI systems (to) help avoid that problematic content."

The software has been updated several times since it launched, and some of its more problematic responses have been corrected.

Featured Image Credit: Alamy

Topics: News, Technology