A group of more than 100 tech leaders, including Tesla and SpaceX boss Elon Musk, has written an open letter to the United Nations warning of the dangers of "killer robots".
The signatories are predominantly experts in the fields of robotics and artificial intelligence (AI), and the letter is addressed to the UN Convention on Certain Conventional Weapons, a treaty adopted in 1980 that seeks to regulate the conduct of warfare.
The letter warns that "as companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm."
"Lethal autonomous weapons threaten to become the third revolution in warfare," it continues. "Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
"These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora's box is opened, it will be hard to close. Once this Pandora's box is opened, it will be hard to close."
Elon Musk has long held an interest in robotics and has warned about the dangers of AI before. In July he said: "AI's a rare case where I think we need to be proactive in regulation instead of reactive, because I think by the time we are reactive in AI regulation, it's too late."
Robot 'PR2' cooks a pancake in a laboratory kitchen of the Institute for Artificial Intelligence. Credit: PA
Futurologists, who study the impact of projected technologies, have long been wary of getting carried away with ideas of super-smart AI. Futurologist Dr Ian Pearson said: "Everybody in AI is very familiar with this idea - they call it the 'Terminator scenario'."
"It has a huge impact on AI researchers who are aware of the possibility of making [robots] smarter than people. But, the pattern for the next 10-15 years will be various companies looking towards consciousness. There is absolutely no reason to assume that a super-smart machine will be hostile to us."
Dr Pearson did strike a note of warning: "But just because it doesn't have to be bad, that doesn't mean it can't be. You don't have to be bad but sometimes you are. It is also the case that even if it means us no harm, we could just happen to be in the way when it wants to do something, and it might not care enough to protect us."