China has developed an artificial intelligence that can reportedly identify crimes and file charges against criminals with more than 97 per cent accuracy.
The AI was developed and tested by the Shanghai Pudong People's Procuratorate and has been trained to identify Shanghai's eight most common crimes.
It was developed by a team led by Professor Shi Yong, director of the Chinese Academy of Sciences' big data and knowledge management laboratory.
As reported by the South China Morning Post, the AI cannot 'participate in the decision-making process of filing charges and suggesting sentences', but is already being used to help assess evidence and determine whether criminals are dangerous to the public.
It files charges using a description of a suspected criminal case, and the researchers believe it can 'replace prosecutors in the decision-making process to a certain extent'.
The tool was built on an existing AI called System 206, which could not identify and remove irrelevant information in a case or process human language, and so could not make sentencing decisions or file charges without human intervention.
But the new AI can identify and charge criminals in credit card fraud, gambling, reckless driving, intentional assault, theft, fraud, obstructing an officer and, most worryingly, political dissent.
System 206 has been used in processing cases since 2016, but it was not designed to be part of the decision-making process.
The learnings applied to the new AI include the ability to sort data and determine what data points are relevant to a case, which allows it to make decisions where System 206 couldn't.
Speaking to the Post, an anonymous prosecutor said the concern was that, while 97 per cent accuracy is high, there is still a chance of mistakes.
"Who will take responsibility when it happens? The prosecutor, the machine or the designer of the algorithm?" the lawyer asked.
The developers said the AI will be used to lessen the workload of prosecutors, but it is yet to be widely rolled out.
It can be used on a desktop computer and uses billions of data points stored on the system in its analysis.
It was developed using thousands of legal cases from around the world.
The prosecutor who spoke to the Post said the concern was in trying to replace human decision making with a machine.
"AI may help detect a mistake, but it cannot replace humans in making a decision," they said.
Malaysia is already using AI in sentencing, but critics have pointed out that AI is only as strong as the data it is being fed and is able to develop bias, which could be particularly dangerous in legal use.
An AI risk assessment software used in Wisconsin has already been proven to have developed a bias against offenders based on their race.
As yet, there is no timeframe for when the AI will be rolled out further, nor any clear plans for training beyond the eight crimes it can currently handle.