Robocop-type policing techniques are set to replace police on the beat within the next decade as Artificial Intelligence becomes widely used to investigate crimes.
Thames Valley Police said that AI computers, which mimic humans by making decisions themselves, will be used to answer 999 calls, detect crimes and identify offenders.
However, the force also warned of “bias” in the software and a concern that AI computers “might be unable to reason with a human”.
Zero Hedge reports:
We don’t find further clues about the subject of these biases until much later in the story, when the leader of a civil liberties group – predictably alarmed by plans to automate police forces – pointed to a study showing that the machines’ algorithms have exhibited racial bias, according to the Telegraph.
Among those alarmed by the plans was David Green, director of the Civitas think tank, who warned that the AI computers could unfairly target ethnic minority groups.
He said: “Robocop policing has now arrived in England. This Orwellian reliance on automated decisions has been found to undermine the most basic precepts of the justice system when it has been tried in America.
“An experiment in Fort Lauderdale, for example, found that the algorithm reflected human prejudices, including racial bias.”
However, the news comes as ministers prepare to publish the first ever review into how AI will change Britain over the coming decades. AI is already used by Scotland Yard to recognise faces at London’s Notting Hill Carnival. Durham Constabulary is also planning to use AI for deciding whether to keep suspects in custody.
In a submission to a Parliamentary inquiry into the implications of Artificial Intelligence, Thames Valley said that “even at the lowest level AI could perform many of the process driven tasks that take place in the police”.
AI could be used to assist “investigations by ‘joining the dots’ in police databases, the risk assessment of offenders, forensic analysis of devices, transcribing and analysis of CCTV and surveillance, security checks and the automation of many administrative tasks”, it said.
To be sure, Thames Valley police noted that even once robocops are ready for widespread use, they will still require “a high level of human oversight and clear justification”.
Its submission stated that “recent tests of AI in policing indicate there is a risk of bias perpetuation in AI outputs, therefore engagement with Privacy and Civil Rights groups will be necessary to persuade the public that everything possible is being done to mitigate this whilst doing our best to keep them safe.
“Of utmost importance is that any AI process that involves an ethical issue, must have a high level of human oversight and clear justification.
The automation of processes also introduces a risk of being unable to reason with a human when events occur outside expected parameters.”
Niamh Harris