Six years in prison for driving a car without the owner's consent and refusing to cooperate with the police: about two years ago, a court in the US state of Wisconsin sentenced Eric Loomis to a six-year term – based on an algorithm. Software called Compas, produced by Northpointe Corporation, prepared a risk prognosis for the judge in Loomis' case. The prognosis was based on the defendant's record and a catalog of 137 questions he had to answer. The algorithm concluded that Loomis would continue to be a danger to society. Although there was little to substantiate that view, the judges sent Loomis to prison – where he is still serving his sentence today, despite an appeal against the severity of the punishment.
Sentencing with the aid of artificial intelligence is by no means science fiction; it has long been practiced in Great Britain as well. It rests on the expectation of being able to assess dangers to the public more accurately, but also on the hope of faster, cheaper legal proceedings and a more rational administration of justice. Should automated decision-making also be used in German courts? The question is open.
We must decide what we consider to be ethically correct when administering justice.
But do we really want that? Should machines have a say in deciding the fate of individual people in the future? Where can they be useful, what should the legal framework look like, and where are the ethical limits? Computer scientist Professor Katharina Zweig of the Technical University of Kaiserslautern wants to shed light on these questions in a joint project with colleagues from psychology, computer science, law, and the political and social sciences.
The problems behind the software
She is well acquainted with the Compas system: she and her team at the Algorithm Accountability Lab have been examining intensively the software that put Eric Loomis behind bars for six years. Compas determines how likely it is that an offender will break the law again – and this has a great influence on the sentence the judge imposes. The software, however, raises several problems. "One snag is that algorithmic decision-making systems categorize people into risk classes, but judges have no way of seeing how the software arrives at such a decision," says Zweig. A defendant's estimated risk of reoffending may lie at the lower end of the "high risk of recidivism" class, yet the judge sees only that the defendant is assigned to this class and cannot distinguish him from the most problematic offenders at the upper end of the scale.
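A minimal sketch may help make the point. Compas's actual scoring model is proprietary, so the scores, cutoffs, and labels below are invented assumptions; the sketch only shows how binning a continuous risk score into coarse classes discards exactly the information a judge would need.

```python
# Illustrative sketch only: Compas's real scoring model is proprietary.
# We assume a continuous recidivism score in [0, 1] that is binned into
# three coarse risk classes; the cutoffs below are hypothetical.

def risk_class(score: float) -> str:
    """Map a continuous risk score to a coarse class label."""
    if score < 0.33:
        return "low risk of recidivism"
    if score < 0.66:
        return "medium risk of recidivism"
    return "high risk of recidivism"

# Two hypothetical defendants: one just above the high-risk cutoff,
# one near the top of the scale.
borderline, extreme = 0.67, 0.98

for score in (borderline, extreme):
    print(f"score = {score:.2f} -> {risk_class(score)}")

# Both lines print "high risk of recidivism": a judge who sees only the
# class label cannot tell the borderline case from the extreme one.
```

In this toy setup, scores of 0.67 and 0.98 produce the same label, which is precisely the opacity Zweig describes: the classification reaches the courtroom, while the underlying score and the reasoning behind it do not.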