Sentencing with the aid of algorithms

Judith Blage

Many courts in the USA use software to assess defendants' risk of recidivism. Should AI also be used in the German judicial system? An interdisciplinary research group is now seeking answers to the provocative questions this raises.

Six years' imprisonment for using someone else's car without authorization and refusing to cooperate with the police: about two years ago, a court in the US state of Wisconsin sentenced Eric Loomis to a six-year prison term – based on an algorithm. Software called Compas, produced by Northpointe Corporation, created a social prognosis for the judge presiding over Loomis' case. The prognosis was based on the defendant's personal history and a catalog of 137 questions he had to answer. The algorithm concluded that Loomis would continue to be a danger to society. Although there was hardly anything to substantiate this view, the judges sent Loomis to prison – where he is still serving his sentence today, despite an appeal against the severity of the punishment.

Sentencing with the aid of artificial intelligence is by no means science fiction; it has long been practiced in Great Britain as well. It is based on the expectation of being better able to assess dangers to the public, but also on the hope of faster and cheaper legal proceedings and a more rational administration of justice. Should automated decision-making also be used in German courts? The question remains open.

We must decide what we consider to be ethically correct when administering justice.

But do we really want that? Should machines in the future have a say in deciding the fate of individual people? Where can they be useful, what should the legal framework look like, and where are the ethical limits? Computer scientist Professor Katharina Zweig from the Technical University of Kaiserslautern wants to shed light on these questions in a joint project with colleagues from psychology, computer science, law, and the political and social sciences.

The problems behind the software

She is well acquainted with the Compas system: she and her team at the Algorithm Accountability Lab have been intensively examining the software that put Eric Loomis behind bars for six years. Compas determines how likely it is that a perpetrator will break the law again – and this has a great influence on the sentence imposed by the judge. However, there are several problems with the software. "One snag is that algorithmic decision making systems categorize people into risk classes, but judges have no way of seeing how the software arrives at such a decision," says Zweig. A defendant's estimated risk of reoffending may lie at the lower end of the "high risk of recidivism" class, but the judge sees only that the defendant has been assigned to this class and cannot distinguish him from the most problematic offenders at its upper end.
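
To illustrate the point, consider the following minimal sketch. The scores, thresholds, and class labels are invented for illustration and do not reflect how Compas actually works; the sketch only shows how binning a continuous risk score into coarse classes hides information from the person reading the label.

```python
# Hypothetical illustration: binning a continuous risk score into classes
# discards information. Scores and thresholds are invented, not Compas'.

def risk_class(score: float) -> str:
    """Map a 0-10 risk score onto a coarse label."""
    if score < 4:
        return "low risk"
    if score < 7:
        return "medium risk"
    return "high risk"

# Two defendants with clearly different scores ...
defendant_a = 7.1   # just above the "high risk" threshold
defendant_b = 9.8   # at the very top of the scale

# ... receive exactly the same label, which is all the judge gets to see.
print(risk_class(defendant_a))  # -> high risk
print(risk_class(defendant_b))  # -> high risk
```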

To make matters worse, the Northpointe Corporation advertises its software as highly accurate, claiming that 70 percent of those classified by Compas as highly likely to reoffend really do commit crimes again. However, various studies in the United States have shown that this actually applies to only 25 percent of the offenders classified as high-risk by the software.

"Here, the ethical responsibility becomes apparent. We as a society must ask ourselves what we want: if the answer is security, then more innocent people will end up behind bars purely as a precautionary measure. Or are we prepared to accept that dangerous criminals might not be punished because the software tends to decide less strictly." The software can be programmed either way. "We must therefore decide what we consider to be ethically correct when administering justice – and above all we must ensure that everyone involved is properly trained," Katharina Zweig says. This means that judges in the US, for example, would have to know more about what the software is based on in order to be able to evaluate the solutions proposed by Compas.

In essence, the questions society now has to answer are ones that every civilization throughout history has had to address: What is justice? What is revenge? What is punishment meant to achieve? "Technology is hardly the issue here," says Zweig, thereby illustrating why it is so important to promote interdisciplinary cooperation on many of the issues surrounding artificial intelligence – and to ensure exchange between the disciplines.

Who decides how – and why?

In this context, it is essential to reconcile the different starting points and perspectives of the parties involved. In the current project, the first meeting served to establish a shared understanding of key terms among all participants. "The central question was: What actually constitutes a good decision? That has a different meaning for a computer scientist than it does for a lawyer or a psychologist," says Anja Achtziger. The psychology professor from Zeppelin University Friedrichshafen has been working on the cognitive psychology of decision making for more than twelve years. Within the project, Achtziger initially sees her task as also being to convey to colleagues from other disciplines how people make decisions about other people.

What actually constitutes a good decision? That has a different meaning for everyone.

"People are of course influenced by stereotypes and classifications when they make decisions. This is simply necessary, because otherwise we would be overwhelmed by the complexity of the world around us," says Achtziger. In other words: Prejudices always play a part in determining things; no one is able to free themselves from this basic truth. Some people therefore believe that technical aid in the administering of justice would be a way to achieve a more objective justice – in theory. "That is not the case, unfortunately. Because algorithms are programmed by people; the self-learning systems learn from the data we provide them with." Since these data cannot be neutral, algorithmic decision making systems are also not.

People are influenced by stereotypes and classifications when they make decisions.

"If, for example, a system has a number of suspected perpetrators to choose from and is supposed to select who is the guilty one, then it will most likely be a black man between the ages of 18 and 22," says Achtziger. This is because in the US this group is statistically particularly associated with certain crimes, such as drug-related offences. There are many reasons for this: Afro-Americans would probably be checked more often, which of course leads to more frequent convictions. Poverty is also an important factor.

I look into the goals and motives that influence human decision-makers – and whether algorithms can map and adopt them in the same way.

In the course of the project, Achtziger would like to clarify more precisely how the way humans think and process information is reflected in the data sets used for machine learning. "I look into the goals and motives that influence human decision-makers – and whether algorithms can map and adopt them in the same way."

Compatible with human rights?

The use of algorithmic decision making (ADM) systems also has an impact on the legal profession. As Wolfgang Schulz points out, "We must determine which legal regulations are applicable and whether ADM systems can actually comply with them." The director of the Hans Bredow Institute for Media Research in Hamburg is the legal expert in the research group and brings the necessary legal perspective into the project. One focus will be Article 6 of the European Convention on Human Rights, which enshrines the right of every individual to a fair trial. Is the use of machines in court at all compatible with this article? How can courts and the criminal justice systems of individual countries ensure that the law is observed – and thus human rights and dignity are respected?

"The way the data produced by such a system is presented to the judge is probably also important," says Schulz. For example, it might be helpful for an ADM system to generate several proposals so that judges can weigh them up against each other. If there is only one proposal, there is a higher probability that judges will simply accept it, thus leaving the decision entirely up to the system. "Bringing such considerations into line with legal requirements and formulating legal guidelines is one of my goals in the project," says Schulz.

In addition to drawing up legal guidelines, the project will also formulate guidelines for software developers and others involved in AI development. Katharina Zweig emphasizes: "If we as a society make the right decisions and adopt the right regulations now and harness the possibilities offered by the new technologies in a sensible way, artificial intelligence is a huge opportunity for mankind. But only then."

This conviction is reflected in Katharina Zweig's commitment beyond research. She is co-founder of the non-profit organization AlgorithmWatch, which monitors the development of the new technology and explains it in layman's terms – offering assessments and raising critical questions.

The funding initiative "Artificial Intelligence"

More information on the funding initiative under "Artificial Intelligence and the Society of the Future".
