To make matters worse, the Northpointe Corporation advertises its software as highly accurate, claiming that 70 percent of those classified by Compas as highly likely to reoffend really do commit further crimes. Various studies in the United States, however, have found that this holds for only about 25 percent of the offenders the software classifies as high-risk.
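The gap between the two figures is easier to grasp with a small worked example. The numbers below are purely illustrative, not taken from Northpointe or from the cited studies; they only show how the share of correctly flagged people – the positive predictive value – is computed, and how the same formula yields both 70 and 25 percent depending on how many of those flagged actually reoffend.

```python
# Illustrative only: hypothetical counts, not Northpointe's or any study's data.
# A claim like "70 percent of those flagged as high-risk reoffend" refers to
# the positive predictive value (PPV): of everyone labelled high-risk, what
# fraction actually commits another crime?

def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """Share of high-risk classifications that turn out to be correct."""
    return true_positives / (true_positives + false_positives)

# Vendor-style scenario: 700 of 1,000 flagged people reoffend.
print(positive_predictive_value(true_positives=700, false_positives=300))  # 0.70

# Study-style scenario: only 250 of 1,000 flagged people reoffend.
print(positive_predictive_value(true_positives=250, false_positives=750))  # 0.25
```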
"Here, the ethical responsibility becomes apparent. We as a society must ask ourselves what we want: if the answer is security, then more innocent people will end up behind bars purely as a precautionary measure. Or are we prepared to accept that dangerous criminals might not be punished because the software tends to decide less strictly." The software can be programmed either way. "We must therefore decide what we consider to be ethically correct when administering justice – and above all we must ensure that everyone involved is properly trained," Katharina Zweig says. This means that judges in the US, for example, would have to know more about what the software is based on in order to be able to evaluate the solutions proposed by Compas.
In essence, the questions society now has to answer have confronted every civilization throughout history: What is justice? What is revenge? What is punishment meant to achieve? "Technology is hardly the issue here," says Zweig, thereby also illustrating why it is so important to promote interdisciplinary cooperation on many of the issues surrounding artificial intelligence – and to ensure exchange between the disciplines.
Who decides how – and why?
In this context, it is essential to reconcile the different starting points and perspectives of the parties involved. In the current project, the first meeting served to establish a shared understanding of key terms among all participants. "The central question was: What actually constitutes a good decision? That means something different to a computer scientist than it does to a lawyer or a psychologist," says Anja Achtziger. The psychology professor at Zeppelin University Friedrichshafen has been working on the cognitive psychology of decision making for more than twelve years. Achtziger sees part of her initial task within the project as conveying to colleagues from other disciplines how people make decisions about people.
What actually constitutes a good decision? That has a different meaning for everyone.
"People are of course influenced by stereotypes and classifications when they make decisions. This is simply necessary, because otherwise we would be overwhelmed by the complexity of the world around us," says Achtziger. In other words: Prejudices always play a part in determining things; no one is able to free themselves from this basic truth. Some people therefore believe that technical aid in the administering of justice would be a way to achieve a more objective justice – in theory. "That is not the case, unfortunately. Because algorithms are programmed by people; the self-learning systems learn from the data we provide them with." Since these data cannot be neutral, algorithmic decision making systems are also not.
People are influenced by stereotypes and classifications when they make decisions.
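Why non-neutral data must produce a non-neutral system can be seen in a deliberately minimal sketch. The data and group names below are entirely invented stand-ins for historically skewed records; the "learner" simply memorises the most frequent outcome per group. Real ADM systems are far more complex, but the mechanism is the same: whatever imbalance sits in the training data becomes the rule.

```python
from collections import Counter, defaultdict

# Hypothetical training records: (group, convicted) pairs. The skew is
# invented for illustration and stands in for biased historical data.
training_data = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 20 + [("group_b", False)] * 80
)

# "Training": count outcomes per group.
outcomes = defaultdict(Counter)
for group, convicted in training_data:
    outcomes[group][convicted] += 1

def predict(group: str) -> bool:
    """Predict the majority outcome seen for this group in the training data."""
    return outcomes[group].most_common(1)[0][0]

print(predict("group_a"))  # True  -- the skew in the data has become the rule
print(predict("group_b"))  # False
```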
"If, for example, a system has a number of suspected perpetrators to choose from and is supposed to select who is the guilty one, then it will most likely be a black man between the ages of 18 and 22," says Achtziger. This is because in the US this group is statistically particularly associated with certain crimes, such as drug-related offences. There are many reasons for this: Afro-Americans would probably be checked more often, which of course leads to more frequent convictions. Poverty is also an important factor.
I look into the goals and motives that influence human decision-makers – and whether algorithms can map and adopt them in the same way.
In the course of the project, Achtziger would like to clarify more precisely how the way humans think and process information is reflected in the data sets used for machine learning. "I look into the goals and motives that influence human decision-makers – and whether algorithms can map and adopt them in the same way."
Compatible with human rights?
The use of algorithmic decision-making (ADM) systems also has an impact on the legal profession. As Wolfgang Schulz points out, "We must determine which legal regulations are applicable and whether ADM systems can actually comply with them." The director of the Hans Bredow Institute for Media Research in Hamburg is the legal expert in the research group and brings the necessary legal perspective to the project. "One focus will be Article 6 of the European Convention on Human Rights, which enshrines the right of every individual to a fair trial. Is the use of machines in court compatible with this article at all? How can courts and the criminal justice systems of individual countries ensure that the law is observed – and thus that human rights and dignity are respected?"