There are many examples of algorithms discriminating against particular population groups, in other words, exhibiting machine bias. Back in 2014, the US corporate giant Amazon implemented software that used artificial intelligence to rank female and male job applicants. Four years later, it was discovered that the algorithm discriminated against female applicants. In another case, Google's automatic image-sorting software captioned the photo of an African American woman "gorilla". And a New Zealand passport office rejected applications from people of Asian descent because its software judged that the applicants' eyes were closed in the submitted photos.
An algorithm that discriminates
"A particularly disturbing example is a software called Compas, which supports US judges in sentencing criminal offenders," says Hübner. Compas predicted a higher probability of recidivism for Afro-American offenders than for white persons, measuring the risk in percentage terms. This is clearly misleading – some may of course never again find themselves on the wrong side of the law.
"The results were disconcerting," says Hübner. For example, Compas gave 18-year-old Brisha Borden a score of 8 for a bicycle theft, which corresponds to a very high risk of recidivism. At the same time, a man already convicted for other thefts and more serious crimes received a clement 3 points. The obvious difference between these two people was skin colour – Brisha Borden is black.
But how does a machine come to reflect and adopt human prejudices? And how can this be prevented? "Artificial intelligence is like a knife. You can hurt yourself with it, but you can also do very useful things, like cutting vegetables," says Bodo Rosenhahn, a computer scientist and professor at the Institute for Information Processing at Leibniz University Hanover. In the BIAS research project, he and his team are developing the technical concepts needed to address the problem. Rosenhahn explains that algorithms learn from selected training data, for example from records of how people have reached decisions in the past. "But if the selection of this training data is inadequate, algorithms may actually reproduce social problems. As a computer scientist, it is my job to find suitable ways to train models and to set the mathematical conditions that guide algorithmic decisions." In many cases, though, these conditions can contradict each other.
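What "setting mathematical conditions" can mean in practice is sketched below. This is not the BIAS project's own code: the toy data, the parameter names and the particular condition chosen here (keeping the average predicted risk similar across two groups, a demographic-parity-style penalty) are assumptions made purely for illustration.

```python
# Minimal sketch: a logistic-regression-style risk model trained with an
# extra penalty that discourages the average predicted risk from differing
# between two groups. All data and parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 cases, 3 features, a binary group label and a binary outcome.
X = rng.normal(size=(200, 3))
group = rng.integers(0, 2, size=200)    # 0 or 1: the protected attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
lam = 2.0   # weight of the fairness penalty: raising it trades accuracy for parity
lr = 0.1

for _ in range(2000):
    p = sigmoid(X @ w)
    # Gradient of the ordinary cross-entropy loss.
    grad = X.T @ (p - y) / len(y)
    # Fairness condition: squared gap between the mean predicted risk of the groups.
    gap = p[group == 1].mean() - p[group == 0].mean()
    dp = p * (1 - p)
    grad_gap = (X[group == 1] * dp[group == 1][:, None]).mean(axis=0) \
             - (X[group == 0] * dp[group == 0][:, None]).mean(axis=0)
    grad += lam * 2 * gap * grad_gap
    w -= lr * grad

p = sigmoid(X @ w)
print("mean predicted risk, group 0:", round(p[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(p[group == 1].mean(), 3))
```

The penalty weight makes the tension explicit: set it to zero and the model simply fits the (possibly biased) data; turn it up and the predictions for the two groups are pulled together, at some cost in accuracy.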
Put simply, if a certain type of discrimination is prevented on one side, a new one may arise for another group of people on the other. If, for example, an algorithm is programmed to apply a stricter standard to suspected offenders, even innocent people may end up behind bars.
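The conflict can be made concrete with a little arithmetic. The sketch below uses invented numbers: two groups of equal size, the same true and false positive rates for both, so the predictor treats them identically on one criterion, but different base rates of reoffending.

```python
# Worked illustration (numbers invented for this sketch) of why fairness
# criteria can conflict: with different base rates, a predictor that has
# identical error rates in both groups still differs on another criterion,
# namely the share of flagged people who would in fact never reoffend.

def flagged_but_harmless_share(n, base_rate, tpr=0.8, fpr=0.2):
    """Among people flagged as high risk, what fraction would not reoffend?"""
    reoffenders = n * base_rate
    non_reoffenders = n * (1 - base_rate)
    flagged_reoffenders = tpr * reoffenders     # correctly flagged
    flagged_harmless = fpr * non_reoffenders    # wrongly flagged
    return flagged_harmless / (flagged_reoffenders + flagged_harmless)

# Same true and false positive rates for both groups ...
print(round(flagged_but_harmless_share(1000, base_rate=0.5), 2))  # group A: 0.2
print(round(flagged_but_harmless_share(1000, base_rate=0.2), 2))  # group B: 0.5
```

With these assumed figures, half of those flagged in group B would never reoffend, compared with one in five in group A; adjusting the model to equalise that share instead would in turn push the error rates of the two groups apart.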
Computer scientists have to curate data and program models, i.e. make decisions that have far-reaching social consequences. This is where Rosenhahn appreciates the interdisciplinary collaboration at BIAS: "I hope to get some moral guidelines and rules from the ethicists and legal scholars so that algorithms can be programmed more fairly in the future."