Technology in the field of artificial intelligence is advancing at an unprecedented pace, confronting science with pressing questions: What opportunities does AI open up? What risks does it involve? Can we rely on the promise it holds? And above all: What do such new technologies mean for society – and for each and every one of us? There is an urgent need to address these and other important questions. In doing so, it is important to consider not only the technical but also the ethical, moral and normative consequences of such developments. To reach successful outcomes, the Volkswagen Foundation believes it is imperative that the engineering and social sciences bundle their competencies from the outset. With this aim in mind, it launched the funding initiative "Artificial Intelligence and the Society of the Future" in 2018 to strengthen this kind of interdisciplinary collaboration.
In the first round of the initiative, funding has been approved for five projects. They encompass interdisciplinary research groups spanning law, media studies and social sciences, computer science, molecular biology, philosophy and product design. The projects each run for three to four years. Beyond research and development, a focus lies on how these projects can live up to their responsibility towards society and make a meaningful contribution to shaping the future.
The following projects were approved:
University of Duisburg-Essen, Bielefeld University, Evangelische Hochschule Nürnberg, University of Kassel: "The implication of conversing with intelligent machines in everyday life on people’s beliefs about algorithms, their communication behavior and their relationship building" (ca. 1.5 Mio. euro)
This project will contribute to the question of how increasingly common conversations with intelligent machines influence people's mental models of such systems (depending on their transparency), human communication culture and human relationship building. The scientists seek answers to the following questions: What effects emerge when humans communicate with a "black box" for which they have no mental model? Does the fact that the machine is always at people's service and that users do not need to be polite when talking to machines lead to a coarsening of human dialogue culture? What kinds of relationships and dependencies develop over time, and will they partly be preferred over human-human relationships? To address these research questions, three scenarios will be employed. The first scenario targets the most vulnerable group, i.e. children interacting with conversational devices. The second scenario analyzes adults interacting with a health app which is able to converse with the user and – based on machine learning algorithms – presents suggestions for health-related behaviors. The third scenario will address seniors in ambient assisted living situations.
German Cancer Research Center, EMBL Heidelberg – The European Molecular Biology Laboratory, Heidelberger Akademie der Wissenschaften, Charité – Universitätsmedizin Berlin: "Individualising and democratizing cancer patient care via Artificial Intelligence: transdisciplinary solutions and normative considerations" (ca. 1.4 Mio. euro)
A key promise of personalized medicine is that, once realized, all citizens – irrespective of whether they live in cities or in rural areas – will benefit equally from state-of-the-art individualized health care. The project will employ a transdisciplinary, AI-driven approach to democratize precision medicine for prostate cancer in a regionally oriented model project targeting 8 % of the German population, which may in the future be expanded inter-regionally or internationally. It will build on machine learning methods to guide targeted treatment decisions, using deep learning classifiers that integrate longitudinally observed clinical measurements with multi-omic data. Altogether, the project represents a step towards the sharing of human, physical, and intellectual resources in healthcare consistent with social values, individuals' reasonable expectations, and the concepts of fairness, inclusivity and equality. It will additionally foster the further development of AI as a technological framework for public health governance and enable enhanced standardization of the implementation of the human right to health.
Technische Universität Kaiserslautern, Hans-Bredow-Institut für Medienforschung Hamburg, Zeppelin University Friedrichshafen, University of Birmingham: "Deciding about, by, and together with algorithmic decision making systems" (ca. 1.5 Mio. euro)
AI programs are of varying sophistication, with the most advanced employing complex "machine learning" techniques. Here, machine learning algorithms deduce decision rules from input data and store them in, e.g., decision trees or neural networks (algorithmic decision making; "ADM"). Over time, the AI tool improves itself by learning from its past decisions, correct or incorrect. The overarching aim of this project is to examine whether there are limits to this kind of ADM within the range of AI systems used today. ADM systems are becoming increasingly popular, especially within notoriously cash-strapped criminal justice systems (CJS). In the USA, major civil liberties organizations such as the ACLU have even advocated their use at all stages of the criminal process to avoid possible human biases. This growing popularity, coupled with the extremely grave potential consequences of errors in any CJS decision, makes the CJS an ideal research area for comparing three settings: how humans alone make decisions about other humans, how ADM systems alone make the same decisions about humans, and how humans in conjunction with ADM systems make such decisions – as well as for probing the limits of the use of ADM systems. A closely related question is how a given polity decides whether and how to use an ADM system within its CJS.
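To make the idea of deducing decision rules from input data concrete, here is a minimal, purely illustrative sketch (not taken from the project): a one-level "decision stump", the simplest form of a decision tree, which learns a threshold rule from labeled examples by minimizing training errors.

```python
# Hypothetical toy example of rule induction: learn a single threshold
# rule "predict 1 if value >= t" from (feature_value, label) pairs.

def learn_stump(samples):
    """samples: list of (feature_value, label) pairs with labels 0/1.
    Returns the threshold t that minimizes misclassifications."""
    candidates = sorted({v for v, _ in samples})
    best_t, best_errors = None, len(samples) + 1
    for t in candidates:
        # Count how many samples the rule "value >= t -> 1" gets wrong.
        errors = sum((v >= t) != bool(label) for v, label in samples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Toy "training data", e.g. a risk score vs. an observed outcome.
data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (8, 1)]
threshold = learn_stump(data)
print(threshold)  # -> 6: the rule "predict 1 if value >= 6" fits the data
```

Real ADM systems combine many such learned rules (full decision trees, or networks of weighted units), but the principle – rules inferred from past data rather than written by hand – is the same.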
Bauhaus-Universität Weimar, Chemnitz University of Technology, University of Southern Denmark: "RethiCare – Re-thinking Care Robots" (ca. 1.2 Mio. euro)
As robot assistance for the elderly in homes and care facilities becomes a central funding topic, ethical and methodological questions arise. What consequences does the automation of care have for the quality of life and dignity of those cared for? How can robotic assistance be designed adequately and carefully? And what challenges does this endeavor pose for robotics research in return? The aim of this project is to investigate methods and approaches for interdisciplinary collaboration that result in more appropriate designs and technical solutions for robots in the care context, concerning appearance, intelligent behavior and situatedness in the social context of use. The interdisciplinary constellation will make it possible to re-think from the ground up what care robotics should look like and 'do', using contemporary design methods in a rapid-prototyping, design-driven approach. RethiCare will investigate robots, i.e., devices with autonomously controlled degrees of freedom. Hence, the design process addresses not only shape and functionality but also the design of behavior that is inherently proactive and adaptive.
Leibniz University Hannover: "Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions" (ca. 1.4 Mio. euro)
Whether selecting applicants or granting loans – more and more decisions are made by AI techniques based on data and algorithmic processing. Via search engines, Internet recommendation systems and social media bots, these techniques also influence our perception of political developments and even of scientific findings. However, there is growing concern about the quality of AI ratings and predictions. In particular, there is strong evidence that algorithms often do not eliminate bias and discrimination in the data, but rather reinforce them, thereby exerting negative effects on social cohesion and democratic institutions. In this research project, philosophers, lawyers, and computer scientists will jointly address the question of how standards of unbiased attitudes and non-discriminatory practice can be met in big data analyses and algorithm-based decision-making. They will provide philosophical analyses of the relevant concepts and principles in the context of AI, investigate their adequate reception in pertinent legal frameworks and develop concrete technical solutions.
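The mechanism by which algorithms reinforce rather than eliminate bias can be illustrated with a deliberately simple, hypothetical example (the numbers and groups are invented for illustration): a model that merely learns the most frequent historical decision per group turns a moderate approval gap in the training data into an absolute one.

```python
# Toy illustration (hypothetical data) of bias amplification: a naive
# per-group majority rule trained on skewed historical decisions.

from collections import Counter

# Historical loan decisions as (group, approved) pairs:
# group "A" was approved 60% of the time, group "B" only 40%.
history = [("A", 1)] * 6 + [("A", 0)] * 4 + [("B", 1)] * 4 + [("B", 0)] * 6

def majority_rule(history):
    """Learn, for each group, the most frequent historical decision."""
    counts = {}
    for group, decision in history:
        counts.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = majority_rule(history)
print(model)  # {'A': 1, 'B': 0}: a 60/40 gap becomes a 100/0 gap
```

Real systems are far more sophisticated, but the underlying risk is the same: a model optimized to reproduce past decisions inherits, and can sharpen, the disparities encoded in them.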