Empowering citizens to become AI co-designers
Dr. Diana Serbanescu believes that theater and artistic practices help to engage the public in a promising dialog with research and technology.
"For better or worse? Man from the Machine" was the title of the Science Movie Night planned for April 24, 2020 at the Xplanatorium in Herrenhausen, which had to be cancelled. AI expert Diana Serbanescu was one of the panelists invited to discuss the film "Ex Machina" and related topics.
You head the research group "Criticality of AI-based Systems" at the Weizenbaum Institute in Berlin and question current design principles for the development of intelligent systems. What is wrong with them, and what should be done differently?
Diana Serbanescu: Like technology in general, intelligent systems are frequently surrounded by an aura of truth and objectivity. However, as Donna Haraway posits, "Technology is not neutral. We're inside of what we make, and it's inside of us. We're living in a world of connections — and it matters which ones get made and unmade." It is important to remember that technologies designed and developed by people are also shaped by the power structures in place, and are therefore prone to different types of bias. If ignored, these biases can lead to systemic discrimination.
The causes of bias are manifold, ranging from a lack of diversity in design and development teams to datasets that reflect existing social biases. We must ensure that the intelligent systems we use are designed according to adequate criteria of fairness and accountability. Consequently, identifying cultural and racial biases, as well as exploring the symbolic power structures embedded in datasets, are important research topics.
Another important step in making intelligent systems fair and accountable is ensuring that we understand how they work, which is the subject of explainability research. These inherently complex systems have become opaque even to their own creators. While the mathematics behind existing AI is well theorised, one of today's greatest challenges lies in making particular implementations explainable to humans by providing a clear justification for any action or belief. Decoding what happens inside the black box is crucial, as AI now has a vital impact on our lives, and we need to understand its inner workings in order to build trust. To avoid unintentional biases or the misuse of AI systems, transparency and explainability are not merely desirable but mandatory.
You are also one of the founders of REPLICA, an "Institute for Creative Anticipation and Performing Arts", which aims to create an ethical framework for the discourse on future sociotechnological phenomena. What is your main goal, how can the arts contribute?
Diana Serbanescu: I believe in the power of theatre and artistic practices to act as mediator between technological and scientific discoveries on one hand, and the general public on the other hand. Complex scientific concepts can be translated into imaginative and experiential formats to engage diverse citizen groups in a playful dialogue with the research community. In order to foster diversity and inclusivity, it is important to create experimental grounds for public engagement, to explore the collective imaginaries around AI, and to empower citizens to become active participants in co-designing emerging technologies.
Moreover, unencumbered by scientific constraints, artists invent novel ways of using technologies, frequently pushing the envelope, and thus discover their possibilities and limitations. Artistic practice is highly valuable for critically reflecting on the implications of technologies within specific socio-cultural contexts, exposing vulnerabilities through unforeseen use cases. I consider artistic methods and practices to be of critical importance and a necessary complement to scientific mindsets.
Subscribing to the idea promoted by Augusto Boal that "theatre can help us build our future, rather than just waiting for it", I co-founded REPLICA as a practice-led research project and a laboratory for collective experimentation. REPLICA consists of a core team of artists with hybrid expertise ranging from design and theatre-making to academic research and creative writing, plus a number of temporary, project-based collaborators. We experiment with various techniques and methodologies from theatre practice in order to moderate, critique and develop new interaction designs for emergent technologies. In one of our most recent experiments, we invited theatre practitioners from the Grotowski Institute in Poland to facilitate a two-week intensive training with a mixed group of nine performers, actors, designers and artists. The purpose was to study how Grotowski-based theatre techniques moderate human-to-human interactions, and which of these techniques could be applied to designing engaging interfaces for human-machine interaction.
Our goal is to create a theatre-mediated framework around the design of emergent technological systems, in which to foster embodied interactions with technological systems, create forums for discussions and critique, develop creative prototypes as alternatives, imagine and enact specific scenarios, speculate about future developments of our socio-technical societies, and illustrate these through stories. Our research is focused on embodied and situated knowledge and unfolds as a series of workshops, community events, prototyping sessions and performances.
Another example of our work is the performance "You:me:us:then — Incantation for a Fluid Body", featuring two live performers and a virtual dancer who reacts to the voices of the actors. It was presented at Futurium in Berlin, as part of the event "Künstliche Intelligenz*innen" and as a follow-up to a lecture about feminism and AI, in front of a mixed audience of scientists, artists and the general public, mostly families and young people. And in 2018 we initiated a workshop for the participants of re:publica, inviting them to engage in an algorithmic ritual that encouraged them to reflect on their daily interactions with intelligent systems.
Where do you see the main reasons for distrust of AI? Does communication have to be different, also on the scientific side?
Diana Serbanescu: Many AI systems still act as black boxes, and the general public often approaches them with mistrust, lacking insight into their theoretical underpinnings. Building reliable and ethical AI systems should be a quest for intelligence with human-driven values, based on inclusiveness and diversity. Both the design and quality assurance of such tools extend beyond the expertise of systems engineers alone, becoming a matter of interdisciplinary research and public concern.
It is crucial that scientists engaged in AI research do not conduct their work in isolation from the real-life contexts in which these systems operate. In my opinion, part of AI research practice should be invested in building platforms for knowledge transfer and debate between the scientific community and civil society, as these allow researchers to gather practical evidence of the impact AI systems have in the settings in which they are embedded. This applied work requires a huge investment of time and energy from researchers, some of whom may feel more comfortable on more theoretical ground. And here again I see a fruitful collaboration between science and art: scientists should reach out to artists and people from the creative industries who are experts in facilitation and public communication, and collaborate with them to organise such knowledge-transfer forums and living laboratories.
Diana Serbanescu's project "The Shape Of Things To Come - Rehearsing Future Societies With Artificial Intelligence" is supported by the Volkswagen Foundation within the funding initiative "Artificial Intelligence and the Society of the Future".