Artificial Intelligence: Where Does It Make Sense, and Where Should We Be More Wary?

A panel member of this year's German Research Summit, science journalist Manuela Lenzen discusses how artificial intelligence (AI) affects society. In the following interview, the author of "AI – What it can do and what awaits us" answers some fundamental questions on the topic.

Jakob Vicari spoke with philosopher Dr. Manuela Lenzen, a freelance science journalist, who writes about digitalization, artificial intelligence and cognitive research.

Do we all have to know what artificial intelligence (AI) is?

One ought to at least take a look at it. It is often said that humans should concentrate on what makes us human. But that may be exactly what the corporate world wants: that we simply leave the field to them and concern ourselves only with the creative and the social. All of us should be in a position to judge which discussions are mere panic-mongering and which dangers are real.

There's lots of hype surrounding the subject. You write that AI is like a label that makes machines of all kinds interesting. Where do you think AI starts?

There is actually no agreed definition of AI. Not everything bearing the label has anything to do with artificial intelligence. The term goes back to the mathematician John McCarthy. In 1955, he submitted an application to the Rockefeller Foundation to fund a conference: he wanted to meet with colleagues to find out how machines could solve problems, use language and learn. And the term stuck. Artificial intelligence is a catchy term – even though one should be careful to point out that it has nothing to do with intelligence in the human sense. Today, the term is mainly used for machine learning.

"Artificial intelligence" is not about intelligence in the human sense, Manuela Lenzen makes clear. (Photo: Isabel Winarsch for VolkswagenStiftung)

You've been following developments in the field for almost 20 years. How do you see the current debate?

It has changed for the better. When I started writing my book two years ago, many articles were still illustrated with the Terminator. At the time the claim was: "Superintelligence will take away our power and confine us to the rabbit hutch." That's rubbish, because it is always up to us humans to decide what artificial intelligence should do. Today, the discussion revolves more around what humans should do with AI. I think that's a good thing, especially when we discuss the danger of autonomous weapon systems or the possibilities of surveillance and manipulation.

How should we go about handling AI?

As soon as systems talk to us, they trigger anthropomorphism. In other words, we want to see them as cognitive counterparts. Just like people talk to their dogs and even their cars. We also talk to machines, even if they can't do much at all. The virtual assistant Alexa is such an example; and with humanoid robots the effect is even stronger. That's why I like to call them "confusing machines".

Where will we come across AI in everyday life?

We will see many gadgets, toys and assistance systems that help us in our daily lives, motivate us to learn and entertain us. However, I don't see a butler who will clean out the dishwasher and set the table anytime in the near future – although this is already being worked on.

Manuela Lenzen holds a doctorate in philosophy and writes as a freelance science journalist about digitalization, artificial intelligence and cognitive research. (Photo: private/Manuela Lenzen)

Too bad, I could use an intelligent robot like that. Robotics and artificial intelligence often get mixed up. But how are they connected, and what's the difference?

Of course, they belong closely together: At the research institutes, robotics and AI are often embedded in the same area. But not every robot arm on an assembly line is intelligent, and not every AI has a robot body. If robots are to become more flexible, though, they must also become more intelligent and be capable of learning.

How can the layman evaluate what AI can do?

This is still very difficult. It would be nice if people knew what they were dealing with: Where does the data go? What can a system do? Ideally, there would be a classification that everyone could understand. We also have to ask: What about nursing, where people with cognitive impairments don't know exactly what they are dealing with? Or children: Do they need to know that it's a machine they're confiding in? Perhaps this can be solved through design.

You're a philosopher. What questions interest you in this area?

Quite a lot. For one, the question of whether we understand people better when we try to recreate them – but also, of course, how artificial intelligence changes our image of humankind. We become aware of just how difficult it is to imitate human beings, which increases our self-respect. Terms like creativity and consciousness suddenly become blurred. For a philosopher, that is most interesting.

On February 14, 2019, AI was the focus of the Herrenhäuser Gespräch "Was Künstliche Intelligenz für uns Menschen bedeutet" ("What Artificial Intelligence Means for Us Humans"), organized by the Foundation. (Photo: Halfpoint - stock.adobe.com)

And, of course, AI has very concrete effects. Many people fear that intelligent machines will soon do their work. Are they right?

Humans have been working for over 2,000 years to make machines do the work for them. And what has happened? We are working more and more. We have too little time for old people, for children, for our hobbies. My ideal world would see artificial intelligence and digitalization contributing toward us all working less and having more time for the really important things. Unfortunately, developments are often more about increasing efficiency.

But do jobs get lost?

Research is divided on that, and the question of what type of jobs we're talking about is also controversial. We keep hearing that machines are taking over unskilled jobs, but it could turn out that they replace above all the semi-skilled ones. At the end of the production line, it may still be people packing the finished products into cartons, because that's cheaper.

What can we expect, taking autonomous driving as an example? 

Autonomous driving will be a long time coming, although a lot of research is going into it. The safety requirements are simply very high, and the world is full of surprising situations that autonomous vehicles are not yet able to cope with. But we will see more and more assistance systems that will hopefully help reduce the number of road accidents.

Smart weapon systems are another example. Aren't we all really against them?

I think the only people who are really against them are those who can't afford them; otherwise, the arms-race logic prevails: everyone wants to be prepared for the eventuality that others have such weapons. And this development really worries me, not least because such weapon systems lack world knowledge. What if they are activated and then decide that no human is allowed to get in the way? A conflict could escalate simply because of a programming error.

A new AI funding initiative of the Foundation aims to enable joint, integrative research approaches spanning the social sciences and engineering. (Illustration: Phonlamai Photo - shutterstock.com)

How do German politics approach AI? Is more action called for?

Politicians are currently trying to learn a lot very quickly and to understand how things will develop. I think that's good. People are also thinking about what the framework conditions should look like so that our privacy isn't compromised. As for the somewhat hesitant support from research funding and economic development organizations, I'm rather skeptical as to whether this should come from politics rather than from industry and research itself, and from us.

On January 15, 2019, the Herrenhäuser Forum "Autonomes Fahren - Segen oder Fluch" ("Autonomous Driving - Blessing or Curse"), conceived by the Foundation, examined the risks and opportunities of this emerging technology. (Photo: AndSus - fotolia.com)

What do you mean exactly?

Everyone should consider for themselves where they want to have such systems. We should ask ourselves: What should these systems be able to do? Where could they help me in my everyday life? And at the same time: Where do I not want them? How can AI contribute toward a better world? It's simply not true that all this is inevitable and that we can do nothing about it.

Dr. Manuela Lenzen holds a doctorate in philosophy and writes as a freelance science journalist on digitalization, artificial intelligence and cognition research, among others for FAZ, NZZ, Psychologie Heute, Bild der Wissenschaft and Gehirn und Geist. Her current book "Künstliche Intelligenz - Was sie kann und was uns erwartet" ("AI – What it can do and what awaits us"; available only in German; C.H. Beck, Munich, 2018) has received many positive reviews as an objective reference work.

At the 2019 Research Summit held in Berlin on March 19, experts from business, science, civil society and politics discussed the topic: Artificial intelligence – Innovation Driver of a New Generation.

Inner Circle 1, in which Manuela Lenzen took part, is called "The Social Level – Meaning and Effect of Artificial Intelligence on Society and on the German Innovation System". The event is also available as a video.