Artificial Intelligence

Trust comes from Understanding

Norderstedt, December 11, 2019 – The German Informatics Society (GI, Gesellschaft für Informatik e.V.) and the German Federal Ministry of Education and Research have selected ten outstanding young researchers in the field of artificial intelligence. Theresa Tran, Data Scientist at Lufthansa Industry Solutions, was named an AI newcomer in the science category and has been honored for her research into explainable artificial intelligence (XAI). We spoke to Theresa Tran about her accolade and about the trust placed in the decisions of artificial intelligence.

The German Informatics Society has selected you as an AI newcomer of the year 2019. What is the award for?

As part of my Master’s thesis I explored the field of explainable artificial intelligence (XAI), a research topic still very much in its infancy. Explainable AI refers to approaches that provide explanations for the decisions made by AI systems. Let me give you an example. An AI system suggests a diagnosis to a doctor – possibly with a confidence score (e.g. 98%). The doctor could rely on this score alone. But wouldn’t it be more helpful if the AI system also stated the symptoms that led it to this diagnosis? This additional information would make it much easier for the doctor to decide whether or not to trust the suggestion.

During an internship I saw first-hand that even the best AI systems are useless if their users do not trust them. I believe that trust comes from understanding, so as part of my Master’s thesis I investigated how AI systems can be made interpretable and gave rigorous mathematical proofs that the methods actually work.

How did you go about doing this?

In concrete terms, I investigated two different methods of providing explanations. On the one hand there are Shapley value explanations, which guarantee fairness in the sense of cooperative game theory but which, in practice, take too long to calculate. On the other hand there’s LIME, which is quick in comparison, but whose results lack the fairness properties we want.
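To give a sense of what "fairness in the sense of cooperative game theory" and "too long to calculate" mean here, the standard textbook definition of the Shapley value of a feature i – a formulation not taken from the interview itself – is its average marginal contribution to the model output over all coalitions of the other features:

```latex
% Standard game-theoretic Shapley value of feature i for a model f
% over the feature set F (textbook formulation, not from the interview):
\phi_i(f) \;=\; \sum_{S \subseteq F \setminus \{i\}}
    \frac{|S|!\,\bigl(|F| - |S| - 1\bigr)!}{|F|!}
    \Bigl[ f\bigl(S \cup \{i\}\bigr) - f(S) \Bigr]
```

The sum runs over all 2^(|F|−1) subsets of the remaining features, which is why exact Shapley values quickly become infeasible to compute for models with many features.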

But with a specific choice of parameter for LIME, the two explanation methods coincide, and the resulting algorithm, called Kernel SHAP, combines the best of both worlds. However, there was no mathematically rigorous proof of this key result. I provided this proof in my Master’s thesis, thereby putting the use of Kernel SHAP on a sound footing. As far as I know, this proof is the first of its kind.
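As a practical illustration of the method discussed here, the following is a minimal sketch using the open-source shap library, which implements Kernel SHAP; the model, data set and sample sizes are placeholder choices for demonstration and are not taken from the thesis:

```python
# Minimal sketch: explaining a black-box classifier with Kernel SHAP
# via the open-source `shap` library. Model, data set and sample sizes
# are placeholder choices for illustration only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Kernel SHAP only needs a prediction function and a background data set,
# which it uses to simulate "missing" features when forming coalitions.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(model.predict_proba, background)

# Additive feature contributions (approximate Shapley values) for one
# instance; nsamples trades runtime against approximation quality.
shap_values = explainer.shap_values(X_test[0], nsamples=200)
print(shap_values)
```

Together with the explainer's expected value, the returned contributions add up (approximately, since Kernel SHAP samples coalitions) to the model's prediction for that instance, which is what makes them readable as an explanation of a single decision.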

What made you so interested in this topic?

It’s really important to me that we use artificial intelligence responsibly. XAI can help with this by providing explanations that humans can understand. There are three advantages to this: Firstly, the explanations give developers important insights into how they can improve training data and model architectures. Secondly, the explanations help to detect and remove bias. For example, we can recognize if an AI system is discriminating against people based on their gender or ethnic background, meaning XAI can help increase fairness. And last but not least, the explanations help build trust and thus ensure greater acceptance of AI systems.

In the future we will be surrounded by intelligent objects that can make decisions for us. How much of this can we control, or do we just have to trust the technology behind it completely?

That is the great thing about XAI: every prediction can be explained. My basic premise is that we as human beings can explain what we are doing, so AI should be able to do the same. But whether each and every decision needs to be made transparent and comprehensible – automated advertising, for example – is another question entirely. In any case, all decisions that affect someone personally must be comprehensible. The European GDPR also sets out legal provisions for this that must be met.

Traceability, explainability and transparency are what data and consumer protection specialists demand to ensure AI is handled responsibly. As an IT service provider, what can we do to fulfill these requirements?

We can contribute in a really substantial way. For one thing, by actively informing our customers about the possibilities – and sometimes the necessity – of XAI. I made a conscious decision to work for LHIND because I could see from the use cases how responsibly, and with how much open discussion, technology is handled here. And for another, it is important to take this discussion to the outside world too. I do a lot of traveling in my role as a communicator and have already given several presentations about XAI for LHIND. A personal highlight for me this year was the Hacker School, where together with my colleague Julian Gimbel I led a two-day AI workshop for children. I think it’s important that society as a whole has a basic understanding of AI. What can AI do? And what can it not do?

According to a study by Capgemini, customer loyalty is strengthened when AI interactions are perceived as ethical. Will the ethics of AI become a competitive factor for companies?

Yes. It is very important that the ethical aspects of using artificial intelligence are taken into consideration, because blind trust can quickly become very dangerous. How do we deal with AI when it comes to our own medical diagnosis? Or when an automated driving system takes the wheel? In terms of technology, a great deal is already possible, but the question remains what we want to implement, and how.

About Lufthansa Industry Solutions

Lufthansa Industry Solutions is a service provider for IT consulting and system integration. This Lufthansa subsidiary helps its clients with the digital transformation of their companies. Its customer base includes companies within the Lufthansa Group as well as more than 200 companies in other lines of business. The company is based in Norderstedt and employs more than 2,000 members of staff at several branch offices in Germany, Switzerland and the USA.