
Chair for Computer Aided Medical Procedures & Augmented Reality


Explainability of Artificial Intelligence (XAI): Taxonomy

Supervision: Prof. Dr. Nassir Navab, Dr. Shadi Albarqouni

Abstract

Recently, Artificial Intelligence, Machine Learning, and Deep Learning have shown positive results in various domains such as recommender systems, autonomous driving, and speech recognition. Demand for AI is growing rapidly, driving a wave of startups and investment.

This rapid adoption raises concerns, such as certification requirements and fears of a 'Singularity'. A central problem is that deep learning models are black boxes: getting a model to work usually involves extensive tweaking and trial and error, and only once it works can we reason backwards about why it does; predicting the effect of a change before trying it is very difficult. With the European General Data Protection Regulation (GDPR) entering into force on May 25th, 2018, and standards such as ISO/IEC 27001, black-box approaches will be difficult to use in business: results must be re-traceable on demand.

To achieve this, we need to uncover the explanatory structures underlying a model, i.e., what caused it to produce a given result: Explainable AI (XAI). Explainability is needed in every field before machines can be trusted completely, and the medical domain, which demands both precision and explainability, needs it most. The goal of this project is to categorize the different approaches to explainable AI. Based on the results of this survey, some of the approaches will be implemented on an available dataset to verify the findings.
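For illustration, consider one family of approaches such a taxonomy would cover: gradient-based saliency maps, which attribute a model's prediction to the input pixels that influence it most. The sketch below assumes PyTorch; the untrained ResNet-18 and the random input are placeholders for illustration only, not part of the project itself.

import torch
import torchvision.models as models

# Minimal sketch: vanilla gradient saliency, one of the simplest
# post-hoc explanation methods an XAI taxonomy would cover.
# The untrained ResNet-18 and the random input are placeholders.
model = models.resnet18()
model.eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in "image"

scores = model(x)                        # forward pass: class scores
top_class = scores.argmax(dim=1).item()  # explain the predicted class
scores[0, top_class].backward()          # d(score) / d(input pixels)

# Saliency = largest absolute gradient across colour channels; high
# values mark pixels whose change most affects the prediction.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # 224 x 224 map

Vanilla gradients are only the simplest member of this family; the same forward-backward pattern underlies more robust variants, which is one reason gradient-based attribution forms a natural branch of the taxonomy.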

Requirements:

  • Good understanding of statistics and machine learning methods.
  • Very good programming skills in Python and TensorFlow / PyTorch.

Location:

  • Garching

Literature

Resultant Paper

http://livingreview.in.tum.de/XAI/

ProjectForm
Title: Explainability of Artificial Intelligence (XAI): Taxonomy
Abstract: Recently, Artificial Intelligence, Machine Learning, and Deep Learning have shown positive results in various domains such as recommender systems, autonomous driving, and speech recognition, and demand for AI is growing rapidly, driving a wave of startups and investment. This rapid adoption raises concerns, such as certification requirements and fears of a 'Singularity'. A central problem is that deep learning models are black boxes: getting a model to work usually involves extensive tweaking and trial and error, and only once it works can we reason backwards about why it does; predicting the effect of a change before trying it is very difficult. With the European General Data Protection Regulation (GDPR) entering into force on May 25th, 2018, and standards such as ISO/IEC 27001, black-box approaches will be difficult to use in business: results must be re-traceable on demand. To achieve this, we need to uncover the explanatory structures underlying a model, i.e., what caused it to produce a given result: Explainable AI (XAI). Explainability is needed in every field before machines can be trusted completely, and the medical domain, which demands both precision and explainability, needs it most. The goal of this project is to categorize the different approaches to explainable AI. Based on the results of this survey, some of the approaches will be implemented on an available dataset to verify the findings.
Student: Sukanya Raju
Director: Prof. Dr. Nassir Navab
Supervisor: Dr. Shadi Albarqouni
Type: Project
Area: Machine Learning, Medical Imaging
Status: finished
Start:  
Finish:  
Thesis (optional):  
Picture:  

