Explainability of Artificial Intelligence (XAI): Taxonomy
Supervision: Prof. Dr. Nassir Navab,
Dr. Shadi Albarqouni
Abstract
Recently, Artificial Intelligence, Machine Learning, and Deep Learning have shown promising results in various domains such as recommender systems, autonomous driving, and speech recognition. The demand for AI is growing rapidly, leading to many startups and substantial investment in AI.
This brings with it a number of concerns, such as certification and the fear of a ‘Singularity’. A main reason for these concerns is that Deep Learning models are black boxes. Most of the time, getting a model to work involves a lot of tweaking and trial and error; once it works, we can try to interpret why it works, but it is very difficult to predict the effect of a change before trying it. The new European General Data Protection Regulation (GDPR, together with ISO/IEC 27001), entering into force on May 25th, 2018, will make black-box approaches difficult to use in business, as it requires that results can be made re-traceable on demand.
To achieve this, we need to generate the underlying explanatory structures of a model, i.e. explanations of why the model produced a given result; this is the aim of Explainable AI. Such explanations are needed in every field before machines can be fully trusted, and the medical domain is one of the fields that needs precision and explainability the most. The goal of this project is to categorize the different approaches to explainable AI. Based on the results of this survey, some of the approaches will be implemented on an available dataset to verify the findings, along the lines of the sketch below.
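As an illustration of the kind of approach that could be implemented, the following is a minimal sketch of a vanilla gradient saliency map in PyTorch (listed in the requirements). The specific model (a torchvision ResNet-18), input size, and class selection are assumptions made purely for illustration and are not prescribed by the project description.

```python
# Minimal sketch of a gradient-based saliency explanation (assumed example,
# not part of the project description).
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)  # assumed example model
model.eval()

# Placeholder input image; a real dataset sample would be used in practice.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(x)
target_class = scores.argmax(dim=1).item()

# Back-propagate the score of the predicted class to the input pixels.
scores[0, target_class].backward()

# The per-pixel gradient magnitude indicates how strongly each pixel
# influences the prediction -- a simple, model-specific explanation.
saliency = x.grad.abs().max(dim=1)[0]  # shape: (1, 224, 224)
```

Such gradient-based maps are only one family of methods; model-agnostic techniques (e.g. perturbation-based explanations) would be categorized and compared in the same way.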
Requirements:
- Good understanding of statistics and machine learning methods.
- Very good programming skills in Python and TensorFlow / PyTorch
Location:
Literature
Resultant Paper
http://livingreview.in.tum.de/XAI/