Uncertainty in Deep Learning for Medical Applications
Supervision: Prof. Dr. Nassir Navab,
Dr. Shadi Albarqouni
Abstract
Deep learning has become the default tool for approximating nonlinear functions. Its results are usually measured by accuracy or AUC; however, neither metric captures the uncertainty of the prediction. When dealing with systems whose decisions affect human lives, it is crucial to know to what extent the outcome can be trusted. Attempts have been made to tackle this problem in both the computer vision and the medical imaging communities. It has been established that a model can be uncertain about its decision because of its parameters (epistemic uncertainty) or because of the data it is fed (aleatoric uncertainty). Nevertheless, neither of these fully captures the uncertainty that can arise from labels provided by multiple experts. In this work, a model is proposed to quantify this specific type of uncertainty. Its behavior will be studied under different conditions and compared to the established types of uncertainty. Finally, it will be shown that this information can be used to improve the overall performance of the model.
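To illustrate the kind of uncertainty estimation the abstract refers to, the sketch below shows one common way to estimate model (epistemic) uncertainty, Monte Carlo dropout, in PyTorch. This is not the model proposed in this work; the network architecture, layer sizes, and number of stochastic samples are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Minimal sketch: epistemic uncertainty via Monte Carlo dropout.
# All names and sizes below are assumptions for illustration.
class SmallClassifier(nn.Module):
    def __init__(self, in_features=64, n_classes=2, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.ReLU(),
            nn.Dropout(p_drop),      # kept active at test time for MC dropout
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    """Run several stochastic forward passes and return the mean prediction
    together with a simple uncertainty estimate (predictive entropy)."""
    model.train()  # keep dropout active during inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                   # (n_samples, batch, n_classes)
    mean_probs = probs.mean(dim=0)          # averaged predictive distribution
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Usage: higher entropy flags inputs on which the model should not be trusted.
model = SmallClassifier()
x = torch.randn(8, 64)                      # dummy batch of 8 feature vectors
mean_probs, uncertainty = mc_dropout_predict(model, x)
print(mean_probs.shape, uncertainty.shape)  # torch.Size([8, 2]) torch.Size([8])
```

Aleatoric uncertainty and the multi-expert label uncertainty studied in this project require different modeling choices; the example above is only meant to make the notion of a per-prediction uncertainty estimate concrete.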
Requirements:
- Good understanding of statistics and machine learning methods.
- Very good programming skills in Python and TensorFlow / PyTorch.
Location:
Literature
Resultant Paper
Coming Soon!