
Chair for Computer Aided Medical Procedures & Augmented Reality
Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality

Human-level deep neural networks with a rejection option

Supervision: Prof. Dr. Nassir Navab, Dr. Seong Tae Kim

Abstract

Recently, measuring statistical uncertainty in deep neural networks has become an important issue in safety-critical applications such as autonomous driving and computer-aided diagnosis. However, training a predictor with a rejection option without degrading its performance remains an open problem. In this project, we will investigate the feasibility of human-level deep neural networks that offer a rejection option. The main goal of human-level deep neural networks is to learn networks that know what they can and cannot do. A key challenge is to calibrate the uncertainty of the predictions while maintaining accuracy. For this purpose, a new framework that calibrates the predicted probabilities with the achieved accuracy will be developed by integrating confidence into the loss function.
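
As a rough illustration of what "integrating confidence into the loss function" could look like, below is a minimal PyTorch sketch in which the network predicts a scalar confidence alongside the class probabilities, and the loss blends the prediction toward the ground truth according to that confidence. The architecture, dimensions, and blending scheme are illustrative assumptions made for this page only, not the framework to be developed in the project (see the references below for the actual starting points).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConfidenceNet(nn.Module):
    """Toy classifier with an auxiliary confidence head (illustrative only)."""

    def __init__(self, in_dim=32, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.classifier = nn.Linear(64, num_classes)  # class logits
        self.confidence = nn.Linear(64, 1)            # scalar confidence in (0, 1)

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), torch.sigmoid(self.confidence(h))


def confidence_loss(logits, confidence, targets, lam=0.5):
    """Cross-entropy on a prediction blended toward the one-hot target by the
    predicted confidence, plus a penalty that keeps confidence from collapsing
    to zero. A simplified illustration, not the project's framework."""
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, probs.size(1)).float()
    # Low confidence lets the model lean on the ground truth, at the price of the penalty.
    blended = confidence * probs + (1.0 - confidence) * one_hot
    nll = F.nll_loss(torch.log(blended + 1e-12), targets)
    penalty = -torch.log(confidence + 1e-12).mean()
    return nll + lam * penalty


# Shape check on random data (no meaningful training signal).
model = ConfidenceNet()
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
logits, conf = model(x)
loss = confidence_loss(logits, conf, y)
loss.backward()
```

At test time, inputs whose predicted confidence falls below a threshold could then be rejected, which is the rejection option discussed above.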

Requirements:

  • Good understanding of statistics and machine learning methods.
  • Very good programming skills in Python and TensorFlow / PyTorch.

Location:

  • Garching

References:

[1] S. Seo et al., “Learning for Single-Shot Confidence Calibration in Deep Neural Networks through Stochastic Inferences,” CVPR 2019.

[2] Y. Geifman and R. El-Yaniv, “SelectiveNet: A Deep Neural Network with an Integrated Reject Option,” ICML 2019.

[3] P. Wang and N. Vasconcelos, “Towards Realistic Predictors,” ECCV 2018.

[4] B. Lakshminarayanan et al., “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles,” NIPS 2017.

[5] D. Hendrycks and K. Gimpel, “A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks,” ICLR 2017.

If you are interested, please contact us via e-mail: seongtae.kim@tum.de

ProjectForm
Title: Human-level deep neural networks with a rejection option
Abstract: Recently, measuring statistical uncertainty in deep neural networks has become an important issue in safety-critical applications such as autonomous driving and computer-aided diagnosis. However, training a predictor with a rejection option without degrading its performance remains an open problem. In this project, we will investigate the feasibility of human-level deep neural networks that offer a rejection option. The main goal of human-level deep neural networks is to learn networks that know what they can and cannot do. A key challenge is to calibrate the uncertainty of the predictions while maintaining accuracy. For this purpose, a new framework that calibrates the predicted probabilities with the achieved accuracy will be developed by integrating confidence into the loss function.
Student:  
Director: Prof. Dr. Nassir Navab
Supervisor: Dr. Seong Tae Kim
Type: Master Thesis
Area: Machine Learning, Computer Vision
Status: open
Start:  
Finish:  
Thesis (optional):  
Picture:  