MaXAIDistillation

Chair for Computer Aided Medical Procedures & Augmented Reality


Robustness of Knowledge Transfer Methods

Supervision: Prof. Dr. Nassir Navab, Dr. Shadi Albarqouni, Azade Farshad

Abstract

Neural networks have become the solution to many problems in fields such as computer vision and natural language processing in recent years. They can achieve superhuman performance on many tasks; however, they are black boxes, which makes them difficult to rely on in sensitive applications such as medical imaging. Interpreting the internal state of neural networks helps us improve their performance or prevent errors by finding the reasons behind them. There have been various works on the interpretability of neural networks, which can generally be grouped into three categories [1]: manual visual inspection [2, 3], saliency analysis [4, 5], and statistical analysis [6, 7]. The first two categories rely on human evaluation, while TCAV [6] and NetDissect [7] can be evaluated quantitatively. The objective of knowledge transfer is to train a student network from one or multiple teacher networks [8, 9]. Generally, the student network is small and fast, while the teacher network is large and accurate.

Tasks and Goals

In this project, we aim to investigate the robustness of knowledge transfer methods based on interpretability approaches such as TCAV [6] or NetDissect [7]. This study can be useful for comparing the internal state of teacher and student networks, or for comparing the distilled student network with the same student trained in a supervised manner with labels. The experiments in this project will be done with a limited amount of data to imitate real-world conditions.
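
As a concrete illustration of the knowledge transfer setup described above, the following is a minimal sketch of Hinton-style distillation [9] in PyTorch: the student is trained against temperature-softened teacher outputs in addition to the ground-truth labels. The temperature T, the mixing weight alpha, and the helper name are illustrative assumptions rather than part of this project description.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        """Hypothetical helper mixing soft-target and hard-label losses (Hinton et al. [9])."""
        # Soft targets: KL divergence between temperature-scaled distributions,
        # rescaled by T^2 so gradient magnitudes stay comparable.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: ordinary supervised cross-entropy on the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

The interpretability side can be sketched in the same spirit. The snippet below follows the TCAV idea [6]: a concept activation vector (CAV) is the normal of a linear classifier that separates a concept's activations from random activations at a chosen layer, and the TCAV score is the fraction of examples whose class logit increases along that direction. The function names and the use of scikit-learn are assumptions made for illustration only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def compute_cav(concept_acts, random_acts):
        """Fit a linear classifier on layer activations and return its unit normal as the CAV."""
        X = np.concatenate([concept_acts, random_acts])
        y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
        w = LogisticRegression().fit(X, y).coef_[0]
        return w / np.linalg.norm(w)

    def tcav_score(class_logit_grads, cav):
        """Fraction of examples whose class-logit gradient has a positive component along the CAV."""
        sensitivities = class_logit_grads @ cav
        return float((sensitivities > 0).mean())

Comparing such concept scores (or NetDissect unit labels [7]) between the teacher, the distilled student, and the same student trained only on labels is one way the robustness comparison outlined above could be quantified.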

Requirements:

  • Good understanding of statistics and machine learning methods.
  • Very good programming skills in Python & TensorFlow / PyTorch

Location:

  • Garching

Literature

[1] Thibault Sellam, Kevin Lin, Ian Yiran Huang, Michelle Yang, Carl Vondrick, and Eugene Wu. Deepbase: Deep inspection of neural networks. arXiv preprint arXiv:1808.04486, 2018.

[2] Andrej Karpathy, Justin Johnson, and Li Fei-Fei. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078, 2015.

[3] Hendrik Strobelt, Sebastian Gehrmann, Bernd Huber, Hanspeter Pfister, Alexander M Rush, et al. Visual analysis of hidden state dynamics in recurrent neural networks. CoRR, abs/1606.07461, 2016.

[4] Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization. CoRR, abs/1610.02391, 2016.

[5] Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. Visualizing and understanding neural models in nlp. arXiv preprint arXiv:1506.01066, 2015.

[6] Been Kim, Justin Gilmer, Fernanda Viegas, Ulfar Erlingsson, and Martin Wattenberg. TCAV: Relative concept importance testing with linear concept activation vectors. arXiv preprint arXiv:1711.11279, 2017.

[7] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. arXiv preprint arXiv:1704.05796, 2017.

[8] Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in neural information processing systems, pages 2654–2662, 2014.

[9] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Resultant Paper


ProjectForm
Title: Robustness of Knowledge Transfer Methods
Student: Yousef Yeganeh
Director: Prof. Dr. Nassir Navab
Supervisor: Dr. Shadi Albarqouni, Azade Farshad
Type: Project
Area: Machine Learning, Medical Imaging
Status: finished
Start:  
Finish:  
Thesis (optional):  
Picture:  

