MaLessforget

Chair for Computer Aided Medical Procedures & Augmented Reality
Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality

Continual and incremental learning with less forgetting strategy

Supervision: Prof. Dr. Nassir Navab, Dr. Seong Tae Kim

Abstract

Recently, deep learning has achieved great success in various applications such as image recognition, object detection, and medical image analysis. In real-world deployment, however, the amount of training data (and sometimes the number of tasks) keeps growing, and the data cannot always be provided all at once. In other words, a model needs to be trained over time as the data collection of a hospital (or of multiple hospitals) grows. New types of lesions may also be defined by medical experts, and the pre-trained network then needs to be trained further to diagnose these new lesion types on the enlarged dataset. 'Class-incremental learning' is a research area that aims to train a learned model on new tasks while retaining the knowledge acquired in past tasks. This is challenging because deep neural networks tend to forget previous tasks when learning new ones (catastrophic forgetting). Moreover, in realistic scenarios it is difficult to store all of the training data used in earlier training stages, due to the privacy constraints on medical data. In this project, we will develop a solution to this problem for medical applications by investigating an effective and novel learning method.
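
The project description does not fix a particular method. Purely as a minimal sketch, not the project's actual approach, a distillation-based "less forgetting" objective in the spirit of Learning without Forgetting can be written in PyTorch as follows; the function names, the weight lam, and the temperature are illustrative assumptions.

    # Minimal sketch of a distillation-based "less forgetting" step (illustrative only).
    import torch
    import torch.nn.functional as F

    def distillation_loss(new_logits_old, old_logits, temperature=2.0):
        # Knowledge distillation: match the updated model's predictions on the
        # previously learned classes to those of the frozen old model.
        log_p_new = F.log_softmax(new_logits_old / temperature, dim=1)
        p_old = F.softmax(old_logits / temperature, dim=1)
        return F.kl_div(log_p_new, p_old, reduction="batchmean") * temperature ** 2

    def incremental_step(model, old_model, loader, optimizer, n_old_classes, lam=1.0):
        # One pass over data of the new classes; the distillation term penalizes
        # forgetting of the old classes without storing any old training images.
        old_model.eval()
        for images, labels in loader:
            with torch.no_grad():
                old_logits = old_model(images)[:, :n_old_classes]
            logits = model(images)
            ce = F.cross_entropy(logits, labels)                           # learn new classes
            kd = distillation_loss(logits[:, :n_old_classes], old_logits)  # retain old classes
            loss = ce + lam * kd
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

Before each incremental step, the previous model would be frozen as a copy (e.g. old_model = copy.deepcopy(model)); exemplar memories or additional regularizers could be combined with such a baseline.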

Requirements:

  • Good understanding of statistics and machine learning methods.
  • Very good programming skills in Python and TensorFlow / PyTorch.

Location:

  • Garching


ProjectForm
Title: Continual and incremental learning with less forgetting strategy
Student: Afshar Kakaei
Director: Prof. Dr. Nassir Navab
Supervisor: Dr. Seong Tae Kim
Type: Master Thesis
Area: Machine Learning, Medical Imaging
Status: finished
Start:  
Finish:  
Thesis (optional):  
Picture:  

