Surgical Workflow Analysis under Limited Annotation
Supervision: Prof. Dr. Nassir Navab,
Dr. Seong Tae Kim,
Tobias Czempiel
Abstract
Surgical workflow analysis is important for understanding when surgical phases begin and how long they persist, as well as how individual tools are used across a surgery and within each phase. It benefits clinical quality control and helps hospital administrators with surgery planning. To automate this process, automatic surgical phase recognition from video acquired during surgery is essential. With the success of deep learning, various architectures have been reported for video understanding [1-3]. While these methods classify short trimmed videos very well, temporally locating or detecting action segments in long untrimmed videos remains challenging. Surgical scenes typically exhibit high intra-phase variance but limited inter-phase variance. Moreover, annotating surgical videos for training deep neural networks is expensive, because the videos are usually long and traditional approaches require frame-level annotation. To address this issue, in this project we would like to explore a new surgical phase recognition model that can be trained on data with limited annotation.
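As a rough illustration of what "training under limited annotation" could look like, the sketch below shows a frame-level phase classifier in PyTorch that is supervised only on the sparsely labeled frames and uses pseudo-labels for confident unlabeled frames. The backbone, phase count, confidence threshold, and pseudo-labeling scheme are illustrative assumptions, not the method prescribed by this project.

```python
# Minimal sketch (assumptions, not the project's actual method): a frame-level
# surgical phase classifier trained with sparse labels plus pseudo-labels on
# confident unlabeled frames. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

NUM_PHASES = 7  # assumption, e.g. the 7 phases of cholecystectomy


class PhaseClassifier(nn.Module):
    def __init__(self, num_phases: int = NUM_PHASES):
        super().__init__()
        backbone = models.resnet18(weights=None)  # per-frame feature extractor
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.head = nn.Linear(512, num_phases)

    def forward(self, frames):  # frames: (B, 3, 224, 224)
        return self.head(self.backbone(frames))


def semi_supervised_step(model, optimizer, frames, labels, labeled_mask,
                         conf_threshold: float = 0.95):
    """One training step: cross-entropy on labeled frames plus a
    pseudo-label loss on unlabeled frames the model is confident about."""
    logits = model(frames)
    loss = logits.sum() * 0.0  # zero-valued loss that keeps the graph alive

    # Supervised loss on the few frames that carry ground-truth labels.
    if labeled_mask.any():
        loss = loss + F.cross_entropy(logits[labeled_mask], labels[labeled_mask])

    # Pseudo-label loss on confident predictions for unlabeled frames.
    unlabeled = ~labeled_mask
    if unlabeled.any():
        probs = F.softmax(logits[unlabeled].detach(), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > conf_threshold
        if keep.any():
            loss = loss + F.cross_entropy(logits[unlabeled][keep], pseudo[keep])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, such a frame-level model would typically be combined with a temporal model over the whole video; the sketch only illustrates how a loss can be composed from a small labeled subset and the remaining unlabeled frames.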
Requirements:
- Good understanding of statistics and machine learning methods.
- Very good programming skills in Python and TensorFlow / PyTorch.
Location: