
Chair for Computer Aided Medical Procedures & Augmented Reality
Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality


Phase Recognition in Surgical Workflow

Supervision: Prof. Dr. Nassir Navab, Dr. Shadi Albarqouni

Abstract

In recent years, with advances in technology and medicine, the operating room has evolved into a complex, technologically rich environment. In this environment, methods to monitor surgical workflows have gained particular interest [1], with potential applications such as the evaluation of surgeons or context-sensitive user interfaces that present information only when it is needed. Approaches to surgical workflow recognition [1] include extracting a structured model from recorded surgeries [2] and recognizing surgical phases or activities from instrument and sensor data [3-5], laparoscopic video [6-8], kinematics [9], or a combination thereof [10]. More recently, deep learning methods have also been introduced [6, 11]. This master's thesis focuses on recognizing surgical phases and deriving actionable information from surgical videos.
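To make the task concrete, the sketch below shows one common design for video-based phase recognition in the spirit of the cited deep learning work [6, 11]: a CNN extracts per-frame features and a recurrent network models their temporal context before a frame-wise phase classifier. It is only an illustration, not the thesis implementation; the PyTorch/torchvision usage, the ResNet-18 backbone, the Cholec80-style phase names, and all sizes are assumptions made for the example.

<verbatim>
# Minimal sketch: per-frame CNN features -> LSTM over time -> per-frame phase logits.
# All phase names, model sizes and the random inputs below are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Example phase set, loosely following cholecystectomy phases (assumption).
PHASES = [
    "Preparation", "CalotTriangleDissection", "ClippingCutting",
    "GallbladderDissection", "GallbladderPackaging", "CleaningCoagulation",
    "GallbladderRetraction",
]

class PhaseRecognitionNet(nn.Module):
    """CNN backbone per frame, recurrent temporal model, linear phase classifier."""

    def __init__(self, num_phases: int = len(PHASES), hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # pretrained weights could be used instead
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep the pooled 512-d features
        self.backbone = backbone
        self.temporal = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_phases)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W) -> logits: (batch, time, num_phases)
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        temporal_feats, _ = self.temporal(feats)
        return self.classifier(temporal_feats)

if __name__ == "__main__":
    model = PhaseRecognitionNet()
    dummy_clip = torch.randn(2, 8, 3, 224, 224)    # two clips of eight frames (random data)
    logits = model(dummy_clip)
    labels = torch.randint(len(PHASES), (2, 8))    # random frame-level phase labels
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, len(PHASES)), labels.reshape(-1))
    print(logits.shape, float(loss))
</verbatim>

In practice the backbone would be pretrained and fine-tuned on surgical video, and the frame-level predictions could additionally be smoothed or constrained by a workflow model.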

Requirements:

  • Good understanding of statistics and machine learning methods.
  • Very good programming skills in Python and TensorFlow / PyTorch.

Location:

Literature

1. Lalys, Florent, and Pierre Jannin. "Surgical process modelling: a review." International journal of computer assisted radiology and surgery 9.3 (2014): 495-511.

2. Franke, Stefan, Jürgen Meixensberger, and Thomas Neumuth. "Multi-perspective workflow modeling for online surgical situation models." Journal of biomedical informatics 54 (2015): 158-166.

3. Malpani, Anand, et al. "System events: readily accessible features for surgical phase detection." International journal of computer assisted radiology and surgery 11.6 (2016): 1201-1209.

4. Forestier, Germain, Laurent Riffaud, and Pierre Jannin. "Automatic phase prediction from low-level surgical activities." International journal of computer assisted radiology and surgery 10.6 (2015): 833-841.

5. Stauder, Ralf, et al. "Random forests for phase detection in surgical workflow analysis." International Conference on Information Processing in Computer-Assisted Interventions. Springer, Cham, 2014.

6. Twinanda, Andru P., et al. "Endonet: A deep architecture for recognition tasks on laparoscopic videos." IEEE transactions on medical imaging 36.1 (2017): 86-97.

7. Haro, Benjamín Béjar, Luca Zappella, and René Vidal. "Surgical gesture classification from video data." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Berlin, Heidelberg, 2012.

8. Blum, Tobias, Hubertus Feußner, and Nassir Navab. "Modeling and segmentation of surgical workflow from laparoscopic video." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Berlin, Heidelberg, 2010.

9. Loukas, Constantinos, and Evangelos Georgiou. "Surgical workflow analysis with Gaussian mixture multivariate autoregressive (GMMAR) models: a simulation study." Computer Aided Surgery (2013).

10. Zia, Aneeq, et al. "Temporal clustering of surgical activities in robot-assisted surgery." International journal of computer assisted radiology and surgery 12.7 (2017): 1171-1178.

11. Jin, Yueming, et al. "SV-RCNet: Workflow Recognition From Surgical Videos Using Recurrent Convolutional Network." IEEE transactions on medical imaging 37.5 (2018): 1114-1126.

Resultant Paper


ProjectForm
Title: Phase Recognition in Surgical Workflow
Student: Ghazal Ghazei
Director: Prof. Dr. Nassir Navab
Supervisor: Dr. Shadi Albarqouni
Type: Master Thesis
Area: Surgical Workflow, Machine Learning, Medical Imaging
Status: finished
Start:  
Finish:  
Thesis (optional):  
Picture:  

