ThesesPage

Chair for Computer Aided Medical Procedures & Augmented Reality

Diploma, Master and Bachelor Theses

Running Theses

Robotic Anisotropic X-ray Dark-field Tomography: calibration (Bachelor Thesis)

Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics.
Supervisor:
Director:Tobias Lasser
Student:Carolin Bruckmaier
start-end:2018/12/15 - 2019/04/15
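
For the AXDT project above, a minimal sketch of evaluating a band-limited real spherical harmonics expansion of a per-voxel scattering function. The coefficient layout, the degree cut-off, and the restriction to even degrees are illustrative assumptions, not the chair's actual forward-model code.

```python
# Minimal sketch (assumed conventions): evaluating a band-limited real
# spherical harmonics expansion of a per-voxel scattering function.
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(m, l, theta, phi):
    """Real spherical harmonic Y_lm, built from scipy's complex convention
    (theta: azimuth in [0, 2*pi], phi: polar angle in [0, pi])."""
    if m > 0:
        return np.sqrt(2.0) * np.real(sph_harm(m, l, theta, phi))
    if m < 0:
        return np.sqrt(2.0) * np.imag(sph_harm(-m, l, theta, phi))
    return np.real(sph_harm(0, l, theta, phi))

def eval_scattering_function(coeffs, theta, phi, degree=4):
    """Evaluate sum_{l,m} c_{lm} Y_lm(theta, phi) for even degrees up to
    `degree` (dark-field scattering is symmetric, so odd degrees are omitted)."""
    value, idx = 0.0, 0
    for l in range(0, degree + 1, 2):
        for m in range(-l, l + 1):
            value += coeffs[idx] * real_sph_harm(m, l, theta, phi)
            idx += 1
    return value

# Example: 15 coefficients cover even degrees 0, 2, 4 (1 + 5 + 9 terms).
rng = np.random.default_rng(0)
coeffs = rng.normal(size=15)
print(eval_scattering_function(coeffs, theta=0.3, phi=1.2))
```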
Octree-based line integral algorithms for X-ray Computed Tomography (Bachelor Thesis)

X-ray Computed Tomography (CT) has been essential for medical diagnostics for decades now. Based on accurate forward modeling and solving the inverse problem, tomographic reconstruction is the algorithmic framework to enable X-ray CT. Central to the forward model is the projector and back-projector pair computing discretized line integrals, which model the interactions of X-rays with the sample and the acquisition geometry. Along with high computational demands, accuracy is paramount.
Supervisor:
Director:Tobias Lasser
Student:Philipp Bock
start-end:2018/11/15 - 2019/03/15
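
For the line-integral project above, a minimal sketch of a ray-driven discretized line integral through a voxel volume by uniform sampling; the sampling scheme and step size are assumptions for illustration, not the thesis implementation. An octree-based projector would replace the dense sampling loop with a traversal that skips empty or homogeneous nodes.

```python
# Minimal sketch: approximate line integral along a source-to-detector ray.
import numpy as np

def line_integral(volume, source, detector_pixel, step=0.5):
    """Integrate `volume` along the ray from `source` to `detector_pixel`
    (both in voxel coordinates), with the step size given in voxels."""
    direction = detector_pixel - source
    length = np.linalg.norm(direction)
    direction /= length
    n_steps = int(np.ceil(length / step))
    total = 0.0
    for i in range(n_steps):
        p = source + (i + 0.5) * step * direction
        idx = np.round(p).astype(int)              # nearest-neighbour lookup
        if np.all(idx >= 0) and np.all(idx < volume.shape):
            total += volume[tuple(idx)] * step
    return total

# Toy example: a 16-voxel-thick cube gives an integral of about 16.
vol = np.zeros((64, 64, 64)); vol[24:40, 24:40, 24:40] = 1.0
print(line_integral(vol, np.array([-10.0, 32.0, 32.0]), np.array([80.0, 32.0, 32.0])))
```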
Robotic Anisotropic X-ray Dark-field Tomography: robot integration and collision detection (Master Thesis)

Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics.
Supervisor:
Director:Tobias Lasser
Student:Erdal Pekel
start-end:2018/11/15 - 2019/05/15
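
For the AXDT project above, a minimal, self-contained sketch of the orientation-extraction step mentioned in the description: a grid search over sampled directions of a reconstructed spherical scattering function f(theta, phi). The grid resolution and the assumption that orientations correspond to minima of f (rather than maxima) are illustrative conventions, not the chair's actual pipeline.

```python
# Minimal sketch (assumed convention): extract a dominant microstructure
# orientation as the direction minimizing a spherical scattering function.
import numpy as np

def dominant_direction(f, n_theta=90, n_phi=45):
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)  # azimuth
    phis = np.linspace(0.0, np.pi, n_phi)                            # polar angle
    best, best_dir = np.inf, None
    for phi in phis:
        for theta in thetas:
            v = f(theta, phi)
            if v < best:
                best = v
                best_dir = np.array([np.sin(phi) * np.cos(theta),
                                     np.sin(phi) * np.sin(theta),
                                     np.cos(phi)])
    return best_dir

# Toy scattering function whose minimum lies along the z-axis:
f = lambda theta, phi: 1.0 - np.cos(phi) ** 2
print(dominant_direction(f))
```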
Intraoperative Ophthalmic Scene Reconstruction for surgical AR (DA/MA/BA)

Ophthalmic microsurgical interventions require a high level of handling precision from the surgeon, as even small errors can damage intricate structures inside the human eye. Furthermore, the top-down view through the microscope limits the surgeon's depth perception, motivating the use of augmented reality to enhance the surgical view. With volumetric intraoperative Optical Coherence Tomography (iOCT), 3D imaging can be performed during surgery, yielding an additional source of information to guide the surgeon. However, to display this additional data to the surgeon, proper integration into the surgical view has to be considered. One intriguing way to integrate the information is the focus-and-context visualization method used by Bichlmeier et al. [1], see Figure 1.

To provide a perceptually plausible in-situ rendering of the acquired volumes as a focus-and-context visualization, the retinal surface first needs to be reconstructed as a 3D surface. This can be achieved by applying stereo block matching to a calibrated stereo image pair (a sketch of this step follows this entry). To properly position the iOCT overlay, the reconstructed surface must then be aligned to the coordinate system of the OCT volume. This, however, cannot be done preoperatively due to the complex optical setup involving the patient's own eye. Therefore, the second part of this project is to devise an online alignment that compensates for the (potentially changing) optical pathway affecting all imaging modalities. This online alignment can involve a simple surface alignment step as well as more complex algorithms that account for deformations caused by the different optical pathways of the two modalities.
Supervisor:Jakob Weiss, Federico Tombari
Director:Prof. Nassir Navab
Student:Ekaterina Kanaeva
start-end:04/2018 -
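
For the reconstruction step described above, a minimal sketch of stereo block matching on a rectified, calibrated stereo pair using OpenCV's StereoBM. The file names and the calibration parameters (focal length f, baseline B) are placeholders, not values from the actual setup.

```python
# Minimal sketch: disparity from block matching, then depth from disparity.
import cv2
import numpy as np

left = cv2.imread("microscope_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder
right = cv2.imread("microscope_right.png", cv2.IMREAD_GRAYSCALE)  # placeholder

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Depth for a rectified pair: Z = f * B / d (assumed calibration values).
f, B = 1200.0, 0.004          # focal length [px], baseline [m] -- placeholders
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```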
Deep Learning for Tool Detection and Tracking in Microsurgery (Bachelor Thesis)

The aim of this project is to investigate state-of-the-art deep learning architectures and frameworks for the detection and tracking of instruments in retinal microsurgery. An implementation of a deep learning based instrument detection workflow shall be provided at the end of the project (a sketch of a possible starting point follows this entry).
Supervisor:Hasan Sarhan, Dr. Mehmet Yigitsoy
Director:Prof. Nassir Navab
Student:Luca Alessandro Dombetzki
start-end:2018/04/01 -
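
For the detection project above, a minimal sketch of one possible starting point: running a pre-trained torchvision detector on a single video frame. The file name is a placeholder, and in the actual project such a backbone would be fine-tuned on annotated instrument data rather than used off the shelf.

```python
# Minimal sketch: off-the-shelf detector as a baseline for instrument detection.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

frame = Image.open("retinal_frame.png").convert("RGB")    # placeholder file name
with torch.no_grad():
    predictions = model([to_tensor(frame)])[0]             # boxes, labels, scores
print(predictions["boxes"], predictions["scores"])
```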
Deep Learning on visual data in the context of autonomous driving (Master Thesis)

We are offering a thesis in collaboration with BMW focusing on the application of deep learning to visual data in the context of autonomous driving. We are therefore looking for motivated students who ideally have already acquired some experience in computer vision and/or machine learning. If you are interested in working on a thesis in this field, please contact Jakob Mayr or Federico Tombari.
Supervisor:Jakob Mayr, Federico Tombari
Director:Prof. Nassir Navab
Student:
start-end: -
Distributed SLAM - Jointly mapping 3D Geometry (DA/MA/BA)

Exploring an unknown scene and self-positioning within it is a common and well-studied problem in computer vision, known as SLAM (Simultaneous Localization and Mapping). Core fields of application are autonomous cooperative robotics and vehicles as well as tracking and detection systems in the medical domain. Traditional methods target a single system, equipped with image sensors, exploring the scene and building up a map for localization (e.g. a single robot or drone moving within an unknown environment). Newer approaches also incorporate information from other sensors such as IMUs, gyroscopes or GPS. Another approach to determining the position is outside-in tracking of an object via marker tracking with external sensors, which provides the relative position of an object with respect to the tracking system.

To overcome the line-of-sight problem of outside-in tracking and the single-system limitation of traditional SLAM methods, the project aims to develop a distributed SLAM approach. Multiple systems (referred to as sensor nodes hereafter), each equipped with an image sensor, contribute to a common map of the scene for localization, while also being tracked by outside-in tracking for accuracy. Thus, accuracy and applicability can be improved with a distributed SLAM approach that combines the information of multiple sensor nodes and an external tracking system (see the sketch after this entry). Furthermore, the necessity of complicated and error-prone calibration processes for individual systems within one application scenario can be avoided.

The objective is to develop a generic distributed SLAM approach for challenging scenes and applications. Features like loop detection and closure, pose graph optimization, re-localization and mapping should be extended to a distributed approach, also enabling scalability.
Supervisor:Patrick Ruhkamp, Benjamin Busam
Director:Prof. Dr. Nassir Navab
Student:Joe Bedard
start-end: -
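
For the distributed SLAM project above, a minimal sketch (an assumed setup, not the thesis design) of one basic building block: expressing a sensor node's local map points in a common world frame by composing the node's outside-in tracking pose with its local camera pose, using 4x4 homogeneous transforms.

```python
# Minimal sketch: map local SLAM points into a shared world frame via the
# externally tracked node pose.
import numpy as np

def compose(T_ab, T_bc):
    """Compose two rigid transforms given as 4x4 homogeneous matrices."""
    return T_ab @ T_bc

def to_world(points_local, T_world_node, T_node_cam):
    """Map 3D points from a node's camera frame into the shared world frame."""
    T_world_cam = compose(T_world_node, T_node_cam)
    homo = np.hstack([points_local, np.ones((points_local.shape[0], 1))])
    return (T_world_cam @ homo.T).T[:, :3]

# Toy example: node tracked 1 m along x in the world frame, camera at node origin.
T_world_node = np.eye(4); T_world_node[0, 3] = 1.0
T_node_cam = np.eye(4)
print(to_world(np.array([[0.0, 0.0, 2.0]]), T_world_node, T_node_cam))
```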
Explained Predictions for Neural Networks (Master Thesis)

Supervisor:Christian Rupprecht, Iro Laina, Federico Tombari
Director:Prof. Nassir Navab
Student:Sharru Möller
start-end: -
Meta-Learning in Medical Imaging (IDP)

Medical image classification has become competitive with human experts in some domains. However, its success seems contingent on the availability of large bodies of annotated data, and it is often difficult and expensive to acquire such datasets. High data requirements are a general issue in modern deep learning, and increasing sample efficiency is one of the fundamental research problems today. Meta-learning is one of the directions taken to alleviate the need for huge datasets via learning to learn. The goal of this project is to build on the work presented in (Snell et al. 2017; Finn et al. 2017) to find approaches to image classification that are sample-efficient and adaptable to semi-supervised learning (a sketch of the prototypical-network idea follows this entry).
Supervisor:Dr. Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:Ahmed Ayad
start-end: -
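
For the meta-learning project above, a minimal sketch of the prototypical-network idea from Snell et al. 2017 that the project builds on: class prototypes are the mean support embeddings, and queries are classified by negative squared distance to the prototypes. The embedding dimension and the random embeddings standing in for an encoder are illustrative assumptions.

```python
# Minimal sketch: one few-shot classification episode with class prototypes.
import torch

def prototypical_logits(support_emb, support_labels, query_emb, n_classes):
    # support_emb: [n_support, d], query_emb: [n_query, d]
    prototypes = torch.stack([support_emb[support_labels == c].mean(dim=0)
                              for c in range(n_classes)])          # [n_classes, d]
    dists = torch.cdist(query_emb, prototypes)                     # [n_query, n_classes]
    return -dists ** 2                                             # higher = closer

# Toy 3-way, 5-shot episode with random embeddings standing in for an encoder.
torch.manual_seed(0)
support = torch.randn(15, 64)
labels = torch.arange(3).repeat_interleave(5)
query = torch.randn(9, 64)
logits = prototypical_logits(support, labels, query, n_classes=3)
print(logits.argmax(dim=1))
```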
Few Shot Segmentation in Medical Imaging (Master Thesis)

Semantic segmentation performs a pixel-wise classification, assigning a class or background label to each pixel of an image. This requires very large datasets of pixel-level annotations, which are often unavailable or very costly to create. The aim of this project is to build a state-of-the-art few-shot deep learning technique for medical images that can derive a semantic segmentation of a new, previously unseen class from only a few densely or sparsely annotated medical images (see the sketch after this entry).
Supervisor:Dr. Shadi Albarqouni, Ari Tran
Director:Prof. Dr. Nassir Navab
Student:Abhijeet Parida
start-end: -
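
For the few-shot segmentation project above, a minimal sketch in the spirit of prototype-based few-shot segmentation (an assumed approach, not the thesis method): a foreground prototype is pooled from the support features under the support mask, and a query mask is predicted by cosine similarity to it. The feature extractor, tensor shapes, and threshold are illustrative assumptions.

```python
# Minimal sketch: one-shot segmentation of a query image via a masked prototype.
import torch
import torch.nn.functional as F

def few_shot_mask(support_feat, support_mask, query_feat, threshold=0.7):
    # support_feat, query_feat: [C, H, W]; support_mask: [H, W] in {0, 1}
    fg = support_feat * support_mask.unsqueeze(0)
    prototype = fg.sum(dim=(1, 2)) / support_mask.sum().clamp(min=1)   # [C]
    sim = F.cosine_similarity(query_feat,
                              prototype.view(-1, 1, 1).expand_as(query_feat), dim=0)
    return (sim > threshold).float()                                    # [H, W]

# Toy example with random features standing in for a CNN backbone.
torch.manual_seed(0)
sf, qf = torch.randn(32, 48, 48), torch.randn(32, 48, 48)
sm = (torch.rand(48, 48) > 0.5).float()
print(few_shot_mask(sf, sm, qf).shape)
```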
Uncertainty in Deep Learning for Medical Application (Master Thesis)

Deep learning has become the default tool for approximating nonlinear functions. The results are usually measured by accuracy or AUC; however, neither of these captures the uncertainty of the result. When dealing with systems whose decisions affect human lives, it is crucial to know to what extent the outcome can be trusted. Attempts have been made to tackle this problem, both in the computer vision and in the medical community. It has been established that a model can be uncertain of its decision because of its parameters or because of the data it was fed (a sketch of one such standard estimate follows this entry). Nevertheless, none of these completely captures the uncertainty that can be caused by labels provided by multiple experts. In this work, a model is proposed to quantify this specific type of uncertainty. Its behavior will be studied under different conditions and compared to the already known types of uncertainty. Finally, it will be shown that this information can be used to improve the overall performance of the model.
Supervisor:Dr. Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
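
For context on the uncertainty project above, a minimal sketch of one standard uncertainty estimate, Monte Carlo dropout; the thesis itself targets uncertainty arising from labels provided by multiple experts, which this sketch does not model. The network architecture and the number of samples are illustrative assumptions.

```python
# Minimal sketch: predictive mean and variance via Monte Carlo dropout.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(64, 2))

def mc_dropout_predict(model, x, n_samples=20):
    model.train()                      # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)    # predictive mean and variance

x = torch.randn(4, 128)
mean, var = mc_dropout_predict(model, x)
print(mean, var)
```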
Robustness of Knowledge Transfer Methods (Project)

Neural networks have become the solution to many problems in areas such as computer vision and natural language processing in recent years. They can achieve superhuman performance in many tasks; however, they are black boxes, which makes them difficult to rely on in sensitive cases such as medical imaging. Interpreting the internal state of neural networks helps us to improve their performance or to prevent errors by finding the reasons behind them. There have been various works on the interpretability of neural networks, which can generally be grouped into three categories [1]: manual visual inspection [2, 3], saliency analysis [4, 5] and statistical analysis [6, 7]. The first two categories rely on human evaluation, while TCAV [6] and NetDissect [7] can be evaluated quantitatively. The objective of knowledge transfer is to train a student network from one or multiple teacher networks [8, 9]; generally, the student network is small and fast while the teacher network is large and accurate (a sketch of a standard distillation loss follows this entry).

Tasks and goal: In this project, we aim to investigate the robustness of knowledge transfer methods based on interpretability approaches such as TCAV [6] or NetDissect [7]. This study can be useful for comparing the internal states of teacher and student networks, or for comparing the distilled student network with the same student trained in a supervised manner with labels. The experiments in this project will be carried out with a limited amount of data to imitate real-world conditions.
Supervisor:Dr. Shadi Albarqouni, Azade Farshad
Director:Prof. Dr. Nassir Navab
Student:Yousef Yeganeh
start-end: -
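
For the knowledge-transfer project above, a minimal sketch of a standard distillation loss that matches the student's softened predictions to the teacher's and combines them with the usual cross-entropy on the labels. The temperature and weighting are illustrative assumptions; the distillation setups in [8, 9] may differ.

```python
# Minimal sketch: soft-target distillation loss plus hard-label cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# Toy example with random logits standing in for teacher and student networks.
torch.manual_seed(0)
s, t = torch.randn(8, 10), torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```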

Finished Theses



