
Chair for Computer Aided Medical Procedures & Augmented Reality

Diploma, Master and Bachelor Theses

Running Theses

Robotic Anisotropic X-ray Dark-field Tomography: calibration (Bachelor Thesis)

Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics.
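As an illustration of the forward model (notation chosen here, not taken from the project description), the spherical scattering function at each volume position can be written as a truncated spherical harmonics expansion whose coefficients are the unknowns of the AXDT reconstruction:

```latex
\[
  \eta(\mathbf{r}, \boldsymbol{\epsilon}) \;\approx\; \sum_{l=0}^{L} \sum_{m=-l}^{l} c_l^m(\mathbf{r}) \, Y_l^m(\boldsymbol{\epsilon})
\]
```

Here $\eta(\mathbf{r}, \boldsymbol{\epsilon})$ denotes the scattering strength at voxel $\mathbf{r}$ in direction $\boldsymbol{\epsilon}$ on the unit sphere, $Y_l^m$ the spherical harmonics up to order $L$, and the coefficient volumes $c_l^m(\mathbf{r})$ are recovered from the directionally dependent dark-field measurements.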
Supervisor:
Director: Tobias Lasser
Student: Carolin Bruckmaier
start-end: 2018/12/15 - 2019/04/15
Robotic Anisotropic X-ray Dark-field Tomography: robot integration and collision detection (Master Thesis)

Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics.
Supervisor:
Director: Tobias Lasser
Student: Erdal Pekel
start-end: 2018/11/15 - 2019/05/15
Intraoperative Ophthalmic Scene Reconstruction for surgical AR (DA/MA/BA)

Ophthalmic microsurgical interventions require a high level of handling precision from the surgeon, as even small errors can damage intricate structures inside the human eye. Furthermore, the top-down view through the microscope limits the surgeon's depth perception, motivating the use of augmented reality to enhance the surgical view. With volumetric intraoperative Optical Coherence Tomography (iOCT), 3D imaging can be performed during surgery, yielding an additional source of information to guide the surgeon. However, this additional data has to be properly integrated into the surgical view. One intriguing way to integrate the information is the focus and context visualization method already used by Bichlmeier et al. [1], see Figure 1.

To provide perceptually plausible in-situ rendering of the acquired volumes as a focus-and-context visualization, the retinal surface first needs to be reconstructed in 3D. This can be achieved by applying stereo block matching to a calibrated stereo image pair. To properly position the iOCT overlay, the reconstructed surface must then be aligned to the coordinate system of the OCT volume. This, however, cannot be done preoperatively due to the complex optical setup involving the patient's own eye. Therefore, the second part of this project is to devise an online alignment that can compensate for the (potentially changing) optical pathway affecting all imaging modalities. This online alignment can involve a simple surface alignment step as well as more complex algorithms that account for deformations caused by the different optical pathways of the two modalities.
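As a minimal sketch of the surface reconstruction step (assuming a rectified, calibrated stereo pair and OpenCV; file names and parameter values are placeholders, not part of the project):

```python
import cv2
import numpy as np

# Load a rectified stereo pair of the retinal scene (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching on the calibrated, rectified pair yields a disparity map.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point disparities

# Q is the 4x4 disparity-to-depth matrix from stereo calibration (cv2.stereoRectify).
Q = np.load("Q.npy")
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel 3D points of the retinal surface

# In the project, this reconstructed surface would then be aligned (online)
# to the coordinate system of the iOCT volume.
```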
Supervisor: Jakob Weiss, Federico Tombari
Director: Prof. Nassir Navab
Student: Ekaterina Kanaeva
start-end: 2018/04 -
Deep Learning for Tool Detection and Tracking in Microsurgery (Bachelor Thesis)

The aim of this project is to investigate state-of-the-art deep learning architectures and frameworks for the detection and tracking of instruments in retinal microsurgery. An implementation of a deep learning based instrument detection workflow shall be provided at the end of the project.
Supervisor: Hasan Sarhan, Dr. Mehmet Yigitsoy
Director: Prof. Nassir Navab
Student: Luca Alessandro Dombetzki
start-end: 2018/04/01 -
Deep Learning on visual data in the context of autonomous driving (Master Thesis)

We are offering a thesis in collaboration with BMW focusing on the application of deep learning to visual data in the context of autonomous driving. We are therefore looking for motivated students who ideally have already acquired some experience in computer vision and/or machine learning. If you are interested in working on a thesis in this field, please contact Jakob Mayr or Federico Tombari.
Supervisor: Jakob Mayr, Federico Tombari
Director: Prof. Nassir Navab
Student:
start-end: -
Distributed SLAM - Jointly mapping 3D Geometry (DA/MA/BA)

Exploring an unknown scene and localizing oneself within it is a common and well-studied problem in computer vision known as SLAM (Simultaneous Localization and Mapping). Core fields of application are autonomous cooperative robotics and vehicles as well as tracking and detection systems in the medical domain. Traditional methods target a single system equipped with image sensors that explores the scene and builds up a map for localization (e.g. a single robot or drone moving within an unknown environment). Newer approaches also incorporate information from other sensors such as IMUs, gyroscopes or GPS. Another way to determine the position is outside-in tracking of an object via marker tracking with external sensors, which provides the relative position of an object with respect to the tracking system.

To overcome the line-of-sight problem of outside-in tracking and the restriction of traditional SLAM methods to a single system, this project aims to develop a distributed SLAM approach. Multiple systems (referred to as sensor nodes hereafter), each equipped with an image sensor, contribute to a common map of the scene for localization while also being tracked by outside-in tracking for accuracy. Accuracy and applicability can thus be improved by combining the information of multiple sensor nodes with an external tracking system. Furthermore, the need for complicated and error-prone calibration processes for individual systems within one application scenario can be avoided.

The objective is to develop a general distributed SLAM approach for challenging scenes and applications. Features such as loop detection and closing, pose graph optimization, re-localization and mapping should be extended to a distributed setting, also enabling scalability.
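A minimal single-node sketch of the pose graph optimization that would have to be extended to multiple sensor nodes (assuming the GTSAM Python bindings; keys, measurements and noise values are made up for illustration):

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.3, 0.3, 0.1]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose, e.g. provided by the outside-in tracking system.
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Odometry constraints from the visual SLAM of one sensor node.
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))

# A loop closure constraint back to the first pose.
graph.add(gtsam.BetweenFactorPose2(3, 1, gtsam.Pose2(0.0, 4.0, -np.pi / 2), odom_noise))

# Noisy initial estimates, e.g. from accumulated visual odometry.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.2, -0.1, 0.05))
initial.insert(2, gtsam.Pose2(2.3, 0.1, -0.1))
initial.insert(3, gtsam.Pose2(4.1, 0.1, np.pi / 2 + 0.1))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```

In a distributed setting, each sensor node would contribute such factors to a shared graph, with the external tracking system supplying inter-node constraints.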
Supervisor: Patrick Ruhkamp, Benjamin Busam
Director: Prof. Dr. Nassir Navab
Student: Joe Bedard
start-end: -
Privacy-Preserving Federated Learning in Medical Imaging (Project)

Supervisor: Dr. Shadi Albarqouni
Director: Prof. Dr. Nassir Navab
Student: Wasiq Kasam Rumaney
start-end: -
Few Shot Segmentation in Medical Imaging (Master Thesis)

Semantic segmentation performs a pixel-wise classification that assigns a class or background label to each pixel of an image. This typically requires a very large dataset of pixel-level annotations, which is often unavailable or very costly to create. The aim of this project is to build a state-of-the-art few-shot deep learning technique for medical images that can derive a semantic segmentation of a new, previously unseen class from only a few densely or sparsely annotated medical images.
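One common few-shot segmentation strategy (not necessarily the one this thesis will adopt) computes a class prototype from the few annotated support images and compares it to query features; a minimal PyTorch sketch, with the feature extractor and all shapes assumed:

```python
import torch
import torch.nn.functional as F

def prototype_segmentation(support_feat, support_mask, query_feat):
    """Segment a new class from a single annotated support image.

    support_feat: (C, H, W) backbone features of the support image
    support_mask: (H, W) binary mask of the new class in the support image
    query_feat:   (C, H, W) backbone features of the query image
    """
    # Masked average pooling: one prototype vector for the new class.
    mask = support_mask.unsqueeze(0)                      # (1, H, W)
    prototype = (support_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1)

    # Cosine similarity between the prototype and every query location.
    sim = F.cosine_similarity(query_feat, prototype[:, None, None], dim=0)  # (H, W)

    # Threshold (or feed into a small decoder) to obtain the segmentation.
    return (sim > 0.5).float()

# Example with random tensors standing in for backbone features.
seg = prototype_segmentation(torch.randn(64, 32, 32),
                             (torch.rand(32, 32) > 0.5).float(),
                             torch.randn(64, 32, 32))
```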
Supervisor: Dr. Shadi Albarqouni, Ari Tran
Director: Prof. Dr. Nassir Navab
Student: Abhijeet Parida
start-end: -
Phase Recognition in Surgical Workflow (Master Thesis)

In recent years, with advancements in technology and medicine, the operating room has evolved into a complex and technologically rich environment. In this environment, methods to monitor surgical workflows have gained particular interest [1], with potential applications such as the evaluation of surgeons or the creation of context-sensitive user interfaces that provide information only when necessary. Approaches in the field of surgical workflow recognition [1] include extracting a structured model from recorded surgeries [2] and recognizing surgical phases or activities from instrument and sensor data [3-5], laparoscopic video [6-8], kinematics information [9], or a mixture thereof [10]. Very recently, methods using deep learning have also been introduced [6, 11]. This master thesis focuses on the recognition of surgical phases and the derivation of actionable information from surgical videos.
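A common baseline for video-based phase recognition (one possible architecture, not prescribed by the thesis description) extracts per-frame CNN features and models their temporal context with a recurrent network; a minimal PyTorch sketch, where the number of phases and all hyperparameters are assumptions:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PhaseRecognitionNet(nn.Module):
    """Per-frame CNN features + LSTM for surgical phase classification."""

    def __init__(self, num_phases=7, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # or ImageNet-pretrained weights
        backbone.fc = nn.Identity()                # keep the 512-d feature vector
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_phases)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) clips from the surgical video
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.view(b * t, c, h, w)).view(b, t, -1)
        temporal, _ = self.lstm(feats)             # (batch, time, hidden)
        return self.classifier(temporal)           # per-frame phase logits

# Example: one clip of 16 frames at 224x224 resolution.
logits = PhaseRecognitionNet()(torch.randn(1, 16, 3, 224, 224))
```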
Supervisor: Dr. Shadi Albarqouni
Director: Prof. Dr. Nassir Navab
Student:
start-end: -

Finished Theses



