ThesesPage

Chair for Computer Aided Medical Procedures & Augmented Reality
Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality

THIS WEBPAGE IS DEPRECATED - please visit our new website

Diploma, Master and Bachelor Theses

Running Theses

Uncertainty Estimation for Segmentation in Autonomous Driving (Master Thesis)

In the context of Autonomous Driving, it is crucial to have a measure of the uncertainty associated with the various predictions made by Deep Learning models. This helps not only to combine predictions from different pipelines, but also to understand the real confidence associated with each prediction. Networks tend to be overly confident (~99%, as derived from the softmax over the logits) even on wrong predictions or on unknown data and scenarios, which makes such confidence unreliable. Therefore, uncertainty estimation complements predictions by quantifying how certain the models really are with respect to the inputs, their own weights, and the way they were trained. Segmentation (semantic, instance, panoptic, part segmentation, ...) is one of the fundamental tasks of Autonomous Driving, and it is where we would like to integrate uncertainty estimation. This Master Thesis will be done in cooperation with BMW.
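As a purely illustrative sketch of the overconfidence issue described above (not the method to be developed in the thesis), the following PyTorch snippet contrasts the raw softmax confidence with a Monte Carlo dropout uncertainty estimate; `model` is a placeholder for any segmentation network containing dropout layers.

import torch

def mc_dropout_uncertainty(model, image, n_samples=20):
    """Average several stochastic forward passes and return the mean prediction plus per-pixel entropy."""
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)                                         # (B, C, H, W) averaged class probabilities
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)  # (B, H, W) uncertainty map
    return mean_probs, entropy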
Supervisor:Stefano Gasperini
Director:Federico Tombari
Student:
start-end:soon - 6 Months after start
Temporal 3D Object Detection in Lidar Point Clouds for Autonomous Driving (Master Thesis)

Although Lidar data is acquired over time, most Object Detection methods only work on a frame-by-frame basis and neglect useful temporal information. The goal of this master thesis is to develop and implement novel ways of using temporal information from Lidar Point Clouds to improve Object Detection or Motion Forecasting. This Thesis will be done in cooperation with BMW.
Supervisor:Alexander Lehner
Director:Federico Tombari
Student:Theo Beffart
start-end:15.04.2021 -
Deep Learning for Tool Detection and Tracking in Microsurgery (Bachelor Thesis)

The aim of this project is to investigate state-of-the-art deep learning architectures and frameworks for the detection and tracking of instruments in retinal microsurgery. An implementation of a deep learning based instrument detection workflow shall be provided at the end of the project.
Supervisor:Hasan Sarhan, Dr. Mehmet Yigitsoy
Director:Prof. Nassir Navab
Student:Luca Alessandro Dombetzki
start-end:01.04.2018 -
Uncertainty Aware Methods for Camera Pose Estimation in Images and 3-Dimensional Data (Project)

Camera pose estimation is the task of determining the 6-DoF rotation and translation parameters of a camera. It is now a key technology enabling a multitude of applications such as augmented reality, autonomous driving, human-computer interaction and robot guidance. For decades, vision scholars have worked on finding the unique solution of this problem. Yet, this trend is witnessing a fundamental change. The recent school of thought has begun to admit that, for our highly complex and ambiguous real environments, obtaining a single solution is not sufficient. This has led to a paradigm shift towards estimating a range of solutions in the form of a full probability distribution, or at least explaining the uncertainty of camera pose estimates. Thanks to advances in Artificial Intelligence, this important problem can now be tackled via machine learning algorithms that can discover rich and powerful representations for the data at hand. In collaboration, TU Munich and Stanford University plan to devise and implement generative methods that can explain uncertainty and ambiguity in pose predictions. In particular, our aim is to bridge the gap between 6DoF pose estimation from either 2D images or 3D point sets and uncertainty quantification through multimodal variational deep methods.
Supervisor:Dr. Shadi Albarqouni, Dr. Tolga Birdal
Director:Prof. Dr. Nassir Navab, Prof. Dr. Leonidas Guibas
Student:Mai Bui, Haowen Deng
start-end:01.01.2020 -
3D Pedestrian Detection and Pose Estimation (Hiwi)

Autonomous driving systems are right around the corner, and one key concern around the development and social acceptance of such systems is safeguarding. In this project, we want to look at the task of pedestrian detection from LiDAR point clouds and their 3D pose estimation from the RGB camera input. 3D object detection from sparse point cloud data and multi-pedestrian 3D pose estimation are two challenging tasks and therefore active research fields in both academia and industry. In this project, we want to integrate state-of-the-art deep learning methods, train models on synthetic renderings and improve their performance based on defined safeguarding KPIs.
Supervisor:Mahdi Saleh
Director:Federico Tombari
Student:
start-end: -
Intraoperative 2D-3D Registration for Knee Alignment Surgery (DA/MA/BA)

In this project a system is developed, which assists surgeons in verifying a surgical result by comparing interventional X-Rays to a preoperative plan. The targeted surgical procedure is knee osteotomy in which a bone (usually the tibia) is cut at a specific point and the two segments are repositioned to correct knee alignment. The 2D X-Ray images are to be registered to the 3D preoperative plan in order to compute the achieved 3D configuration between the two bone segments. This allows the surgeon to make sure that the plan is carried out accurately, or make adjustments if necessary, as correct knee alignment leads to improved patient outcome.
Supervisor:Alexander Winkler, Matthias Grimm
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Self-supervised learning via Contrastive Generative Models (Master Thesis)

Supervisor:Azade Farshad
Director:Prof. Dr. Nassir Navab
Student:Anastasia Makarevich
start-end: -
Distributed SLAM - Jointly mapping 3D Geometry (DA/MA/BA)

Exploring an unknown scene and self-positioning within it is a common and well-studied problem in Computer Vision known as SLAM (Simultaneous Localization and Mapping). Core fields of application are autonomous cooperative robotics and vehicles as well as tracking and detection systems in the medical domain. Traditional methods target a single system, equipped with image sensors, that explores the scene and builds up a map for localization (e.g. a single robot or drone moving within an unknown environment). Newer approaches also incorporate information from other sensors such as IMUs, gyroscopes or GPS. Another approach to determining position is outside-in tracking of an object via marker tracking with external sensors, which provides the relative position of an object with respect to the tracking system. To overcome the line-of-sight problem of outside-in tracking, and the singularity constraint of traditional SLAM methods, the project aims to develop a distributed SLAM approach. Multiple systems (referred to as sensor nodes hereafter), each equipped with an image sensor, contribute to a common map of the scene for localization, while also being tracked by outside-in tracking for accuracy. Thus, accuracy and applicability can be elevated with a distributed SLAM approach, combining the information of multiple sensor nodes and an external tracking system. Furthermore, the necessity of complicated and error-prone calibration processes for individual systems within one application scenario can be avoided. The objective is to develop a generative distributed SLAM approach for challenging scenes and applications. Features like loop detection and closing, pose graph optimization, re-localization and mapping should be extended to a distributed approach, also enabling scalability.
Supervisor:Patrick Ruhkamp, Benjamin Busam
Director:Prof. Dr. Nassir Navab
Student:Joe Bedard
start-end: -
Evaluating Human Skills using Deep Neural Networks (Master Thesis)

Recently, deep learning has achieved great success in various applications such as image recognition, object detection, and medical applications. On YouTube and Vimeo, how-to videos are widely used to transfer the skills of experts: a reference video is captured for a specific task and users can learn the new skill from it. But how can the newly learned skills be evaluated? Usually they have to be evaluated by experts, which is costly. To address this problem, evaluating human skills directly from video is required [1-5]. Human skill evaluation or determination is a research area where researchers develop new solutions to automatically assess human skills from video. This technology could be extended to surgical skill assessment in medical applications, where accuracy becomes much more important [6-7]. In this project, we will develop a solution to automatically assess human skills from video.
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
3D GAN for conditional medical image synthesis and cross-modality translation (Master Thesis)

GANs in 3D, especially in medical imaging applications, are challenging in many aspects, mainly due to the 'curse of dimensionality' and limited available data. The goal of this project is to develop an optimal strategy for scaling GANs to 3D that generalizes well for conditional medical image synthesis. The student will be provided with all-round support including a good research environment, sufficient computational resources and active guidance to make the thesis successful.
Supervisor:Suprosanna Shit
Director:Prof. Bjoern Menze
Student:
start-end: -
Towards Human-Like Predictor with Rejection Option (Master Thesis)

Recently, measuring statistical uncertainties in deep neural networks has become an important issue in various safety-critical applications such as autonomous driving and computer-aided diagnosis. However, training a predictor that has a rejection option without performance degradation is still an unsolved problem. In this project, we will investigate a novel method (i.e. a human-like predictor) where the neural network can reject uncertain samples. The main goal of the human-like predictor is to learn deep neural networks which know what they can and cannot do. It is important to calibrate the uncertainty of predictions while maintaining accuracy.
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Interactive Segmentation for Improving Infection Quantification in CT scans (Master Thesis)

Longitudinal changes of pathologies in CT images are an important indicator for analyzing patients with COVID-19. To accurately analyze changes of pathologies, consistent segmentation across multiple time points is required. Only a few studies have been reported for COVID infection segmentation, and there is no study on longitudinal data [1-3]. Moreover, it is very challenging to segment multiple pathologies due to inter-class similarity and intra-class variability. To address these issues, in this project we will explore a novel method to fully exploit user guidance. We consider two types of user-guided interactive segmentation. 1) The user provides a pathology mask (segmentation mask, line, circle, scribbles, etc.) on the reference scan. In this scenario, the longitudinal segmentation model will be designed to use the reference scan's mask as additional input for the segmentation, as in [4-5]. By utilizing the information on the target pathology from the user's input, the network focuses on the target class, which makes the problem easier. 2) The user indicates erroneous areas. Then, the network will utilize this information to refine the segmentation [6-8]. We will explore this interactive segmentation on the longitudinal COVID-19 segmentation problem.
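A minimal sketch of scenario 1) above, assuming a PyTorch setup: the user-provided reference mask is simply concatenated with the target scan as an additional input channel. `backbone`, `scan` and `ref_mask` are placeholders, not the project's actual code.

import torch
import torch.nn as nn

class GuidedSegNet(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any 3D segmentation network accepting 2 input channels

    def forward(self, scan, ref_mask):
        # concatenate the CT scan and the reference-scan mask along the channel axis
        x = torch.cat([scan, ref_mask], dim=1)  # (B, 2, D, H, W)
        return self.backbone(x)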
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Deep Intrinsic Image Decomposition (Master Thesis)

In this project we aim at decomposing a simple photograph into layers of intrinsic properties like reflectance, albedo and material. This is a very challenging but important topic in both Computer Vision and Computer Graphics, as it improves tasks like scene understanding, augmented reality and object recognition. We want to tackle this problem using the human-annotated OpenSurfaces dataset and our recent advances in deep learning, especially fully convolutional residual networks.
Supervisor:Christian Rupprecht, Iro Laina, Federico Tombari
Director:Prof. Nassir Navab
Student:Udo Dehm
start-end: -
Invariant Landmark Detection for highly accurate positioning (DA/MA/BA)

An essential task to enable highly autonomous driving is the self-localization and ego-motion estimation of the car. Together, they enable accurate absolute positioning and reasoning about the road ahead for e.g. path planning. If a single camera is used as sensor to measure the current vehicle location, positioning is based on visual landmarks and is related to the problem of visual odometry. Current navigation systems rely solely on GPS and vehicle odometry. Newer systems use object detections in the image, like traffic signs and road markings to triangulate the vehicle position within a map. To do so, visual landmarks are detected in the camera image and their position relative to the vehicle is computed. Given the landmark positions, the most likely position of the vehicle with respect to the landmarks in the map can be deduced. Current approaches use detected objects as landmarks (e.g. traffic signs/lights, poles, reflectors, lane markings), but often there are not enough of these objects to localize accurately.
In the scope of this project, a method to detect more generic landmarks should be developed. This method, which will focus on deep learning, should extract features that are invariant to varying conditions (e.g. illumination) and also provide good matchability.

Challenges:
• Create a dataset from different sources: already existing datasets, public webcam streams and synthetic datasets.
• Design and implement a method/network to extract robust generic landmarks/features in different environments (highway, city, country roads) and match them to previously extracted landmarks.
• Leverage deep learning in order to achieve a high invariance to different environment conditions.

Tasks:
• Literature review of methods to extract robust landmarks with focus on:
      o Robustness to changes in appearance, viewpoint.
      o Uniqueness to match them correctly to already extracted landmarks.
• Implementation and evaluation of a deep neural network that is capable of extracting invariant features that allow robust matching (see the matching sketch after the literature list below).
• Application of the feature in a state-of-the-art SLAM algorithm.

Literature:
[1] LIFT: https://arxiv.org/abs/1603.09114
[2] TILDE: https://infoscience.epfl.ch/record/206786/files/top.pdf
[3] Playing for Data: https://download.visinf.tu-darmstadt.de/data/from_games/data/eccv-2016-richter-playing_for_data.pdf
[4] ORB_SLAM: http://webdiis.unizar.es/~raulmur/orbslam/
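As a rough illustration of the matching step mentioned in the tasks (the descriptor network itself is the subject of the thesis and is only assumed here), mutual nearest-neighbour matching of L2-normalized landmark descriptors could look like this:

import numpy as np

def mutual_nn_matches(desc_query, desc_map):
    """desc_query: (N, D), desc_map: (M, D) L2-normalized descriptors; returns matched index pairs."""
    sim = desc_query @ desc_map.T                 # cosine similarity matrix
    nn_q2m = sim.argmax(axis=1)                   # best map landmark for each query descriptor
    nn_m2q = sim.argmax(axis=0)                   # best query descriptor for each map landmark
    return [(i, j) for i, j in enumerate(nn_q2m) if nn_m2q[j] == i]  # keep only mutual matches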
Supervisor:Jakob Mayr, Federico Tombari
Director:Prof. Nassir Navab
Student:Josefine Gaßner
start-end: -
Inverse Problems in PDE-driven Processes Using Deep Learning (IDP)

We are looking for an extremely motivated student to work on the topic "Inverse Problems in PDE-driven Processes Using Deep Learning". The scope of this project is the intersection of numerical methods and machine learning. The objective is to develop a theoretical framework and efficient algorithms that can be applied to a broad class of PDE-driven systems. However, we can tailor the focus and scope of the project to your preferences.
Supervisor:Suprosanna Shit
Director:Prof. Bjoern Menze
Student:
start-end: -
Evaluation of iterative solving methods for the statistical reconstruction of Light Field Microscopy data (DA/MA/BA)

Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. Once the forward light transport is determined based on the optical system response, the reconstruction process is an inverse problem. In fluorescence microscopy, besides the read-out noise, Poisson noise is present due to the low photon count. Hence a Poisson-Gaussian mixture model is an appropriate approach for likelihood-based statistical reconstruction. Various iteration schemes may result from different likelihoods coupled with regularization.
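As one concrete example of such an iteration scheme (for the pure-Poisson case; the Poisson-Gaussian mixture requires a modified likelihood), a Richardson-Lucy style multiplicative update could look as follows. `A` stands for the forward light-transport operator in matrix form and is an assumption of this sketch.

import numpy as np

def richardson_lucy(A, y, n_iter=50, eps=1e-8):
    """Maximum-likelihood estimate of the volume x from measurements y ~ Poisson(A x)."""
    x = np.ones(A.shape[1])              # non-negative initial volume
    ones = np.ones_like(y)
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)        # data term
        x *= (A.T @ ratio) / (A.T @ ones + eps)   # multiplicative update keeps x non-negative
    return x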
Supervisor:Anca Stefanoiu
Director:Tobias Lasser
Student:
start-end: -
Automatic Early Detection of Keratoconus (DA/MA/BA)

Keratoconus (KCN) is a bilateral, non-inflammatory, and degenerative disorder of the cornea with an incidence of approximately 1 per 2,000 in the general population [1,2]. It is characterized by progressive thinning and a cone-shaped bulge of the cornea (fig. 1), leading to substantial distortion of vision [2]. The early diagnosis of keratoconus is of great importance for patients seeking eye surgery (i.e. LASIK), as it can prevent the progression of the pathology after surgery [3-4]. Rabinowitz [5] shows in preliminary research that wavefront analysis together with corneal topography allows a good classification between early KCN subtypes and normals. Further, Jhanji et al. [6] concluded that swept-source OCT may provide a reliable alternative to the parameters of corneal topography (fig. 2). On the other hand, Pérez et al. [3] show that all of these instruments, including videokeratography, Orbscan, and Pentacam, together with the derived indices, can lead to early KCN detection, however with an increase in false positive detections. Therefore, developing a highly specific diagnostic tool for KCN detection with few false positives is highly desirable. In this IDP/MA project, the objective is to analyze data from approximately 200 patients treated at the ophthalmology department at Klinikum Rechts der Isar / TUM. Using retrospective corneal topographic data (fig. 2) collected during follow-ups, the aim is to build an early predictive model for KCN detection.
Supervisor:Shadi Albarqouni, Ali Nasseri
Director:Prof. Nassir Navab
Student:
start-end: -
Kyphoplasty balloon simulation (DA/MA/BA)

Kyphoplasty, a percutaneous, image-guided minimally invasive surgery, is a recently introduced treatment of painful vertebral fractures which is being performed extensively worldwide. The objective of kyphoplasty is to inject polymethylmethacrylate (PMMA) bone cement under radiological image guidance into the collapsed vertebral body to stabilize it. Before injecting the cement, an inflatable balloon is placed in the vertebral body and subsequently inflated in order to restore the vertebral height and correct the kyphotic deformity caused by the compression fracture. After the balloon is deflated and removed from the vertebral body, the created cavity is filled with PMMA bone cement. The goal of the project is the implementation and validation of a kyphoplasty balloon simulation.
Supervisor:Patrick Wucherer, Philipp Stefan
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Left Atrium Segmentation in 3D Ultrasound Using Volumetric Convolutional Neural Networks (Master Thesis)

Segmentation of the left atrium and deriving its size can help to predict and detect various cardiovascular conditions. Automation of this process in three-dimensional Ultrasound image data is desirable, since manual delineations are time-consuming, challenging and operator-dependent. Convolutional neural networks have made improvements in computer vision and in medical image processing. Fully convolutional networks have successfully been applied to segmentation tasks and were extended to work on volumetric data. This work examines the performance of a combined neural network architecture of existing models on left atrial segmentation. The loss function merges the objectives of volumetric segmentation, incorporation of a shape prior and the unsupervised adaptation to different Ultrasound imaging devices.
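For illustration, a soft Dice loss, the standard objective for V-Net style volumetric segmentation, is sketched below; the combined loss of the thesis (shape prior and unsupervised device adaptation terms) is not shown.

import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """probs, target: (B, 1, D, H, W) foreground probabilities and binary labels."""
    intersection = (probs * target).sum(dim=(2, 3, 4))
    denom = probs.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    dice = (2 * intersection + eps) / (denom + eps)
    return 1 - dice.mean()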
Supervisor:Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:Markus Degel
start-end: -
Deep Generative Model for Longitudinal Analysis (Master Thesis)

Longitudinal analysis of a disease is important for understanding its progression as well as for designing prognostic and early diagnostic tools. From longitudinal sample series, where data is collected at multiple time points, both the spatial structural abnormalities and the longitudinal variations are captured. Therefore, the temporal dynamics of a disease are more informative than static observations of the symptoms, in particular for neuro-degenerative diseases whose progression spans over years with early subtle changes. In this project, we will develop a deep generative method to model lesion progression over time.
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:Umut Küçükaslan
start-end: -
Instrument Tracking for Safety and Surgical View Optimization in Laparoscopic Surgery (DA/MA/BA)

Laparoscopic (minimally invasive, keyhole) surgery involves the use of a laparoscope (camera) and laparoscopic instruments (graspers, scissors, monopolar and bipolar devices). First the abdomen is insufflated with carbon dioxide to create a space between the abdominal wall and the organs. The laparoscope and laparoscopic instruments are then inserted through small 5 or 10 mm incisions in the abdomen. The laparoscope projects the image from within the abdomen onto a screen. The surgeon can therefore visualise the inside of the abdomen and the operating instruments to carry out the surgical procedure. At present, there is increasing interest in surgical procedures using robot-assisted devices. The advantages of using such a device include a steady, tremor-free image, the elimination of small inaccurate movements and decreased energy expenditure by the assistant. A number of studies have evaluated the advantages of robotic camera devices compared with manually controlled cameras or different types of devices. The possibility of developing a laparoscope with a tracking system that automatically identifies and follows the operating surgeon's instruments would provide significant benefit without requiring bulky robotic systems. Firstly, removing the need to always have an assistant will reduce cost. With an instrument tracking system, there is no need for additional pedals and headbands to move the camera, which can be confusing, uncomfortable, unsafe and may actually increase the length of surgery. Besides that, increased safety of the procedure will be achieved by providing a steadier image and by incorporating safety mechanisms. The current project aims at developing a laparoscopic camera system mounted on the operating bed. The proposed system will track the primary surgeon's instruments without the need for any constant input. The aim is thereby to recognise key tools with priority (sharp tools first) and track their movement in situ to move the camera accordingly. With safety features being one priority, the camera will by default be focused on the instrument with higher priority (i.e. scissors, monopolar and bipolar devices) in view. Whenever e.g. the monopolar or bipolar device is out of view, this will in future allow disabling the energy source of those instruments, which will greatly reduce one of the most common causes of injuries during laparoscopic surgery.
Supervisor:Dr. Christoph Hennersperger, Dr Kushal Chummun
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Real-time large-scale SLAM from RGB-D data (Master Thesis)

You will extend an existing RGB-D reconstruction system to support large-scale scenes. In a first step you will evaluate and implement state-of-the-art algorithms for tracking and reconstruction from RGB-D data. Secondly, you will also evaluate and implement algorithms for texturing the obtained reconstruction from camera images. The real-time critical components will be implemented on a GPU.
Supervisor:Alexander Ladikos
Director:
Student:
start-end: -
Learning to learn: Which data do we have to annotate first in medical applications? (Master Thesis)

Although semi-supervised and unsupervised learning have developed rapidly in recent years, their performance is still bounded by that of fully supervised learning. However, the cost of annotation is extremely high in medical applications, since medical specialists (radiologists or pathologists) are required to annotate the data. For those reasons, it is almost impossible to annotate all available data, and sometimes only a subset of a dataset can be selected for annotation due to a limited budget. Active learning is the research field that tries to deal with this problem [1-5]. Previous studies have mainly followed three approaches: uncertainty-based approaches, diversity-based approaches, and expected model change [3]. These studies have verified that active learning has the potential to reduce annotation cost. In this project, we aim to propose a novel active learning method which learns a simple uncertainty estimator to select the most informative data for training the current deep neural networks in medical applications.
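A minimal sketch of the first (uncertainty-based) approach mentioned above, assuming a PyTorch classifier and an unlabeled data loader that yields (sample index, image) pairs; the actual selection criterion is what the thesis will develop.

import torch

def select_for_annotation(model, unlabeled_loader, k=100):
    """Rank unlabeled samples by predictive entropy and return the indices of the k most uncertain."""
    model.eval()
    scores, indices = [], []
    with torch.no_grad():
        for idx, x in unlabeled_loader:
            p = torch.softmax(model(x), dim=1)
            entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1)  # higher entropy = more informative
            scores.append(entropy)
            indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    return indices[scores.topk(k).indices]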
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:Farrukh Mushtaq
start-end: -
Continual and incremental learning with less forgetting strategy (Master Thesis)

Recently, deep learning has achieved great success in various applications such as image recognition, object detection, and medical applications. However, in real-world deployment, the amount of training data (and sometimes the number of tasks) continues to grow, or the data cannot be given all at once. In other words, a model needs to be trained over time as the data collection in a hospital (or multiple hospitals) grows. A new type of lesion could also be defined by medical experts; the pre-trained network then needs to be further trained to diagnose these new types of lesions with the increased data. 'Class-incremental learning' is a research area that aims at training a learned model on new tasks while retaining the knowledge acquired on past tasks. It is challenging because DNNs easily forget previous tasks when learning new tasks (i.e. catastrophic forgetting). In real-world scenarios, it is difficult to store all the training data that was used to train the DNN previously, due to the privacy issues of medical data. In this project, we will develop a solution to this problem in medical applications by investigating an effective and novel learning method.
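One known strategy in this direction (not necessarily the one the thesis will propose) is a Learning-without-Forgetting style objective: the updated model is trained on new classes while a distillation term keeps its outputs on old classes close to those of the frozen previous model. A minimal sketch:

import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, targets, n_old_classes, T=2.0, lam=1.0):
    # cross-entropy on the current batch over all (old + new) classes
    ce = F.cross_entropy(new_logits, targets)
    # distillation: match the softened predictions of the frozen old model on the old classes
    p_old = F.log_softmax(new_logits[:, :n_old_classes] / T, dim=1)
    q_old = F.softmax(old_logits[:, :n_old_classes] / T, dim=1)
    distill = F.kl_div(p_old, q_old, reduction="batchmean") * (T * T)
    return ce + lam * distill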
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:Afshar Kakaei
start-end: -
Depth estimation in Light Field Microscopy (DA/MA/BA)

Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be arranged into multiple views and used to retrieve the depth map of the imaged scene.
Supervisor:Anca Stefanoiu
Director:Tobias Lasser
Student:
start-end: -
A light field renderer for Light Field Microscopy data visualization (DA/MA/BA)

Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to retrieve angular perspectives of the imaged sample.
Supervisor:Anca Stefanoiu
Director:Tobias Lasser
Student:
start-end: -
Real-Time Volumetric Fusion for iOCT (Bachelor Thesis)

Microscope-integrated Optical Coherence Tomography (iOCT) is able to provide live cross-sectional images during an ophthalmic intervention. Current OCT engines have a limited acquisition rate, allowing either cross-sectional 2D images at a high frame rate, low-resolution and low field-of-view volumes at a medium update rate, or high-resolution volumes at a low update rate. In order to provide full field-of-view visualization during a surgical intervention, a high-resolution scan can be acquired at the start of the intervention, which is then tracked with an optical retina tracker to compensate for movement. The goal of this thesis is to devise a method to dynamically update this high-resolution volume with the live data acquired during the ongoing intervention, in order to provide a responsive visualization of the surgeon's working environment. Integration of the live data into the volume requires compensation for deformation of the tissue as well as incorporation of motion data from the optical tracker, to accurately find the correct region to update in the volume.
Supervisor:Jakob Weiss
Director:Prof. Nassir Navab
Student:Michael Sommersperger
start-end: -
Diverse Anomaly Detection Projects @deepc (Master Thesis)

Supervisor:Rami Eisawy
Director:Bjoern Menze
Student:
start-end: -
Multiple sclerosis lesion segmentation from Longitudinal brain MRI (IDP)

Longitudinal medical data is defined as imaging data obtained at more than one time point, where subjects are scanned repeatedly over time. Longitudinal medical image analysis is a very important topic because it can address difficulties that remain when only spatial data is utilized. Temporal information can provide very useful cues for accurately and reliably analyzing medical images. To effectively analyze temporal changes, it is required to segment regions of interest accurately in a short time. In a series of images acquired over multiple imaging sessions, the available cues for segmentation become richer with the intermediate predictions. In this project, we will investigate a way to fully exploit this rich source of information.
Supervisor:Dr. Seong Tae Kim, Ashkan Khakzar
Director:Prof. Dr. Nassir Navab
Student:Moiz Sajid, Stefan Denner
start-end: -
MS Lesion Segmentation in multi-channel subtraction images (IDP)

Supervisor:Christoph Baur, Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:Sarthak Gupta
start-end: -
Gradient Surgery for Multitask Longitudinal CT Analysis (Master Thesis)

Longitudinal changes of pathology in CT images are an important indicator for analyzing patients with COVID-19. In the clinical setting, clinicians read longitudinal images to obtain various pieces of information such as disease progression, the need for ICU admission, and the severity of the disease, which are important for increasing the survival rate. However, reading longitudinal 3D CT scans takes a long time, which might decrease the efficiency of the clinician's performance. In this project, we will explore a method to automatically analyze longitudinal CT scans to help the radiologist's reading. In particular, we will explore a multitask learning method to fully exploit the relation between different tasks and adaptively balance the gradients from different objective functions (i.e. gradient surgery).
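A minimal sketch of the gradient-surgery (PCGrad) idea referred to above: when two task gradients conflict (negative inner product), the conflicting component of one is projected out before the gradients are combined. Flattened gradient vectors are assumed.

import torch

def pcgrad_combine(g1, g2):
    """g1, g2: flattened gradients of two task losses w.r.t. the shared parameters."""
    if torch.dot(g1, g2) < 0:                                    # gradients conflict
        g1 = g1 - torch.dot(g1, g2) / g2.norm().pow(2) * g2      # project g1 onto the normal plane of g2
    return g1 + g2                                               # combined update direction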
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Simulation of Muscle Activity for an Augmented Reality Magic Mirror (DA/MA/BA)

We have previously shown an augmented reality (AR) magic mirror. We create the illusion that a user standing in front of the system can look inside their own body. The video of this system received a lot of attention and has been viewed over 200,000 times on YouTube. We now want to build a system for human anatomy education using augmented reality visualization, and use the system to visualize muscle activity.
Supervisor:Ma Meng
Director:Nassir Navab
Student:
start-end: -
Tracking using Autoencoders and Manifolds (DA/MA/BA)

In this project we want to explore the possibilities of using autoencoders to perform object tracking in video sequences. The object's bounding box is given in the first frame and needs to be tracked throughout the sequence. We would like to use autoencoders to encode the appearance of the object and to predict its future location and appearance.
Supervisor:Christian Rupprecht, Federico Tombari
Director:Prof. Nassir Navab
Student:
start-end: -
Marker-based inside-out tracking for medical applications using a single optical camera (Master Thesis)

Nowadays, tracking and navigation for small imaging systems are performed mainly by devices based on infrared cameras or electromagnetic fields. These systems impose some disadvantages for use with freehand devices such as gamma cameras or ultrasound probes: a separate system for "outside-in" tracking is needed, which raises the main issue of a required line-of-sight between the tracking system and the tool to be tracked in the surgical environment. To solve this problem, the idea is to have a small add-on system attached to the device being tracked. The add-on system consists of an optical camera that tracks several markers attached to the patient and calculates the inverse trajectory, i.e. the movement of the device. The goal of this project is to develop the tracking software for this "inside-out" tracking technique, together with the required data set, to achieve a more accurate tracking and image fusion process. An algorithm for multi-marker tracking and calculation of the "best pose" will be implemented, and the problems of illumination, occlusion, and stability will be addressed. Finally, the accuracy will be evaluated and compared to other tracking modalities, especially optical and electromagnetic tracking.
Supervisor:Philipp Matthies, Benjamin Frisch
Director:Nassir Navab
Student:Shih Chen-Hsuan
start-end: -
Medical Augmented Reality with SLAM-based perception (IDP)

Supervisor:Federico Tombari, Ulrich Eck
Director:Prof. Nassir Navab
Student:
start-end: -
StainGAN: Stain style transfer for digital histological images (Master Thesis)

Digitized histopathological diagnosis is in increasing demand, but stain color variations due to stain preparation, differences in raw materials, manufacturing techniques of stain vendors and the use of different scanner manufacturers impose obstacles on the diagnosis process. The problem of stain variation is well defined, with many proposed methods to overcome it, each depending on a reference slide image chosen by an expert pathologist. We propose a deep learning solution to this problem based on unpaired image-to-image translation using cycle-consistent adversarial networks, eliminating the need for an expert to pick a representative reference image. Our approach showed promising results that we compare qualitatively and quantitatively against state-of-the-art methods.
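For illustration, the cycle-consistency term that makes unpaired stain translation possible is sketched below; `G` (source-to-target stain) and `F` (target-to-source stain) are placeholder generator networks, and the adversarial terms are omitted.

import torch

def cycle_consistency_loss(G, F, real_src, real_tgt, lam=10.0):
    rec_src = F(G(real_src))   # source stain -> target stain -> back to source
    rec_tgt = G(F(real_tgt))   # target stain -> source stain -> back to target
    return lam * (torch.mean(torch.abs(rec_src - real_src)) +
                  torch.mean(torch.abs(rec_tgt - real_tgt)))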
Supervisor:Shadi Albarqouni, Christoph Baur
Director:Prof. Dr. Nassir Navab
Student:M. Tarek Shaban
start-end: -
Understanding Medical Images to Generate Reliable Medical Report (Project)

The reading and interpretation of medical images are usually conducted by specialized medical experts [1]. For example, radiology images are read by radiologists, who write textual reports to describe the findings regarding each area of the body examined in the imaging study. However, writing medical-imaging reports requires experienced medical experts (e.g. experienced radiologists or pathologists) and is time-consuming [2]. To assist with the administrative duties of writing medical-imaging reports, a few research efforts have in recent years been devoted to investigating whether it is possible to automatically generate medical image reports for a given medical image [3-8]. These methods are usually based on the encoder-decoder architecture, which has been widely used for image captioning [9-10]. In this project, a novel automatic medical report generation method is investigated. It is challenging to generate accurate medical reports with large variation due to the high complexity of natural language [11]. As a result, traditional captioning methods suffer from the problem that the model duplicates completely identical sentences from the training set. To address the aforementioned limitations, this project focuses on the development of a reliable medical report generation method.
Supervisor:Dr. Shadi Albarqouni, Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:Hossain Shaikh Saadi
start-end: -
Creating Diagnostic Model for Assessing the Success of Treatment for Eye Melanoma (Project)

An eye melanoma, also called ocular melanoma, is a type of cancer occurring in the eye. Patients with an eye melanoma typically remain free of symptoms in early stages. In addition, it is not visible from the outside, which makes early diagnosis difficult. The choroidal melanoma, which is located in the choroid layer of the eye, is the most common primary malignant intra-ocular tumor in adults. At the same time, intra-ocular cancer is relatively rare: only an estimated 2,500 - 3,000 adults were diagnosed in the United States in 2015. Treatment usually consists of radiotherapy, or surgery if radiotherapy was unsuccessful. For larger tumors, radiation therapy may be associated with some loss of vision. Currently, it is unknown which factors lead to the development of such cancer and which factors determine whether a patient is responding to radiotherapy. In this master thesis project, the objective is to analyze data from approximately 200 patients treated at the ophthalmology department at Ludwig-Maximilians University hospital. Treatment consisted of a single-session, frameless outpatient procedure with the Cyberknife System by Accuray. Using pre-procedural data and information collected during follow-up, the aim is to identify factors predictive of a patient's response to treatment and the impact on a patient's visual acuity, measured by the so-called Visus.
Supervisor:Shadi Albarqouni
Director:Prof. Nassir Navab
Student:
start-end: -
3D Mesh Analysis and Completion (IDP)

During a scanning process, it is not possible to acquire all parts of the scanned surface. Data are inevitably missing due to the complexity of the scanned part or an imperfect scanning process. This creates holes in the mesh, bad triangles, and numerous other problems and issues. The goal of the project is to use available libraries to a) compute a quality measure and characteristics for a given 3D mesh, b) identify problems/issues and c) fix them.
Supervisor:Mahdi Hamad
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Meta-clustering (Master Thesis)

Supervisor:Azade Farshad
Director:Prof. Dr. Nassir Navab
Student:Samin Hamidi
start-end: -
Meta-Optimization (Project)

Supervisor:Azade Farshad
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Glass/Mirror Detection (DA/MA/BA)

Mirrors and transparent objects have been an issue for simultaneous localization and mapping (SLAM). Mirrors reflect light rays, which causes wrong reconstructions, and windows are hard for cameras to observe. This is especially dangerous for robotics, since robots may try to drive into a mirror or through a window. The main goal of this work is to solve this issue by detecting mirrors/windows and reconstructing a correct map. The potential approach is to use an object detection network, such as YOLO, to detect possible mirrors and windows, and then to design a function that correctly reconstructs the reflected region in the map. This work involves knowledge in deep learning and SLAM.
Supervisor:Shun-Cheng Wu
Director:Federico Tombari
Student:
start-end: -
Modeling brain connectivity from multi-modal imaging data (Master Thesis)

Supervisor:Dr. Igor Yakushev & Dr. Kuangyu Shi
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
MR-CT Domain Translation of Spine Data (DA/MA/BA)

The goal of this project is to synthesise MR images from CT scans of the spine, and vice versa, in an unpaired setting.
Supervisor:Anjany Sekuboyina
Director:Bjoern Menze
Student:
start-end: -
Multi-modal Deformable Registration in the Context of Neurosurgical Brain Shift (Master Thesis)

Registration of medical images is crucial for bringing data obtained by different sources or at a different time into a common reference frame. Adding real-time requirements to 3D multi-modal registration allows physicians to analyze the combination of medical data both preoperatively as well as intraoperatively, providing additional benefits for the patient and helping to achieve a desirable procedure outcome. Different applications usually induce several underlying geometrical transformations ranging from global rigid movements to local nonlinear deformations such as brain shift in neurosurgery or compression of liver tissue during respiratory motion. Using a deformable registration to correct local tissue distortions allows for a transformation of preoperative data into an intraoperatively acquired local reference frame. Preoperative X-ray Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) data are commonly available for diagnostics and procedure planning, but a multi-modal deformable registration with intraoperative Ultrasound (US) is needed for the successful guidance of minimally invasive procedures. Within the scope of this thesis, existing registration techniques have been researched and compared and new cost functions for multi-modal deformable registration of 3D US with preoperative CT and MRI data are proposed.
Supervisor:Christoph Hennersperger, Julia Rackerseder, Dr. Benjamin Frisch
Director:Prof. Dr. Nassir Navab
Student:Stefan Matl
start-end: -
Registration of Multi-View Ultrasound with Magnetic Resonance Images (Master Thesis)

Supervisor:Bernhard Fuerst, Wolfgang Wein, Ahmad Ahmadi
Director:Nassir Navab
Student:
start-end: -
Multiple Screen Detection for Eye-Tracking Based Monitor Interaction (DA/MA/BA)

A modern operating room usually offers multiple different monitors to present various information to the surgical staff. With the trend from highly invasive open surgery to minimally invasive techniques such as laparoscopic surgery, single-port surgery or even NOTES, the number of additional monitors is likely to increase. Knowing which monitor the surgeon is looking at, and on which part of the monitor they are focused, allows for a wide variety of supporting systems, such as automatic adjustments of the endoscopic camera position through a robotic system. The goal of this work is to develop a method to recognize monitors with changing content through the cameras of head-mounted eye-tracking systems and to translate the detected gaze point into the coordinate frame of the detected monitor. The work will be based on an existing framework (developed in C#) that is able to detect a single monitor, which should be extended to an arbitrary number of monitors and distinguish between them.
Supervisor:Ralf Stauder, Mathias Magg (MITI)
Director:Prof. Nassir Navab
Student:
start-end: -
Neural solver for PDEs (Hiwi)

Supervisor:Suprosanna Shit
Director:Bjoern Menze
Student:
start-end: -
New Mole Detection (Master Thesis)

Supervisor:Diana Mateus
Director:Prof. Nassir Navab
Student:
start-end: -
Robust training of neural networks under noisy labels (Master Thesis)

The performance of supervised learning methods highly depends on the quality of the labels. However, accurately labeling a large number of samples is a time-consuming task, which sometimes results in mislabeled data. When neural networks are trained with noisy labels, they may become biased towards the noise, and their performance can be poor. While label noise has been widely studied in the machine learning community, only a few studies have been reported that identify or ignore noisy labels during training. In this project, we will investigate ways to robustly train neural networks on noisy data. In particular, we will focus on exploring effective learning strategies and loss correction methods to address the problem.
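As one example of the loss-correction family mentioned above (a sketch, not the thesis method), "forward" correction uses a label-noise transition matrix T, with T[i, j] = P(noisy label j | true label i), assumed known or estimated:

import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_targets, T):
    clean_probs = torch.softmax(logits, dim=1)     # model's estimate of the clean label distribution
    noisy_probs = clean_probs @ T                  # predicted distribution over the noisy labels
    return F.nll_loss(noisy_probs.clamp_min(1e-8).log(), noisy_targets)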
Supervisor:Dr. Seong Tae Kim, Dr. Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:Cagri Yildiz
start-end: -
Real-Time Simulation of 3D OCT Images (Master Thesis)

Optical Coherence Tomography (OCT) is widely used in ophthalmic diagnosis and is also gaining popularity in interventional settings. OCT generates images in an image formation process similar to ultrasound imaging: coherent light waves are emitted into the tissue and this light signal is partially reflected at discontinuities of optical density. The reflected light waves are then used to reconstruct a depth slice of the tissue. The ability to simulate such a modality in real time has many potential applications: for example, it can greatly help to evaluate image processing algorithms where ground truth is not easily obtainable. It is also a crucial part of a fully virtual simulation environment for ophthalmic interventions, which can be used for training as well as for prototyping visualization concepts. As a first step, existing simulation algorithms shall be reviewed and evaluated in terms of computational efficiency when adapted to 3D. A new or adapted algorithm shall then be proposed to support the simulation of OCT images from a volumetric model of the eye. This should consider efficient implementation on the GPU as well as realistic simulation of the modality's artifacts, such as speckle noise, reflections and shadowing.
Supervisor:Jakob Weiss
Director:Prof. Nassir Navab
Student:
start-end: -
Self-supervised learning for out-of-distribution detection in medical applications (Master Thesis)

Although recent neural networks have achieved great successes when the training and testing data are sampled from the same distribution, in real-world applications it is impractical to control the test data distribution. Therefore, it is important for neural networks to be aware of uncertainty when new kinds of inputs (so-called out-of-distribution inputs) are given. In this project, we consider the problem of out-of-distribution detection in neural networks. In particular, we will develop a novel self-supervised learning approach for out-of-distribution detection in medical applications.
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:Abinav Ravi Venkatakrishnan
start-end: -
Seamless stitching for 4D opto-acoustic imaging (DA/MA/BA)

Optoacoustic tomography enables high-resolution biological imaging based on the excitation of ultrasound waves due to the absorption of light. A laser pulse penetrates soft tissue up to a few centimeters in depth and provides 3D visualization of biological tissues. With its rich contrast and high spatial and temporal resolution, optoacoustic tomography is especially suited for vasculature imaging. The size of a single optoacoustic volume is limited by the size and the field of view of the scanner. In order to get a good overall view of the finer biological structure, however, greater fields of view are necessary. During this master thesis we will investigate several methods for combining multiple volumes into one larger volume and evaluate which of the existing methods are adaptable to optoacoustic scans. Therefore, we need to find a way to align volumes to each other without having their positions tracked. Additionally, the voxels in the overlapping areas have to be blended in a way that avoids abrupt transitions or resolution losses, even though the resolution of each scan decreases around the volume edges and with distance to the scanner. Finally, we aim to propose a method to seamlessly stitch several optoacoustic scans into one high-resolution volume without any additional information on the position of the single volumes.
Supervisor:Christoph Hennersperger; Daniel Razansky
Director:Prof. Dr. Nassir Navab
Student:Suhanyaa Nitkunanantharajah
start-end: -
Image based tracking for medical augmented reality in orthodontic application (DA/MA/BA)

This Master Thesis suggests a low-cost Augmented Reality system, termed OrthodontAR, for orthodontic applications and examines image-based tracking techniques specific to orthodontic use. The procedure addressed is guided bracket placement for orthodontic correction using dental braces. Related research has developed FEM simulations based on cone-beam CT reconstructions of teeth and bone. Such simulations could be used in the planning of optimal bracket placement and wire tension, such that the patient's teeth move in an optimal manner while minimizing rotation. The benefits would include reduced overall chair time due to fewer corrections and a reduced likelihood of relapse due to reduced twisting. The system suggested in this thesis tackles the guided placement of brackets on the teeth, which is required to realize pre-procedure planning. Augmenting a patient video with the planned position of a newly placed bracket would suffice: the surgeon could visually align planned and actual positions in a video see-through head-mounted display (HMD). To reduce technical complexity, the system shall be fully image guided. It shall rely on information from both CT and video images to track the patient's jaw. The goal of this thesis is to develop and evaluate image-based methods to overlay the CT of the patient with the video image. A prototype system shall be evaluated in terms of robustness and accuracy to determine if it meets practical requirements.
Supervisor:Wolfgang Wein, Tobias Reichl
Director:Nassir Navab
Student:Andre Aichert
start-end: - 2011/12/15
Computer-aided Early Diagnosis of Pancreatic Cancer based on Deep Learning (Master Thesis)

Pancreatic ductal adenocarcinoma (PDAC) remains one of the deadliest cancers worldwide, and most cases are diagnosed at an advanced and incurable stage (1). For the year 2020, it was estimated that the number of cancer deaths caused by PDAC would surpass colorectal and breast cancer and be responsible for the most overall cancer deaths after lung cancer (2). This lethal nature of PDAC has led to the consensus of screening high-risk individuals (HRIs) at an early curable stage to improve survival (3-6). However, there is currently no non-invasive imaging method available for effective screening of PDAC. Strong evidence has shown that the pathological progression from normal ductal tissue to PDAC runs via paraneoplastic lesions, such as pancreatic intraepithelial neoplasia (PanIN), intraductal papillary mucinous neoplasm (IPMN) and mucinous cystic neoplasm (MCN) (7). Pancreatic carcinogenesis progresses for years from precursors to invasive cancer, indicating a long window of opportunity for early diagnosis in the curative stage (8). Deep learning technologies extend the human perception of information from digital data, and their implementation has led to record-breaking advancements in many applications. The proposed master thesis will employ deep learning methods on CT or PET imaging for the early diagnosis of the precursor lesion IPMN. The student is expected to have good knowledge in medical imaging. Advanced skill in Python programming is required.
Supervisor:Dr. Kuangyu Shi
Director: Prof. Dr. Bjoern Menze
Student:
start-end: -
Automatic Detection of Probe Count and Size in Digital Pathology (DA/MA/BA)

In recent times, an increasing trend towards automatic sample preparation processes can be observed in histopathology. In conjunction with the startup Inveox, laboratory automation enhances efficiency, increases process safety and eliminates potential errors. Tracking and processing of incoming probes is currently still done manually, and this is where the Inveox technology provides a fundamental step forward. The goal of this project is to develop a solution for the automatic detection of the size and number of tissue probes within the automatic processing system of Inveox. This includes the selection of appropriate hardware and its arrangement within the automation system. On this foundation, a method to automatically analyze the size (area) and number of samples should be developed.
Supervisor:Dr. Christoph Hennersperger
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Persistent SLAM (Master Thesis)

Supervisor:Shun-Cheng Wu, Johanna Wald
Director:Federico Tombari
Student:
start-end: -
Investigation of Interpretation Methods for Understanding Deep Neural Networks (Project)

Machine learning and deep learning have made breakthroughs in many applications. However, the basis of their predictions is still difficult to understand. Attribution aims at finding which parts of the network's input or features are most responsible for making a certain prediction. In this project, we will explore perturbation-based attribution methods.
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Improving photometric quality of SLAM (DA/MA/BA)

Existing incremental scene reconstruction approaches rely on different fusion methods to integrate sensor data from different view angles in order to reconstruct a scene. For example, KinectFusion [1] uses a running average on the TSDF [3] and RGB values of each voxel. Similar aggregation methods are also used in other works [2]. Accurate geometry can be reconstructed with this approach. However, the reconstructed texture is usually blurry and less realistic (see Figure 1).
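For reference, the running-average voxel update used by KinectFusion-style pipelines, i.e. exactly the aggregation step whose photometric quality this project aims to improve, can be sketched as follows (per-voxel, illustrative only):

import numpy as np

def integrate_voxel(tsdf, color, weight, new_tsdf, new_color, new_weight=1.0):
    """Weighted running average of the TSDF and RGB values of a single voxel."""
    total = weight + new_weight
    tsdf = (tsdf * weight + new_tsdf * new_weight) / total
    color = (color * weight + np.asarray(new_color, dtype=float) * new_weight) / total  # source of blur/ghosting
    return tsdf, color, total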
Supervisor:Shun-Cheng Wu
Director:Federico Tombari
Student:
start-end: -
Photorealistic Rendering of Training Data for Object Detection and Pose Estimation with a Physics Engine (Bachelor Thesis)

3D Object Detection is essential for many tasks such as Robotic Manipulation or Augmented Reality. Nevertheless, recording appropriate real training data is difficult and time consuming. Due to this, many approaches rely on synthetic data to train a Convolutional Neural Network. However, those approaches often suffer from overfitting to the synthetic world and do not generalize well to unseen real scenes. There are many works that try to address this problem. In this work we follow this line of research and intend to render photorealistic scenes in order to cope with this domain gap. Therefore, we will use a physics engine to generate physically plausible poses and use ray tracing to render high-quality scenes.
Supervisor:Fabian Manhardt, Johanna Wald
Director:Federico Tombari
Student:Wessam Abdelbari Ali Frrag
start-end: -
Deep Learning to Solve Sedimentation Diffusion (Master Thesis)

Supervisor:Suprosanna Shit, Ivan Ezhov
Director:Bjoern Menze
Student:
start-end: -
Planning on Dense Semantic Reconstructions (DA/MA/BA)

Supervisor:Nikolas Brasch
Director:Federico Tombari
Student:
start-end: -
3D Object Detection and Segmentation from Point Clouds (DA/MA/BA)

With the success of CNN architectures in computer vision tasks such as object detection and semantic segmentation on 2D data and images, there has been ongoing research on how to apply such deep learning models to 3D data. In fields such as robotics and autonomous driving, 3D depth sensors can be used to capture 3D data. However, these data are sparse and computationally challenging to process. In this project, we want to process 3D data, namely point clouds, segment them semantically and predict bounding boxes around the detected objects.
Supervisor:Mahdi Saleh
Director:Federico Tombari
Student:
start-end: -
Siemens AG: X-ray PoseNet - Recovering the Poses of Portable X-Ray Device with Deep Learning (Master Thesis)

For most CT setups, the system's geometric parameters are known. This is necessary to compute an accurate reconstruction of the scanned object. Unfortunately, for a mobile CT this might not be the case. To enable the reconstruction of an object from its projections when the geometric parameters are unknown, this master thesis explores the use of Convolutional Neural Networks to estimate the geometric parameters needed for tomographic reconstruction.
Supervisor:Shadi Albarqouni, Slobodan Ilic
Director:Prof. Nassir Navab
Student:Mai Bui
start-end: -
Memory-enhanced Category-Level Pose Estimation (Master Thesis)

Category-level pose estimation jointly estimates the 6D pose (rotation and translation) and the object size for unseen objects with known category labels. Currently, the SOTA methods for this 9D task are FS-Net [1] and DualPoseNet [2]. One straightforward idea to improve performance is to introduce priors into the network: ShapePrior [3] and CPS [4] leverage a point cloud to represent the mean shape of each category, while FS-Net adopts the average size of each category. We, instead, propose to use a memory module to store typical shapes of each category, similar to point cloud segmentation methods [5]. To establish the memory module: (1) first train the network to extract features and use them to reconstruct the observed points, as in FS-Net; (2) assuming the features follow a Gaussian mixture distribution, build the module with k-means, or possibly another unsupervised learning method. To train the network: our network structure is similar to FS-Net and ShapePrior, so the training procedure can be similar as well. References: [1] Chen, W., Jia, X., Chang, H. J., Duan, J., Shen, L., & Leonardis, A. (2021). FS-Net: Fast Shape-based Network for Category-Level 6D Object Pose Estimation with Decoupled Rotation Mechanism. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1581-1590). [2] Lin, J., Wei, Z., Li, Z., Xu, S., Jia, K., & Li, Y. (2021). DualPoseNet: Category-level 6D Object Pose and Size Estimation using Dual Pose Network with Refined Learning of Pose Consistency. arXiv preprint arXiv:2103.06526. [3] Tian, M., Ang, M. H., & Lee, G. H. (2020). Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation. In European Conference on Computer Vision (pp. 530-546). Springer, Cham. [4] Manhardt, F., Wang, G., Busam, B., Nickel, M., Meier, S., Minciullo, L., ... & Navab, N. (2020). CPS++: Improving Class-level 6D Pose and Shape Estimation From Monocular Images With Self-Supervised Learning. arXiv preprint arXiv:2003.05848. [5] He, T., Gong, D., Tian, Z., & Shen, C. (2020). Learning and Memorizing Representative Prototypes for 3D Point Cloud Semantic and Instance Segmentation. arXiv preprint arXiv:2001.01349.
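A minimal sketch of how such a memory module could be built with k-means over pre-extracted latent features is given below; the feature extractor (an FS-Net-style encoder), the number of prototypes and the soft read-out are illustrative assumptions, not the final design.

    # Minimal sketch: build a per-category memory of "typical shape" prototypes with k-means
    # and read it out with a similarity-weighted combination. Feature extraction is assumed given.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_memory(features_per_category, n_prototypes=8):
        """features_per_category: dict {category_name: (N, D) array of latent codes}."""
        memory = {}
        for category, feats in features_per_category.items():
            km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0).fit(feats)
            memory[category] = km.cluster_centers_           # (n_prototypes, D) prototypes
        return memory

    def read_memory(memory, category, query_feat):
        """Soft read-out: similarity-weighted combination of the stored prototypes."""
        protos = memory[category]                            # (K, D)
        sims = protos @ query_feat / (np.linalg.norm(protos, axis=1)
                                      * np.linalg.norm(query_feat) + 1e-8)
        weights = np.exp(sims) / np.exp(sims).sum()
        return weights @ protos                              # (D,) prior feature for the decoder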
Supervisor:Yan Di, Yanyan Li
Director:Federico Tombari
Student:
start-end: -
Predicate-based PET-MR visualization (Master Thesis)

The combined visualization of multi-modal data such as PET-MR is a challenging task. The recently introduced predicate paradigm for visualization offers a promising approach to reduce the dimensionality and complexity of the classification (transfer function) domain and provides the clinician with an intuitive user interface. The goal of this project is to extend this technique to multi-modal visualization and apply it, for instance, to PET-MR scans of the prostate.
Supervisor:Christian Schulte zu Berge, Benjamin Frisch
Director:Nassir Navab
Student:Faisal Ibne Mozhher
start-end: -
Extrinsics calibration of multiple 3D sensors (DA/MA/BA)

Multiple 3D sensor setups are now increasingly used for a variety of computer vision applications, including rapid prototyping, reverse engineering, body scanning and automatic measurements. This project aims at developing a new approach for the calibration of the extrinsic parameters of multiple 3D sensors. The goal is to devise a technique which is simple and fast, but also accurate in the estimation of the 3D pose of each sensor. The project will include a study of the state of the art in the field, software development of the calibration technique (in C++) and experimental validation.
Supervisor:Christian Rupprecht, Federico Tombari
Director:Prof. Nassir Navab
Student:
start-end: -
Radiation Exposure Estimation of full surgical procedures using CamC (Master Thesis)

Supervisor:Séverine Habert, Ulrich Eck
Director:Nassir Navab
Student:
start-end: -
A New Computational Algorithm for Treatment Planning of Targeted Radionuclide Therapy (DA/MA/BA)

Supervisor:Dr. Kuangyu Shi, Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Comparing light propagation models in Light Field Microscopy (DA/MA/BA)

Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample.
Supervisor:Anca Stefanoiu
Director:Tobias Lasser
Student:
start-end: -
Deep Learning for Semantic Segmentation of Human Bodies (Master Thesis)

In this project we want to investigate a deep learning approach to tackle the challenging problem of semantic segmentation of human bodies. This task will be one of the core modules of a 3D reconstruction framework we are currently developing. Given a set of depth maps of the target object from multiple views, the goal is to develop a method that identifies the body parts in each of them. Specifically, we intend to explore the potential of Convolutional Neural Networks (CNNs) in this scenario. Previous work has used CNNs to infer semantic segmentation from RGB data; in the absence of color information, semantic segmentation becomes more challenging. With this in mind, we build on the approach in [1], where a dense correspondence is found between two depth images of humans. In our task, given a known segmentation map for a reference depth image, such correspondences make it possible to infer the segmentation of new target depth maps. [1] Lingyu Wei, Qixing Huang, Duygu Ceylan, Etienne Vouga, Hao Li. Dense Human Body Correspondences Using Convolutional Networks. CVPR. 2016.
Supervisor:Helisa Dhamo, Federico Tombari
Director:Prof. Nassir Navab
Student:Manuel Nickel
start-end: -
Reconstructing the MI Building in a Day (Project)

Supervisor:Yanyan Li
Director:Federico Tombari
Student:Chan, Tin Chon
start-end: -
Computational image refocus in Multi-focused Light Field Microscopy (DA/MA/BA)

Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to computationally refocus at a different depth post-acquisition.
Supervisor:Anca Stefanoiu
Director:Tobias Lasser
Student:
start-end: -
Robot-Assisted Vitreo-retinal Surgery (Master Thesis)

Pars plana vitrectomy is a minimally invasive intraocular surgery that has revolutionized retinal surgery since it was first proposed in the 1970s. This sutureless technique involves the use of smaller surgical instruments (25 gauge, 0.51 mm in diameter) and has been used to treat conditions not treatable before. Moreover, it has proven to have a lower complication rate and a shorter healing period than standard vitreo-retinal surgery. The barrier towards improving the outcomes of this technique lies, however, in the surgeon's abilities and dexterity. In this line, an assisting robotic master-slave device could enhance the surgeon's skills when manipulating the surgical tools. The slave device manipulates the tools, whereas the master device is manipulated by the surgeon to control the slave device. A master input device for controlling the existing slave robot is to be designed. Several aspects concerning the operating room environment, surgical requisites, the surgeon's manipulation intuition and compatibility with the slave device should be taken into account when finding the optimal solution.
Supervisor:Dr. Abouzar Eslami, Ali Nasseri
Director:Nassir Navab, Prof. Alois Knoll
Student:Monica Azqueta
start-end: -
Optimal planning and data acquisition for robotic ultrasound-guided spinal needle injection (DA/MA/BA)

Facet-joint syndrome is one of the main causes of back pain, which around eighty percent of the population suffers from at some point during their lifetime. Current clinical practice requires injections of analgesics in the lumbar region of the spine, as this is the main area of pain. This procedure is performed under CT guidance, and for every injection around ten control images are required if a perfect initial placement is not achieved. As a consequence, not only patients but also medical staff are exposed to dangerous amounts of radiation over time. In this work we propose and evaluate an ultrasound-guided visual servoing technique using a robotic arm equipped with an ultrasound probe and needle guide. Based on first results demonstrated with a proof of concept, an initial panoramic 3D-ultrasound scan is registered to existing CT data. As this step was limited to rigid alignments between CT and 3D-US data, this work specifically focuses on the optimal acquisition of panoramic robotic ultrasound scans allowing for accurate surgical pre-planning, as well as the intra-operative deformable registration of CT and ultrasound datasets. The project will be integrated within an intuitive visualization tool for both the acquired ultrasound datasets and the planned trajectories, and will also focus on the servoing techniques required to directly approach the planned target with facet joint injection needles.
Supervisor:Dr. Christoph Hennersperger, Salvatore Virga, Javier Esteban
Director:Prof. Dr. Nassir Navab
Student:Sebastian Raquena Witzig
start-end: -
Towards More Robust Machine Learning Models (Master Thesis)

Supervisor:Azade Farshad, Dr. Alexander Urich
Director:Prof. Dr. Nassir Navab
Student:Mariem Kthiri
start-end: -
Meta-learning for Image Generation/Manipulation using Scene Graphs (Master Thesis)

Image generation using scene graphs in natural scenes is a challenging task in high image resolutions. In this project, we aim to improve the quality of image generation and manipulation from scene graphs using meta-learning approaches used for the few-shot learning problem. We plan to incorporate domain adaptation and information from synthetic images to learn a well-generalized model that adapts fast to new scenes.
Supervisor:Azade Farshad, Helisa Dhamo
Director:Prof. Dr. Nassir Navab
Student:Sabrina Musatian
start-end: -
Surgical Workflow Analysis under Limited Annotation (Master Thesis)

Surgical workflow analysis is important for understanding the onset and persistence of surgical phases and individual tool usage across surgery and in each phase. It is beneficial for clinical quality control and helps hospital administrators understand surgery planning. To automate this process, automatic surgical phase recognition from the video acquired during surgery is very important. Following the success of deep learning, various architectures have been reported for video understanding [1-3]. While classifying short trimmed videos has been very successful, temporally locating or detecting action segments in long untrimmed videos is still very challenging. Surgical scenes usually exhibit high intra-phase variance but limited inter-phase variance. Moreover, annotating surgical video for training deep neural networks is a very expensive task, because the videos are usually long and frame-level annotations are required to train the models with traditional approaches. In this project, to address this issue, we would like to explore a new surgical phase recognition model which can be trained with limited annotations.
Supervisor:Dr. Seong Tae Kim, Tobias Czempiel
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Semi-supervised Active Learning (Master Thesis)

Deep neural network training generally requires a large dataset of labeled data points. In practice, large sets of unlabeled data are usually available, but acquiring labels for these datasets is time-consuming and expensive. Active Learning (AL) is a training protocol that aims at minimizing labeling effort in machine learning applications. Active learning algorithms try to sequentially query labels for the most informative data points of an unlabeled data set. Semi-supervised learning (SSL) is a method that uses unlabeled data for model training in order to improve performance. In this thesis, we will explore the combination of these two promising approaches for the efficient training of deep neural networks. In particular, we explore how query selection criteria of AL algorithms have to be designed when used in conjunction with SSL algorithms.
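As a minimal illustration of one standard AL query criterion, the sketch below selects the samples with the highest predictive entropy; how such a criterion should be adapted when combined with an SSL objective is exactly the question this thesis investigates. The data-loader format and the labeling budget are assumptions.

    # Minimal sketch of entropy-based query selection for active learning.
    import torch

    def entropy_query(model, unlabeled_loader, budget, device="cpu"):
        model.eval()
        scores, indices = [], []
        with torch.no_grad():
            for idx, x in unlabeled_loader:                # loader yields (index, input) pairs
                probs = model(x.to(device)).softmax(dim=1)
                ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
                scores.append(ent.cpu())
                indices.append(idx)
        scores = torch.cat(scores)
        indices = torch.cat(indices)
        top = scores.topk(budget).indices                  # most uncertain samples
        return indices[top].tolist()                       # dataset indices to annotate next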
Supervisor:Dr. Seong Tae Kim
Director:Prof. Dr. Nassir Navab
Student:Felix Buchert
start-end: -
Computer-aided survival, grading prediction and segmentation of soft tissue sarcomas in MRI. (DA/MA/BA)

This project will investigate three main tasks in soft-tissue sarcomas. The first is the prediction of disease progression or survival given scans of the patient at different time points. The second consists of grading the aggressiveness of the sarcomas in a three-class classification task. The third is medical image segmentation to automate treatment planning.
Supervisor:Fernando Navarro
Director:Prof. Bjoern Menze
Student:
start-end: -
Scene graph generation (Master Thesis)

We are looking for a motivated student to work on a research topic that involves deep learning and scene understanding. The project consists of generating scene graphs, a compact data representation that describes an image or 3D model of a scene. Each node of this graph represents an object, while the edges represent relationships/interactions between these objects, e.g. "boy - holding - racket" or "cat - next to - tree". Applications of scene graphs include image generation and content-based queries for image search; they can also serve as additional context to improve object detection accuracy. Preferably a master thesis; also possible as guided research.
Supervisor:Helisa Dhamo, Azade Farshad
Director:Federico Tombari
Student:Sarthak Garg
start-end: -
Segmentation of Fractured Bones (Master Thesis)

With the advent of computer-aided surgery and planning, automatic post-processing of the acquired imaging data becomes more and more important, but also challenging. Image segmentation is in many cases the first, very important but difficult step. A fully automatic method for segmenting bones would be highly desirable. However, a few factors hamper the development of such a fully automated segmentation method. The quality of the datasets differs to a large extent in terms of contrast and resolution. The voxel intensity of the bones varies according to the scan parameters and the condition of the bone. The intensity of cortical and trabecular bone may be very similar in some cases. The density of osteoporotic bones is low and thus the contrast between osteoporotic bone and soft tissue is very small, in particular in fractured bones, where the trabecular bone directly adjoins the soft tissue. In addition, a complete manual segmentation of fractured bones is very time-consuming, inconsistent and hard to perform. We aim to propose a fast and efficient segmentation tool that effectively segments the fractures and at the same time is robust enough for using the output model in FEM analysis. Though the objective is to develop a segmentation tool that is as fully automated as possible, the idea is also to have the following features incorporated into the segmentation tool: semi-automatic segmentation, manual correction and output generation. The evaluation is done on CT datasets of various types of fractures in the Department of Diagnostic and Interventional Radiology of Klinikum rechts der Isar in Munich.
Supervisor:Dr. Abouzar Eslami, Dr. Jan Bauer, Dr. Peter Noël
Director:Nassir Navab
Student:Deepak Murali
start-end: -
Self-supervising monocular 6D object pose estimation (DA/MA/BA)

We offer a Master thesis project in collaboration with researchers from Google Zurich. We are looking for a motivated student interested in 3D computer vision and deep learning. The project involves, in particular, 6D object pose estimation [1,2] and self-supervised learning [3,4]. 6D object pose estimation describes the task of localizing an object of interest in an RGB image and subsequently estimating its 3D properties (i.e. 3D rotation and location). Example datasets and an online benchmark suite are hosted by [5]. While the field has recently made a lot of progress in accuracy and efficiency [5], many approaches rely on real annotated data. Nonetheless, obtaining annotated data for the task of pose estimation is often very time-consuming and error-prone. Moreover, when lacking appropriately labeled data, the performance of these methods drops significantly [6]. Therefore, following recent trends in self-supervised learning, we want to investigate if we can train a deep model to learn purely from data without requiring any annotations, similar to [4] and [5]. Prerequisites: The candidate should have interest and knowledge in deep learning, be comfortable with Python and preferably have some experience with a deep learning framework such as PyTorch or TensorFlow. Also, the candidate should have relevant prior experience with 3D computer vision, in terms of relevant university courses and/or projects.
Supervisor:Fabian Manhardt
Director:Federico Tombari
Student:
start-end: -
Self-supervised learning in arbitrary image sequences (DA/MA/BA)

Self-supervised depth estimation shows promising results in outdoor environments. However, few works target indoor or more arbitrary scenarios. After a series of experiments, we found that one reason may be that current architectures are not able to train the network on sequences with highly varying ego-motion. Existing methods usually rely on the KITTI and Cityscapes training datasets, which mostly consist of forward motion. When testing existing methods on indoor datasets (such as TUM RGB-D or NYU), training mostly fails, with the network simply outputting zero-depth images.

The goal of this project is to investigate this issue and try to find a solution for training self-supervised indoor depth estimation (a minimal sketch of the underlying photometric loss is given after the list below).
Possible directions:
1. Improve the pose network by using a pre-trained pose network, such as SfMLearner or FlowNet 2.0, and fine-tuning.
2. Find a way to train the pose network correctly end-to-end by improving the PoseNet architecture, designing a good loss function, or structuring a good training scheme.
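The sketch below illustrates the photometric view-synthesis loss that underlies such self-supervised training, assuming the camera intrinsics and the relative pose are given; a complete method would also predict the pose and add masking and smoothness terms.

    # Minimal CPU sketch of the photometric view-synthesis loss used in self-supervised depth
    # estimation: back-project the target pixels with the predicted depth, reproject them into
    # the source view, warp the source image and compare it to the target image (L1 error).
    import torch
    import torch.nn.functional as F

    def photometric_loss(target, source, depth, K, T_src_tgt):
        """target, source: (B,3,H,W); depth: (B,1,H,W); K: (B,3,3); T_src_tgt: (B,4,4)."""
        B, _, H, W = target.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()       # (3,H,W) homogeneous pixels
        pix = pix.view(1, 3, -1).expand(B, -1, -1)                            # (B,3,HW)
        cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)                # back-project to 3D
        cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)              # (B,4,HW)
        src = K @ (T_src_tgt @ cam_h)[:, :3]                                  # project into source view
        u = src[:, 0] / src[:, 2].clamp(min=1e-6)
        v = src[:, 1] / src[:, 2].clamp(min=1e-6)
        grid = torch.stack([2 * u / (W - 1) - 1, 2 * v / (H - 1) - 1], dim=-1)  # normalized coords
        warped = F.grid_sample(source, grid.view(B, H, W, 2), align_corners=True)
        return (warped - target).abs().mean()                                 # L1 photometric error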

Supervisor:Shun-Cheng Wu
Director:Federico Tombari
Student:
start-end: -
Improving high dimensional prediction tasks by leveraging sparse reliable data (Master Thesis)

Supervisor:Federico Tombari, Matthias Niessner
Director:
Student:
start-end: -
Development of spatio-temporal segmentation model for tumor volume calculation in micro-CT (Master Thesis)

The goal is to develop a spatio-temporal segmentation model in which the network is exposed to previous temporal information and learns this complex mapping to segment a given mouse micro-CT image, allowing accurate tumor volume calculations. A dataset with micro-CT scans of over 69 mice with repeat imaging is available with ground truth annotations. Mice were either treated with radiotherapy or left untreated. The small animal data act as a surrogate for clinical datasets treated with MR-linac technology, which requires automatic spatio-temporal segmentation.
Supervisor:Dr. Shadi Albarqouni, Dr. Seong Tae Kim, Dr. Guillaume Landry
Director:Prof. Dr. Nassir Navab
Student:Tetiana Klymenko
start-end: -
Automatic segmentation of the Spinal Cord and Multiple-Sclerosis lesions in MRI scans (DA/MA/BA)

Supervisor:Malek El Husseini
Director:Prof. Bjoern Menze
Student:
start-end: -
Siemens AG: Detection of Complex Stents in Live Fluoroscopic Images for Endovascular Aneurysm Repair (Master Thesis)

The abdominal aortic aneurysm (AAA) is a dilation of the aorta that may result in rupture, and is one of the most common aortic diseases. An AAA may be repaired by open surgery or by endovascular aneurysm repair (EVAR) to prevent rupture, and in recent years EVAR has become predominant. During the EVAR procedure, a stent is placed at the position of the aneurysm to exclude it from direct blood flow. The accuracy of stent placement is critical to prevent occlusion of the branching arteries, e.g., renal arteries. This master thesis implements methods to detect complex deployed stents in live 2D fluoroscopic images during EVAR. The proposed learning-based method trains fully convolutional neural network (FCN) models [1, 2, 3] to detect the stent. The training set consists of labelled 2D fluoroscopic image patches that contain the stent. The detection result can be further improved by integrating prior knowledge, e.g., the overlay of the registered pre-operative CT segmentation. Possible evaluation metrics include DICE coefficient (to measure repeatability), true positive rate (sensitivity), positive predictive value (precision) and Hausdorff distance. Previous research [4, 5, 6] has improved the accuracy of stent detection for EVAR. However, the methods are limited to infra-renal, abdominal EVAR cases. In contrast, stents with fenestrations/scallops are necessary for supra-renal cases and for complex AAA anatomy. Methods of this work aim to detect the complexity of the stent. The expected result qualitatively and quantitatively describes which portion/branch of the stent corresponds to which artery. For example, in a supra-renal case, the result describes whether a portion/branch of the stent covers the aorta, the left/right external iliac artery, the left/right renal artery, etc.
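For orientation, the pixel-wise metrics mentioned above (Dice coefficient, sensitivity, precision) can be computed as in the sketch below; the Hausdorff distance needs contour points and is omitted, and boolean mask inputs are an assumption.

    # Minimal sketch of pixel-wise evaluation metrics for a binary stent mask.
    import numpy as np

    def stent_detection_metrics(pred, gt, eps=1e-8):
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        dice = 2 * tp / (2 * tp + fp + fn + eps)
        sensitivity = tp / (tp + fn + eps)        # true positive rate
        precision = tp / (tp + fp + eps)          # positive predictive value
        return {"dice": dice, "sensitivity": sensitivity, "precision": precision}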
Supervisor:Shadi Albarqouni, Stefanie Demirci
Director:Prof. Dr. Nassir Navab
Student:Wei Ni
start-end: -
Interventional Stent Deformation Tracking (Master Thesis)

Despite the incredible improvements in treating abdominal aortic aneurysms (AAA), the minimally invasive deployment of a stent graft within a diseased vessel may cause comorbidities induced by the stent itself or by a malfunction. So far, only ex-vivo or in-vitro experiments analyzing the interaction between stent graft and weakened vessel wall have been performed. This project aims at finding a solution for the extraction of in-vivo stent graft deformation that is to be further used for quantitative analysis of vessel wall deformation.
Supervisor:Stefanie Demirci
Director:Nassir Navab
Student:
start-end: -
Super Resolution Depth Maps (DA/MA/BA)

This project aims at single-image super-resolution applied to depth maps. Many available depth sensors have a relatively low resolution compared to the accompanying color image. We would like to recover a high-resolution depth map from the combination of a low-resolution depth map and a high-resolution RGB image. This problem is inherently ill-posed, since a multiplicity of solutions exists for any given input. Nonetheless, it is a highly studied topic in computer vision. Starting with the work of Dong et al., "Image Super-Resolution Using Deep Convolutional Networks" (TPAMI 2015), we would like to explore the application of deep neural networks to this field of study.
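A minimal sketch in the spirit of SRCNN, adapted so that the bicubically upsampled low-resolution depth is concatenated with the high-resolution RGB image as guidance, is given below; the layer sizes follow the original 9-1-5 design, while the guidance input and the residual formulation are our assumptions.

    # Minimal SRCNN-style sketch for RGB-guided depth map super-resolution.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GuidedDepthSR(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 64, kernel_size=9, padding=4),   # 3 RGB channels + 1 depth channel
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 32, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 1, kernel_size=5, padding=2),   # residual refinement of the depth
            )

        def forward(self, depth_lr, rgb_hr):
            depth_up = F.interpolate(depth_lr, size=rgb_hr.shape[-2:],
                                     mode="bicubic", align_corners=False)
            return depth_up + self.net(torch.cat([rgb_hr, depth_up], dim=1))

    # usage: hr = GuidedDepthSR()(torch.rand(1, 1, 60, 80), torch.rand(1, 3, 480, 640))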
Supervisor:Christian Rupprecht, Federico Tombari
Director:Prof. Nassir Navab
Student:
start-end: -
Phase Recognition in Surgical Workflow (Master Thesis)

In recent years, with advancements in technology and medicine, the operating room has evolved into a complex and technologically rich environment. In this environment, methods to monitor surgical workflows have gained particular interest [1] with potential applications such as the evaluation of surgeons, or the creation of context-sensitive user interfaces to provide available information only when necessary. Different approaches in the field of surgical workflow recognition [1] include approaches to extract a structured model from recorded surgeries [2], to recognize the surgical phases or activities through instrument and sensor data [3-5], laparoscopic video [6-8], kinematics information [9], or a mixture thereof [10]. Very recently, also methods using deep learning have been introduced [6, 11]. This master thesis focuses on the recognition of surgical phases and the derivation of actionable information from surgical videos.
Supervisor:Dr. Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:Ghazal Ghazei
start-end: -
Surgical Workflow Software Infrastructure Based on Business Workflow Modeling Standards (IDP)

Surgical procedures can be described and structured by their workflow, phases, and hierarchical order of tasks and activities. This enables further analysis and comparison both of ongoing surgeries as well as for recorded datasets. Several methods for modeling of business processes have already been established, though due to the different methodical approach in this case, the available methods are not necessarily directly applicable to surgical process modeling. The aim of this work is to implement a common class structure suitable for surgical workflows and develop import and export functions to and from several known languages describing business process models (e.g. YAWL and BPMN). During the course of the implementation these languages should be evaluated for their fitness to describe surgical workflows with their specific requirements (e.g. variability in the process and probabilistic phase transitions).
Supervisor:Ralf Stauder, Daniel Ostler (MITI)
Director:Prof. Nassir Navab
Student:
start-end: -
Mobile Telephony Management in the Surgery Room (DA/MA/BA)

DECT phones usually increase the reachability of surgeons and assistants. However, they introduce noise and disturbance in the operating room. To solve this problem, the "situational awareness" of the system is to be used. A system that analyzes the current situation in the surgery and is aware of what is happening can reduce the disturbance while handling phone calls more effectively. The MITI research group is looking for a student to research and implement a system that takes over the responsibility of answering the phone while the surgeon is performing a surgical operation. The system should be intelligent enough to categorize the received phone calls according to their importance, prioritize them according to the analysis of the current situation in the operating room, and forward the call to the surgeon only if it is really needed. It is also important that the system stores all the information needed to identify the call (e.g. phone number, importance, and possibly the subject of the call). The project includes research opportunities in selecting the most appropriate technology and intelligent algorithms to reduce disturbance while not missing important information from the caller. The student has the opportunity to include their own creative ideas in how to design and implement the solution.
Supervisor:Dr. Armin Schneider (MITI), Ralf Stauder
Director:Prof. Hubertus Feußner (MITI), Prof. Nassir Navab
Student:
start-end: -
Trajectory Validation using Deep Learning Methods (Master Thesis)

Supervisor:Nikolas Brasch
Director:Federico Tombari
Student:
start-end: -
Transformer-Based Pipeline for Pre-Processing MR Spectroscopic Imaging Data (Master Thesis)

MRS data is composed of 1D spectra that can quantitatively characterize the metabolic composition of in-vivo tissue. This is especially useful for characterizing brain tumors. The primary drawback of this data type is the extensive and costly pre-processing and analysis necessary to prepare and annotate the data. Accelerating this work using deep learning is a budding, active research field. Transformers were initially developed for NLP tasks. However, recent research has shown them to be highly effective for image classification in computer vision as well. Due to the spatial nature of MRS data, computer vision CNN models and MLPs have been effective in this pre-processing task. Therefore, we would like to explore the use of transformers to assess their potential for automating the MRSI pre-processing pipeline. Steps to be evaluated include phase and frequency correction, baseline estimation, and eddy current correction.
Supervisor:John LaMaster
Director:Prof. Bjoern Menze; PD Tobias Lasser
Student:
start-end: -
Transformer-Based Regression Model for Metabolite Quantification in MR Spectroscopic Imaging (Master Thesis)

MRS data is composed of 1D spectra that can quantitatively characterize the metabolic composition of in-vivo tissue. This is especially useful for characterizing brain tumors. The primary drawback of this data type is the extensive and costly pre-processing and analysis necessary to prepare and annotate the data. Accelerating this work using deep learning is a budding, active research field. Transformers were initially developed for NLP tasks. However, recent research has shown them to be highly effective for image classification in computer vision as well. Due to the spatial nature of MRS data, computer vision CNN models have been effective for this quantification task. Therefore, we would like to explore the use of transformers to assess their potential for this computer vision-based regression task.
Supervisor:John LaMaster
Director:Prof. Bjoern Menze; PD Tobias Lasser
Student:
start-end: -
Automatic acoustic coupling for robotic ultrasound imaging (Master Thesis)

Medical ultrasound (US) is already used today as the primary modality of choice for many clinical indications. Robotic ultrasound systems could potentially provide an automation of current ultrasound acquisitions, which would be especially helpful for interventional and screening applications. While the feasibility of automatized ultrasound imaging systems was shown by various research groups up to now, the application of ultrasound gel still needs to be performed manually today. This project focuses on ultrasound coupling and aims at overcoming this issue based on developments in other research areas to allow for automatic robotic ultrasound acquisitions.
Supervisor:Salvatore Virga and Christoph Hennersperger
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Uncertainty in Deep Learning for Medical Application (Master Thesis)

Deep learning has become the default tool to approximate nonlinear functions. Results are usually measured by accuracy or AUC; however, neither of these captures the uncertainty of the result. When dealing with systems whose decisions affect human lives, it is crucial to know to what extent the outcome can be trusted. Attempts have been made to tackle this problem, both in the computer vision and the medical community. It has been established that a model can be uncertain of its decision because of its parameters or because of the data it was fed. Nevertheless, none of these completely captures the uncertainty that can be caused by labels provided by multiple experts. In this work, a model is proposed to quantify this specific type of uncertainty. Its behavior will be studied under different conditions and it will be compared to the already known types of uncertainty. Finally, it will be shown that this information can be used to improve the overall performance of the model.
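As one established baseline against which the proposed model could be compared, the sketch below shows Monte-Carlo dropout: several stochastic forward passes with dropout kept active, with the predictive entropy used as an uncertainty score. The model is assumed to contain dropout layers; this is not the thesis method itself.

    # Minimal Monte-Carlo dropout sketch: mean prediction plus predictive entropy.
    import torch

    def mc_dropout_predict(model, x, n_samples=20):
        model.train()                                  # keep dropout layers stochastic at test time
        with torch.no_grad():
            probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_samples)])
        mean = probs.mean(dim=0)                       # (B, num_classes) averaged prediction
        entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=1)   # predictive uncertainty
        return mean, entropy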
Supervisor:Dr. Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
Explainability of Artificial Intelligence (XAI): Taxonomy (Project)

Recently, Artificial Intelligence, Machine Learning and Deep Learning have shown positive results in various domains: recommender systems, autonomous driving, speech recognition, etc. The demand for AI is increasing exponentially, leading to many startups and large investments in AI. This brings in a lot of considerations, like certification and the fear of 'singularity'. The main reason is that deep learning models are black boxes. Most of the time, to get something to work, we do a lot of tweaking and trial and error. Once we get it working, we are able to interpret the reason for the functionality, but it is very difficult to know what the effect of a change would be before trying it. The new European General Data Protection Regulation (GDPR, together with ISO/IEC 27001), entering into force on May 25th, 2018, will make black-box approaches difficult to use in business, since it requires the possibility to make results re-traceable on demand. To achieve this, we need to generate the underlying explanatory structures of models, which explain the cause of a model's result: Explainable AI. This is required in every field; only then will people fully trust machines. The medical domain is one of the fields that needs precision and explainability the most. The goal is to categorize the different possibilities for explainable AI. Based on the results of the research, some of the approaches will be implemented on an available dataset to verify the findings.
Supervisor:Dr. Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:Sukanya Raju
start-end: -
Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images (Project)

Reliably modeling normality and differentiating abnormal appearances from normal cases is a very appealing approach for detecting pathologies in medical images. A plethora of such unsupervised anomaly detection approaches has been proposed in the medical domain, based on statistical methods, content-based retrieval, clustering and, recently, also deep learning. Previous approaches towards deep unsupervised anomaly detection model local patches of normal anatomy with variants of Autoencoders or GANs, and detect anomalies either as outliers in the learned feature space or from large reconstruction errors. In contrast to these patch-based approaches, we show that deep spatial autoencoding models can be efficiently used to capture the normal anatomical variability of entire 2D brain MR slices. A variety of experiments on real MR data containing MS lesions corroborates our hypothesis that we can detect and even delineate anomalies in brain MR images by simply comparing input images to their reconstruction. Results show that constraints on the latent space and adversarial training can further improve the segmentation performance over standard deep representation learning.
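A minimal sketch of this reconstruction-based principle is given below: a small convolutional autoencoder trained on healthy slices, with the per-pixel reconstruction error used as the anomaly map. The architecture and threshold are illustrative assumptions and much simpler than the spatial autoencoders discussed above.

    # Minimal sketch of reconstruction-error-based anomaly segmentation with an autoencoder.
    import torch
    import torch.nn as nn

    class SliceAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def anomaly_map(model, slice_batch, threshold=0.1):
        model.eval()
        with torch.no_grad():
            recon = model(slice_batch)
        error = (slice_batch - recon).abs()            # per-pixel reconstruction error
        return error, (error > threshold).float()      # continuous map and binary segmentation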
Supervisor:
Director:
Student:
start-end: -
Calculation of signal intensity-time curves from MR images of the beating heart (Master Thesis)

In order to assess whether a patient with a narrowing of a coronary artery has an increased risk for a heart attack, one is interested in quantitative parameters of the blood supply to the heart muscle. Currently, those parameters can only be judged semi-quantitatively by visual analysis of a sequence of MRI images: approximately 300 consecutive dynamic images are acquired by an MRI (magnetic resonance imaging) scanner. During the acquisition, an intravasal contrast medium (CM) is injected and the distribution of the signal intensity increase caused by the CM is visually observed by a radiologist. The higher the increase of the observed signal intensity, the lower the patient's risk of suffering a heart attack in the future. To derive reliable statistical data from these measurements, the increase of the signal intensity must be measured quantitatively in each pixel of the image. That means the signal intensity-time curves must be derived automatically for each pixel over a time period of approximately one minute. Due to breathing and small movements of the patient during the acquisition period, the measured images cannot be evaluated directly. The main objective of this work is to develop a mathematical method to "freeze" the movement of the heart. The second objective is to calculate the signal intensity-time curve in each pixel of the frozen sequence and to visualize the result as a parameter image of the signal intensity peak.
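A minimal sketch of the second objective is given below, assuming the sequence has already been motion-corrected ("frozen") and stacked as a (T, H, W) array; the baseline-frame count is an assumption.

    # Minimal sketch: per-pixel signal intensity-time curves and a peak-enhancement parameter image
    # from a motion-corrected dynamic sequence (registration is assumed to be done beforehand).
    import numpy as np

    def intensity_time_curves(frozen_sequence, baseline_frames=5):
        """frozen_sequence: (T, H, W) array; returns per-pixel curves and the peak image."""
        baseline = frozen_sequence[:baseline_frames].mean(axis=0)     # pre-contrast signal level
        curves = frozen_sequence - baseline                           # enhancement over baseline
        peak_image = curves.max(axis=0)                               # parameter image of signal peaks
        return curves, peak_image

    if __name__ == "__main__":
        seq = np.random.rand(300, 128, 128)          # ~300 dynamic frames, as in the description
        curves, peak = intensity_time_curves(seq)
        print(curves.shape, peak.shape)              # (300, 128, 128) (128, 128)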
Supervisor:Dr. Abouzar Eslami from CAMP; affil. Prof. Dr. Michael Friebe as liaison from CAMP and TUM-IAS
Director:Prof. Nassir Navab
Student:Radhika Tibrewal
start-end: -
Mesh CNN for vessel anomaly detection (DA/MA/BA)

The goal of this project is to develop a novel neural architecture for mesh-based anomaly detection in the blood vessels.
Supervisor:Suprosanna Shit
Director:Bjoern Menze
Student:
start-end: -
3D vessel tracking using transformer (Master Thesis)

The goal of this project is to develop a novel deep learning-based method leveraging transformer to track multiple vessel instances in 3D medical image volumes.
Supervisor:Suprosanna Shit, Anjany Kumar Sekuboyina
Director:Bjoern Menze
Student:
start-end: -
A Deep Learning Approach to Synthesize Virtual CT based on Transmission Scan in hybrid PET/MR (Master Thesis)

Computed Tomography (CT) is a mandatory imaging modality for radiation treatment planning, while magnetic resonance imaging (MRI) and positron emission tomography (PET) have advantages in tumor delineation and dose prescription. With the advent of PET/MRI, this hybrid imaging modality offers the advantage of simultaneous acquisition of soft-tissue morphological imaging and molecular imaging, which provides advanced information supporting clinical diagnosis and therapy planning (1). To avoid multiple scans and additional high radiation doses, a new concept was proposed to integrate a low-dose transmission scan (TX) into a PET/MRI machine for the synthesis of a virtual CT (VCT) for treatment planning (2). However, the TX is usually extremely noisy with artifact spots, and it is necessary to smooth the sinogram to obtain interpretable images, which consequently leads to blurred, low-resolution images. The proposed Master thesis project will aim to synthesize a high-resolution virtual CT for planning based on the low-resolution transmission scan in integrated PET/MRI. This VCT is intended to substitute for CT scans in several applications such as radiotherapy treatment planning and attenuation correction. In particular, this project will develop an advanced deep learning approach to achieve imaging super-resolution.
Supervisor:Sailesh Conjeti, Kuangyu Shi, Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:Deepa Gunashekar
start-end: -
Weakly-Supervised Action Segmentation (DA/MA/BA)

Activity understanding in videos has become a popular research topic in the computer vision community because of its application to video analysis. Thanks to large-scale labeled video datasets [1,2], classifying activities in trimmed videos has made significant progress in recent years. In contrast, action segmentation, which requires finding boundaries and action labels in untrimmed videos, is still a challenging problem. The key issue is that untrimmed videos are usually quite long and contain multiple sub-activities; therefore, gathering a large-scale video dataset for action segmentation is time-consuming and cumbersome. To address these issues, recent research in this direction focuses on training deep architectures with weak labels. This project will focus on designing a new method for action segmentation with weak labels. The proposed approach will be compared against recent SOTA methods [3,4,5] on action segmentation datasets.
Supervisor:Huseyin Coskun, Prof. Nassir Navab
Director:Federico Tombari
Student:
start-end: -
Weakly-Supervised Anomaly Detection assisted by Attention Models (Master Thesis)

Localization of anatomical regions of interest (ROIs) is a natural pre-processing step in many medical image analysis tasks, for example in diagnosis. While it is sometimes trivial for physicians, it can be tedious and very time-consuming for them. Convolutional Neural Networks (CNNs) have proven to be very successful in computer vision tasks such as object detection and image classification, due to their ability to extract rich, hierarchical features that are useful for both localization and classification. In this thesis, we investigate the concepts of Self-Transfer Learning (STL) and Spatial Transformer Networks (STN). We want to exploit STL, which jointly optimizes classification and localization with only weak labels (no localization information is provided), and STN, which allows us to find a canonical representation by learning invariance to scale, rotation, translation and more generic warping. We want to explore whether the use of STN in combination with STL improves classification performance and whether STL can assist in the anomaly detection task. We evaluate our model on three medical image datasets, chest X-rays, femur X-rays and mammograms, and compare it to previous weakly supervised approaches.
Supervisor:Shadi Albarqouni, Diana Mateus
Director:Prof. Nassir Navab
Student:Amelia Jimenez Sanchez
start-end: -
Below The Surface Exploration (Master Thesis)

Interaction in an immersive virtual environment (IVE) such as a CAVE, or in our case the FRAVE, is an important issue to investigate. Within the scope of the project Virtual Arabia, we need to perform below-surface exploration. For that purpose we would like to use the magic-lens solution together with the ClearView solution (TUM3D).
Supervisor:Amal Benzina and Marc Treib
Director:Prof. Gudrun Klinker
Student:Sandro Weber
start-end: - 2014/02/17
Webly supervised human activity recognition (Master Thesis)

Supervisor:Federico Tombari, Lamberto Ballan; Christian Rupprecht
Director:Prof. Nassir Navab
Student:Ansh Kapil
start-end: -
Robustness of Knowledge Transfer Methods (Project)

Neural networks have been the solution to many problems in areas such as computer vision and natural language processing in recent years. They can achieve superhuman performance in many tasks. However, they are black boxes, which makes them difficult to rely on in sensitive cases such as medical imaging. Interpreting the internal state of neural networks helps us to improve their performance or to prevent errors by finding the reasons behind them. There have been different works on the interpretability of neural networks, which can generally be grouped into three categories [1]: manual visual inspection [2, 3], saliency analysis [4, 5] and statistical analysis [6, 7]. The first two categories rely on human evaluation, while TCAV [6] and NetDissect [7] can be evaluated quantitatively. The objective of knowledge transfer is to train a student network from one or multiple teacher networks [8, 9]. Generally, the student network is small and fast, while the teacher network is large and accurate. Tasks and goal: In this project, we aim to investigate the robustness of knowledge transfer methods based on interpretability approaches such as TCAV [6] or NetDissect [7]. This study can be useful for comparing the internal state of teacher and student networks, or for comparing the distilled student network with the same student trained in a supervised manner with labels. The experiments in this project will be done with a limited amount of data to imitate real-world conditions.
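For context, the sketch below shows the classical distillation loss (soft teacher targets plus the ground-truth term) that underlies many knowledge transfer methods; the temperature and weighting are illustrative values and are not tied to a specific method studied in the project.

    # Minimal sketch of the classical knowledge distillation loss (Hinton-style).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * (T * T)      # match the teacher's softened outputs
        hard = F.cross_entropy(student_logits, labels)        # standard supervised term
        return alpha * soft + (1 - alpha) * hard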
Supervisor:Dr. Shadi Albarqouni, Azade Farshad
Director:Prof. Dr. Nassir Navab
Student:Yousef Yeganeh
start-end: -
X-ray In-Depth Decomposition (IDP)

In this project, we would like to work further on X-ray In-Depth decomposition presented in [1] modelling the physics of X-ray to recover the depth information.
Supervisor:Shadi Albarqouni
Director:Prof. Dr. Nassir Navab
Student:
start-end: -
3D augmented virtuality sketching (Master Thesis)

Within the interdisciplinary project "Collaborative Design Platform" http://cdp.ai.ar.tum.de/ a large-scale multitouch table was developed to aid urban planning and prototyping. The table interacts with the user over a multitouch surface, and can also scan objects positioned on top of it in 2.5 dimensions. In order to enhance its presentation capabilities, an additional display will be added, which offers a 3D augmented virtuality perspective on the workspace, reflecting both the physical and the virtual aspects of the scene. The view can be controlled using a special 3D mouse. In addition, the user can sketch and draw directly on the virtual world presented on the new display.
Supervisor:Eva Artinger
Director:Gudrun Klinker
Student:Violin Yanev
start-end: -

Finished Theses



