Open topics | Type of thesis | Image |
---|---|---|
Deep Intrinsic Image Decomposition In this project we aim at decomposing a simple photograph into layers of material properties like reflectance, albedo, materials, etc. This is a very challenging but important topic in both Computer Vision and Computer Graphics, as it improves tasks like scene understanding, augmented reality and object recognition. We want to tackle this problem using the human-annotated OpenSurfaces dataset and our recent advances in deep learning, especially fully connected residual networks. | Master Thesis | |
Invariant Landmark Detection for highly accurate positioning An essential task to enable highly autonomous driving is the self-localization and ego-motion estimation of the car. Together, they enable accurate absolute positioning and reasoning about the road ahead, e.g. for path planning. If a single camera is used as sensor to measure the current vehicle location, positioning is based on visual landmarks and is related to the problem of visual odometry. Current navigation systems rely solely on GPS and vehicle odometry. Newer systems use object detections in the image, like traffic signs and road markings, to triangulate the vehicle position within a map. To do so, visual landmarks are detected in the camera image and their position relative to the vehicle is computed. Given the landmark positions, the most likely position of the vehicle with respect to the landmarks in the map can be deduced. Current approaches use detected objects as landmarks (e.g. traffic signs/lights, poles, reflectors, lane markings), but often there are not enough of these objects to localize accurately. In the scope of this project, a method to detect more generic landmarks should be developed. This method, which will focus on deep learning, should extract features that are more invariant to varying conditions (e.g. illumination) and also provide good matchability. Challenges: • Create a dataset from different sources: already existing datasets, public webcam streams and synthetic datasets. • Design and implement a method/network to extract robust generic landmarks/features in different environments (highway, city, country roads) and match them to previously extracted landmarks. • Leverage deep learning in order to achieve high invariance to different environment conditions. Tasks: • Literature review of methods to extract robust landmarks with focus on: o Robustness to changes in appearance and viewpoint. o Uniqueness, to match them correctly to already extracted landmarks. • Implementation and evaluation of a deep neural network that is capable of extracting invariant features that offer the possibility of robust matching. • Application of the features in a state-of-the-art SLAM algorithm. Literature: [1] LIFT: https://arxiv.org/abs/1603.09114 [2] TILDE: https://infoscience.epfl.ch/record/206786/files/top.pdf [3] Playing for Data: https://download.visinf.tu-darmstadt.de/data/from_games/data/eccv-2016-richter-playing_for_data.pdf [4] ORB_SLAM: http://webdiis.unizar.es/~raulmur/orbslam/ | DA/MA/BA | |
Inverse Problems in PDE-driven Processes Using Deep Learning We are looking for an extremely motivated student to work on the topic "Inverse Problems in PDE-driven Processes Using Deep Learning". The scope of this project is the intersection of numerical methods and machine learning. The objective is to develop a theoretical framework and efficient algorithms that can be applied to a broad class of PDE-driven systems. However, we can tailor the focus and scope of the project to your preferences. | IDP | |
Evaluation of iterative solving methods for the statistical reconstruction of Light Field Microscopy data Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. Once the forward light transfer is determined based on the optical system response, the reconstruction process is an inverse problem. In fluorescence microscopy, besides the read-out noise, Poisson noise is present due to the low photon count. Hence a Poisson-Gaussian mixture model would be an appropriate approach for likelihood-based statistical reconstruction. Various iteration schemes may result from different likelihoods coupled with regularization (a minimal iteration sketch is given below the table). | DA/MA/BA | |
Fast image fusion on foveated images The goal of this thesis is to combine images from an intensified CCD (EMCCD) camera and a long-wave infrared (LWIR) camera. The fused image should contain all salient features of the individual modalities. As the fused image is intended as a replacement for legacy analog night vision, the fusion algorithm should provide at least 25 frames per second. To achieve high performance, the images are to be foveated. A foveated image is an image that has been compressed by taking advantage of the perceptual properties of the human visual system, namely the decreasing resolution of the retina with increasing eccentricity. The thesis should elaborate how image fusion and foveation methods can be combined efficiently while providing a good fused image. | Master Thesis | |
Automatic Early Detection of Keratoconus Keratoconus (KCN) is a bilateral, non-inflammatory, and degenerative disorder of the cornea in the eye with an incidence of approximately 1 per 2,000 in the general population [1,2]. It is characterized by progressive thinning and a cone-shaped bulge of the cornea (fig.1) leading to substantial distortion of vision [2]. The early diagnosis of keratoconus is of great importance for patients seeking eye surgery (i.e. LASIK), as it can prevent the progression of the pathology after surgery [3-4]. Rabinowitz [5] shows that his preliminary research using wavefront analysis together with corneal topography demonstrates good classification between early KCN subtypes and normals. Further, Jhanji et al. [6] concluded that swept-source OCT may provide a reliable alternative for the parameters of corneal topography (fig.2). On the other hand, Pérez et al. [3] show that all of these modalities, including videokeratography, Orbscan, and Pentacam, together with the indices can lead to early KCN detection, however with an increase in false positive detections. Therefore, developing a highly specific diagnostic tool for KCN detection with few false positives is highly desirable. In this IDP/MA project, the objective is to analyze data from approximately 200 patients treated at the ophthalmology department at Klinikum Rechts der Isar / TUM. Using retrospective corneal topographic data (fig.2) collected during follow-ups, the aim is to build an early predictive model for KCN detection. | DA/MA/BA | |
GPU Ultrasound Simulation and Volume Reconstruction Medical ultrasound imaging has been in clinical use for decades; however, acquisition and interpretation of ultrasound images still require experience. For this reason, ultrasound simulation for training purposes is gaining importance. Additionally, simulated images can be used for the multimodal registration of Ultrasound and Computed Tomography (CT) images. The simulation process is a computationally demanding task. Thus, in this thesis a simulation method accelerated by modern graphics hardware (GPU) is introduced. The accelerated simulation utilizes a ray-based simulation model in order to provide real-time, high-throughput image simulation for training and registration purposes. Wave-based simulation methods are computationally even more demanding and have been considered unsuitable for real-time applications. In the scope of this thesis, wave-based models have been investigated, including the Digital Waveguide Mesh and the Finite-Difference Time-Domain method for solving Westervelt's equation. Initial results demonstrate the feasibility of performing near real-time wave-based ultrasound simulation using graphics hardware. Furthermore, a new algorithm is introduced for volumetric reconstruction of freehand (3D) ultrasound. The proposed algorithm intelligently divides the work between CPU and GPU for optimal performance. The results demonstrate superior performance and equivalent reconstruction quality compared to existing state-of-the-art methods. GPU-accelerated ultrasound simulation and freehand volume reconstruction are key components for fast 3D-3D (dense deformable) multimodal registration of Ultrasound and CT images, which is the subject of ongoing work. | Master Thesis | |
Keypoint Learning | Master Thesis | |
Kyphoplasty balloon simulation Kyphoplasty, a percutaneous, image-guided minimally invasive surgery, is a recently introduced treatment of painful vertebral fractures which is being performed extensively worldwide. The objective of kyphoplasty is to inject polymethylmethacrylate (PMMA) bone cement under radiological image guidance into the collapsed vertebral body to stabilize it. Before injecting the cement, an inflatable balloon is placed in the vertebral body and subsequently inflated in order to restore the vertebral height and correct the kyphotic deformity caused by the compression fracture. After the balloon is deflated and removed from the vertebral body, the created cavity is filled with PMMA bone cement. The goal of the project is the implementation and validation of a kyphoplasty balloon simulation. | DA/MA/BA | |
Left Atrium Segmentation in 3D Ultrasound Using Volumetric Convolutional Neural Networks Segmentation of the left atrium and deriving its size can help to predict and detect various cardiovascular conditions. Automation of this process in three-dimensional Ultrasound image data is desirable, since manual delineations are time-consuming, challenging and operator-dependent. Convolutional neural networks have made improvements in computer vision and in medical image processing. Fully convolutional networks have successfully been applied to segmentation tasks and were extended to work on volumetric data. This work examines the performance of a combined neural network architecture of existing models on left atrial segmentation. The loss function merges the objectives of volumetric segmentation, incorporation of a shape prior and the unsupervised adaptation to different Ultrasound imaging devices. | Master Thesis | |
Deep Generative Model for Longitudinal Analysis Longitudinal analysis of a disease is important for understanding its progression as well as for designing prognosis and early diagnostic tools. From longitudinal sample series, where data is collected at multiple time points, both the spatial structural abnormalities and the longitudinal variations are captured. Therefore, the temporal dynamics of a disease are more informative than static observations of the symptoms, in particular for neuro-degenerative diseases whose progression spans years with early subtle changes. In this project, we will develop a deep generative method to model the lesion progression over time. | Master Thesis | |
Laparoscopic Freehand SPECT | Master Thesis | |
Instrument Tracking for Safety and Surgical View Optimization in Laparoscopic Surgery Laparoscopic (minimally invasive, key hole) surgery involves usage of a laparoscope (camera) and laparoscopic instruments (graspers, scissors, monopolar and bipolar devices). First the abdomen is insufflated with carbon dioxide to create a space between the abdominal wall and organs. The laparoscope and laparoscopic instruments are then inserted through small 5 or 10 mm incisions in the abdomen. The laparoscope projects the image within the abdomen onto a screen. The surgeon can therefore visualise the inside of the abdomen and the operating instruments to carry out the surgical procedure. At present, there is increasing interest in surgical procedures using a robot-assisted device. The advantages of using such a device include a steady, tremor-free image, the elimination of small inaccurate movements and decreased energy expenditure by the assistant. A number of studies have evaluated the advantages of robotic camera devices compared with manually controlled cameras or different types of devices. The possibility of developing a laparoscope with a tracking system that will automatically identify and follow the operating surgeon’s instruments does provide significant benefit without requiring bulky robotic systems. Firstly, removing the need to always have an assistant will reduce cost. With an instrument tracking system, there is no need for additional pedals and headband to move the camera, which can be confusing, uncomfortable, unsafe and may actually increase the length of surgery. Besides that, increased safety of the procedure will be achieved by providing a steadier image and with incorporated safety mechanisms. The current project aims at developing a laparoscopic camera system mounted on the operating bed. The proposed system will track the primary surgeon’s instruments without the need for any constant input. The aim is thereby to recognise key tools with priority (sharp tool first), and track their movement in situ to move the camera accordingly. With safety features being one priority, the camera will by default be focused on the instrument with higher priority (i.e. scissors, monopolar and bipolar devices) in view. Whenever e.g. the monopolar or bipolar device is out of view, this will in future allow the energy source of those instruments to be disabled, which will greatly reduce one of the most common causes of injuries during laparoscopic surgery. | DA/MA/BA | |
Real-time large-scale SLAM from RGB-D data You will extend an existing RGB-D reconstruction system to support large-scale scenes. In a first step you will evaluate and implement state-of-the-art algorithms for tracking and reconstruction from RGB-D data. Secondly, you will also evaluate and implement algorithms for texturing the obtained reconstruction from camera images. The real-time critical components will be implemented on a GPU. | Master Thesis | |
Comparison of methods to produce a two-layered LDI representation from a single RGB image One of the major drawbacks of the representations used in computer vision is the lack of information about the portion of the scene that has been occluded by foreground objects. Depth maps store the results of a mapping from each pixel to its distance from the camera. Since the pair of RGB image and depth map stores more information than an RGB image alone, it is considered 2.5D. However, a simple depth map fails to alleviate the problem, as it stores values only for the visible part of an image. Unlike human beings, who are able to perceive information even if it is hidden by confidently extrapolating from what is visible, computer vision models are restricted to what is immediately visible. This has been addressed with other forms of representations of 2D images, one of which is the layered depth image (LDI). However, getting better LDI predictions from a single RGB image is challenging; in this work we compare two methods and further experiment with them to see if they can be improved. | Bachelor Thesis | |
Learning to learn: Which data do we have to annotate first in medical applications? Although semi-supervised and unsupervised learning have been developed recently, their performance is still bounded by that of fully-supervised learning. However, the cost of annotation is extremely high in medical applications, since medical specialists (radiologists or pathologists) are required to annotate the data. For these reasons, it is almost impossible to annotate all available data, and sometimes only a subset of a dataset can be selected for annotation due to a limited budget. Active learning is the research field that tries to deal with this problem [1-5]. Previous studies have mainly followed three approaches: uncertainty-based approaches, diversity-based approaches, and expected model change [3]. These studies have verified that active learning has the potential to reduce annotation cost (a minimal uncertainty-based selection sketch is given below the table). In this project, we aim to propose a novel active learning method which learns a simple uncertainty estimator to select more informative data for training deep neural networks in medical applications. | Master Thesis | |
Learning-based Surgical Workflow Detection from Intra-Operative Signals The goal of this project will be to apply methods from Machine Learning (ML) to medical data sets in order to deduce the current workflow phase. These data sets were recorded by our medical partners during actual laparoscopic cholecystectomies and contain binary values (like the usage vector of all possible surgical instruments) as well as analog measurements (e.g. intra-abdominal pressure). By learning from labeled data, methods like Random Forests or Hidden Markov Models should be able to detect which of the known phases is the most probable, given the data at hand. | Master Thesis | |
Continual and incremental learning with a less-forgetting strategy Recently, deep learning has had great success in various applications such as image recognition, object detection, and medical applications. However, in real-world deployment, the number of training data (and sometimes the number of tasks) continues to grow, or the data cannot be given all at once. In other words, a model needs to be trained over time as the data collection in a hospital (or multiple hospitals) grows. A new type of lesion could also be defined by medical experts. Then, the pre-trained network needs to be further trained to diagnose these new types of lesions with the increased data. 'Class-incremental learning' is a research area that aims at training the learned model on new tasks while retaining the knowledge acquired on past tasks. It is challenging because DNNs easily forget previous tasks when learning new tasks (i.e. catastrophic forgetting). In real-world scenarios, it is difficult to store all the training data that was used to train the DNN previously, due to the privacy issues of medical data. In this project, we will develop a solution to this problem in medical applications by investigating an effective and novel learning method. | Master Thesis | |
Depth estimation in Light Field Microscopy Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be arranged into multi-views and used to retrieve the depth map of the imaged scene. | DA/MA/BA | |
A light field renderer for Light Field Microscopy data visualization Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to retrieve angular perspectives of the imaged sample. | DA/MA/BA | |
Investigation and Implementation of Lightfield Forward Models Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. | Master Thesis | |
Real-Time Volumetric Fusion for iOCT Microscope-integrated Optical Coherence Tomography (iOCT) is able to provide live cross-sectional images during an ophthalmic intervention. Current OCT engines have a limited acquisition rate, allowing either cross-sectional 2D images at a high frame rate, low-resolution and small field-of-view volumes at a medium update rate, or high-resolution volumes at a low update rate. In order to provide full field-of-view visualization during a surgical intervention, a high-resolution scan can be acquired at the start of the intervention, which is then tracked with an optical retina tracker to compensate for movement. The goal of this thesis is to devise a method to dynamically update this high-resolution volume with the live data acquired during the ongoing intervention, in order to provide a responsive visualization of the surgeon's working environment. Integration of the live data into the volume requires compensation for deformation of the tissue as well as incorporation of motion data from the optical tracker, to accurately find the correct region to update in the volume. | Bachelor Thesis | |
Understanding and optimization of a low energy X-ray generator for intra-operative radiation therapy The research activity is focused on the combination of minimal-invasive therapy techniques with diagnostic imaging and navigation modalities, e.g. application of intra-operative radiation therapy, MRI guided high focused ultrasound or intra-operative SPECT with MRI guidance. An initial development goal is to use advanced MRI imaging to find and localize small pathologies and subsequently perform minimal-invasive therapy with a small and lightweight X-ray generator and continuous intra-operative imaging for visualization and navigation. An in-vitro setup will be created to apply the low-energy X-rays to real cancer cells and study their biological effectiveness. | IDP | /twiki/pub/Main/AmalBenzinaStudentProjects/xraygenerator.jpg |
Computational Modeling of Respiratory Motion Based on 4D CT Respiratory lung motion has a serious impact on the quality of medical imaging, treatment planning and intervention, and radiotherapy. This motion not only reduces imaging quality, especially for positron emission tomography (PET), but also inhibits the determination of the exact position and shape of the target during radiotherapy. Based on prior knowledge of average tissue properties, patient-specific imaging (4D CT) and a surrogate signal, a computational motion model can be created. This enables researchers and developers to simulate and generate information about a respiratory phase not covered by the imaging procedure. Therefore, the internal deformation of the lung and its containing cancerous tissue can be computed and taken into account during further imaging acquisitions or radiotherapy. | Master Thesis | |
Diverse Anomaly Detection Projects @deepc | Master Thesis | |
Multiple sclerosis lesion segmentation from longitudinal brain MRI Longitudinal medical data is defined as imaging data obtained at more than one time point, where subjects are scanned repeatedly over time. Longitudinal medical image analysis is a very important topic because it can overcome limitations that arise when only spatial data is utilized. Temporal information can provide very useful cues for accurately and reliably analyzing medical images. To effectively analyze temporal changes, it is required to segment the regions of interest accurately in a short time. In a series of images acquired over multiple imaging sessions, the available cues for segmentation become richer with the intermediate predictions. In this project, we will investigate a way to fully exploit this rich source of information. | IDP | |
MS Lesion Segmentation in multi-channel subtraction images | IDP | |
Gradient Surgery for Multitask Longitudinal CT Analysis Longitudinal changes of pathology in CT images are an important indicator for analyzing patients with COVID-19. In the clinical setting, clinicians read longitudinal images to obtain various information such as disease progression, the need for ICU admission, and the severity of the disease, which is important to increase the survival rate. However, reading longitudinal 3D CT scans takes a long time, which might decrease the efficiency of the clinicians. In this project, we will explore a method to automatically analyze longitudinal CT scans to support the radiologist's reading. In particular, we will explore a multitask learning method to fully exploit the relation between different tasks and to adaptively balance the gradients from different objective functions (i.e. gradient surgery; see the sketch below the table). | Master Thesis | |
Simulation of Muscle Activity for an Augmented Reality Magic Mirror We have previously shown an augmented reality (AR) magic mirror. It creates the illusion that a user standing in front of the system can look inside their own body. The video of this system received a lot of attention and has been viewed over 200,000 times on YouTube. We now want to build a system for human anatomy education using augmented reality visualization. We want to use the system to visualize muscle activity. | DA/MA/BA | |
Tracking using Autoencoders and Manifolds In this project we want to explore the possibilities of using autoencoders to perform object tracking in video sequences. The object's bounding box is given in the first frame and needs to be tracked throughout the sequence. We would like to use autoencoders to encode the appearance of the object and to predict its future state (location and appearance). | DA/MA/BA | |
Marker-based inside-out tracking for medical applications using a single optical camera Nowadays, tracking and navigation for small imaging systems are performed mainly by devices based on infrared cameras or electromagnetic fields. These systems impose some disadvantages for use with freehand devices such as gamma cameras or ultrasound probes: a separate system for "outside-in" tracking is needed, which causes the main issue of a required line-of-sight between the tracking system and the tool to be tracked in the surgical environment. To solve this problem, the idea is to have a small add-on system attached to the devices being tracked. The add-on system consists of an optical camera that tracks several markers attached to the patient and calculates the inverse trajectory, i.e. the movement of the device. The idea of this project is to develop tracking software, the "inside-out" tracking technique, with the required data set to achieve a more accurate tracking and image fusion process. An algorithm for multi-marker tracking and calculation of the "best pose" will be implemented, and the problems of illumination, occlusion, and stability will be addressed. Finally, the accuracy will be evaluated and compared to other tracking modalities, especially optical and electromagnetic tracking. | Master Thesis | |
Class-Level Object Detection and Pose Estimation from a Single RGB Image Only 2D Object Detection has seen great advancements over the last years. For instance, detectors like YOLO or SSD are capable of performing accurate localization and classification on a large number of classes. Unfortunately, this does not hold true for current pose estimation techniques, as they have trouble generalizing to a variety of object categories. Yet, most pose estimation datasets are composed of only a very small number of different objects to accommodate for this shortcoming. Nevertheless, this is a severe problem for many real-world applications like robotic manipulation or consumer-grade augmented reality, since otherwise the method would be strongly limited to this handful of objects. Therefore, we would like to propose a novel pose estimation approach for handling multiple object classes from a single RGB image only. To this end, we would like to extend a very common 2D detector, i.e. Mask R-CNN [1], to further incorporate 6D pose estimation. Eventually, the overall architecture might also involve fully regressing the 3D shapes of the detected objects. | Master Thesis | |
Radiation Dose Reduction for Trabecular Bone Structure Analysis in Osteoporosis Diagnostics by Using Iterative Reconstruction | Master Thesis | |
Medical Augmented Reality with SLAM-based perception | IDP | |
StainGAN: Stain style transfer for digital histological images Digitized histopathological diagnosis is in increasing demand, but stain color variations due to stain preparation, differences in raw materials, manufacturing techniques of stain vendors and use of different scanner manufacturers are imposing obstacles to the diagnosis process. The problem of stain variation is well defined, with many proposed methods to overcome it, each depending on a reference slide image to be chosen by an expert pathologist. We propose a deep-learning solution to this problem based on unpaired image-to-image translation using cycle-consistent adversarial networks, eliminating the need for an expert to pick a representative reference image. Our approach showed promising results, which we compare quantitatively and qualitatively against state-of-the-art methods. | Master Thesis | |
Understanding Medical Images to Generate Reliable Medical Reports The reading and interpretation of medical images is usually conducted by specialized medical experts [1]. For example, radiology images are read by radiologists, who write textual reports to describe the findings regarding each area of the body examined in the imaging study. However, writing medical-imaging reports requires experienced medical experts (e.g. experienced radiologists or pathologists) and is time-consuming [2]. To assist with the administrative duties of writing medical-imaging reports, in recent years a few research efforts have been devoted to investigating whether it is possible to automatically generate medical reports for a given medical image [3-8]. These methods are usually based on the encoder-decoder architecture which has been widely used for image captioning [9-10]. In this project, a novel automatic medical report generation method is investigated. It is challenging to generate accurate medical reports with large variation due to the high complexity of natural language [11], so traditional captioning methods suffer from the problem that the model duplicates completely identical sentences from the training set. To address the aforementioned limitations, this project focuses on the development of a reliable medical report generation method. | Project | |
Creating Diagnostic Model for Assessing the Success of Treatment for Eye Melanoma An eye melanoma, also called ocular melanoma, is a type of cancer occurring in the eye. Patients having an eye melanoma typically remain free of symptoms in early stages. In addition, it is not visible from the outside, which makes early diagnosis difficult. The choroidal melanoma, which is located in the choroid layer of the eye, is the most common primary malignant intra-ocular tumor in adults. At the same time, intra-ocular cancer is relatively rare – only an estimated 2,500 - 3,000 adults were diagnosed in the United States in 2015. Treatment usually consists of radiotherapy, or surgery if radiotherapy was unsuccessful. For larger tumors, radiation therapy may be associated with some loss of vision. Currently, it is unknown which factors lead to the development of such cancer and which factors determine whether a patient is responding to radiotherapy. In this master thesis project, the objective is to analyze data from approximately 200 patients treated at the ophthalmology department of the Ludwig-Maximilians University hospital. Treatment consisted of a single-session, frameless outpatient procedure with the Cyberknife System by Accuray. Using pre-procedural data and information collected during follow-up, the aim is to identify factors predictive of a patient's response to treatment and the impact on a patient's visual acuity, measured by the so-called Visus. | Project | |
3D Mesh Analysis and Completion During a scanning process, it is not possible to acquire all parts of the scanned surface. Data are inevitably missing due to the complexity of the scanned part or an imperfect scanning process. This creates holes in the mesh, bad triangles, and numerous other problems and issues. The goal of the project is to use available libraries to a) compute a quality measure and characteristics for a given 3D mesh, b) identify problems/issues and c) fix them. | IDP | |
Meta-clustering | Master Thesis | |
Meta-Optimization | Project | |
Glass/Mirror Detection Mirrors and transparent objects have long been an issue for simultaneous localization and mapping (SLAM). Mirrors reflect light rays, which causes wrong reconstructions, and windows are hard for cameras to observe. This is especially dangerous for robotics, since robots may try to drive into a mirror or through a window. The main goal of this work is to solve this issue by detecting mirrors/windows and reconstructing a correct map. The potential approach is to use an object detection network, such as YOLO, to detect possible mirrors and windows, and then to design a function to correctly reconstruct the reflected region in the map. This work involves knowledge in deep learning and SLAM. | DA/MA/BA | |
Modeling brain connectivity from multi-modal imaging data | Master Thesis | |
MR integration of an intra-operative gamma detector and evaluation of its potential for radiation therapy | DA/MA/BA | |
MR-CT Domain Translation of Spine Data The goal of this project is to synthesise MR images from CT scans of the spine and vice versa in an unpaired setting. | DA/MA/BA | |
Multi-modal Deformable Registration in the Context of Neurosurgical Brain Shift Registration of medical images is crucial for bringing data obtained by different sources or at a different time into a common reference frame. Adding real-time requirements to 3D multi-modal registration allows physicians to analyze the combination of medical data both preoperatively as well as intraoperatively, providing additional benefits for the patient and helping to achieve a desirable procedure outcome. Different applications usually induce several underlying geometrical transformations ranging from global rigid movements to local nonlinear deformations such as brain shift in neurosurgery or compression of liver tissue during respiratory motion. Using a deformable registration to correct local tissue distortions allows for a transformation of preoperative data into an intraoperatively acquired local reference frame. Preoperative X-ray Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) data are commonly available for diagnostics and procedure planning, but a multi-modal deformable registration with intraoperative Ultrasound (US) is needed for the successful guidance of minimally invasive procedures. Within the scope of this thesis, existing registration techniques have been researched and compared and new cost functions for multi-modal deformable registration of 3D US with preoperative CT and MRI data are proposed. | Master Thesis | |
Iterative Iodine Detection and Enhancement Algorithm for Dual-Energy CT | Master Thesis | |
Registration of Multi-View Ultrasound with Magnetic Resonance Images | Master Thesis | |
Multiple Screen Detection for Eye-Tracking Based Monitor Interaction A modern operating room usually offers multiple different monitors to present various information to the surgical staff. With the trend to go from highly invasive open surgery to minimally invasive techniques such as laparoscopic surgery, Single-Port surgery or even NOTES, the amount of additional monitors is likely to increase. Knowing which monitor the surgeon is looking at, and on which part of the monitor they are focused, allows for a wide variety of supporting systems, such as automatic adjustments of the endoscopic camera position through a robotic system. The goal of this work is to develop a method to recognize monitors with changing content through the cameras of head-mounted eye-tracking systems and translate the detected gaze point to the coordinate frame of the detected monitor. The work will be based on an existing framework (developed in C#) that is able to detect a single monitor, which should be extended to an arbitrary number of monitors and to distinguish between them. | DA/MA/BA | |
Neural solver for PDEs | Hiwi | |
New Mole Detection | Master Thesis | |
Robust training of neural networks under noisy labels The performance of supervised learning methods highly depends on the quality of the labels. However, accurately labeling a large number of samples is a time-consuming task, which sometimes results in mislabeling. When neural networks are trained with noisy data, they might be biased towards the noisy data, and therefore their performance could be poor. While label noise has been widely studied in the machine learning community, only a few studies have been reported that identify or ignore noisy labels during training. In this project, we will investigate ways to robustly train neural networks on noisy data. In particular, we will focus on exploring effective learning strategies and loss correction methods to address the problem. | Master Thesis | |
Organs at Risk Detection and Localization for Radiation Therapy Planning using Transformers In this project, we explore 3D transformers for organs-at-risk detection and localization in volumetric medical imaging relevant for radiation therapy planning. The student will apply and develop state-of-the-art transformers for medical image detection and localization. | Master Thesis | |
Real-Time Simulation of 3D OCT Images Optical Coherence Tomography (OCT) is widely used in diagnosis for ophthalmology and is also gaining popularity in interventional settings. OCT generates images in an image formation process similar to ultrasound imaging: Coherent light waves are emitted into the tissue and this light signal is partially reflected at discontinuities of optical density. The reflected light waves are then used to reconstruct a depth slice of the tissue. The ability to simulate such a modality in real time has many potential applications: For example, it can greatly help to evaluate image processing algorithms where ground truth is not easily obtainable. It is also a crucial part of a fully virtual simulation environment for ophthalmic interventions, which can be used for training as well as prototyping of visualization concepts. As a first step, existing simulation algorithms shall be reviewed and evaluated in terms of computational efficiency when adapted to 3D. A new or adapted algorithm shall be proposed to support simulation of OCT images from a volumetric model of the eye. This should consider efficient implementation on the GPU and consider realistic simulation of the modality's artifacts, such as speckle noise, reflections and shadowing. | Master Thesis | |
Self-supervised learning for out-of-distribution detection in medical applications Although recent neural networks have achieved great success when the training and testing data are sampled from the same distribution, in real-world applications it is generally not possible to control the test data distribution. Therefore, it is important for neural networks to be aware of uncertainty when new kinds of inputs (so-called out-of-distribution inputs) are given. In this project, we consider the problem of out-of-distribution detection in neural networks. In particular, we will develop a novel self-supervised learning approach for out-of-distribution detection in medical applications. | Master Thesis | |
Seamless stitching for 4D opto-acoustic imaging Optoacoustic tomography enables high-resolution biological imaging based on the excitation of ultrasound waves due to the absorption of light. A laser pulse penetrates soft tissue up to a few centimeters in depth and provides 3D visualization of biological tissues. With its rich contrast and high spatial and temporal resolution, optoacoustic tomography is especially preferred for vasculature imaging. The size of a single optoacoustic volume is limited by the size and the field of view of the scanner. In order to get a good general view of the finer biological structure, however, greater fields of view are necessary. During this master thesis we will investigate several methods for combining multiple volumes into one larger volume and evaluate which of the existing methods are adaptable to optoacoustic scans. Therefore, we need to find a way to align volumes to each other without having their positions tracked. Additionally, the voxels in the overlapping areas have to be blended in a way that avoids abrupt transitions or resolution losses, even though the resolution of each scan decreases around the volume edges and with distance to the scanner. Finally, we aim to propose a method to seamlessly stitch several optoacoustic scans into one high-resolution volume without any additional information on the position of the single volumes. | DA/MA/BA | |
Image based tracking for medical augmented reality in orthodontic application This Master Thesis suggests a low-cost Augmented Reality system, termed OrthodontAR, for orthodontic applications and examines image-based tracking techniques specific to orthodontic use. The procedure addressed is guided bracket placement for orthodontic correction using dental braces. Related research has developed FEM simulations based on cone-beam CT reconstructions of teeth and bone. Such simulations could be used in the planning of optimal bracket placement and wire tension, such that patient teeth move in an optimal manner while minimizing rotation. The benefits would include reduced overall chair time due to fewer corrections and reduced likelihood of relapse due to reduced twisting. The system suggested in this thesis tackles the guided placement of brackets on the teeth, which is required to realize pre-procedure planning. Augmentation of a patient video showing a bracket at its planned position would suffice. The surgeon could visually align planned and actual position in a video see-through head-mounted display (HMD). To reduce technical complexity, the system shall be fully image guided. It shall rely on information from both CT and video images to track the patient's jaw. The goal of this thesis is to develop and evaluate image-based methods to overlay the CT of the patient with the video image. A prototype system shall be evaluated in terms of robustness and accuracy to determine if it meets practical requirements. | DA/MA/BA | |
Computer-aided Early Diagnosis of Pancreatic Cancer based on Deep Learning Pancreatic ductal adenocarcinoma (PDAC) remains one of the deadliest cancers worldwide, and most cases are diagnosed at an advanced and incurable stage (1). For the year 2020, it is estimated that the number of cancer deaths caused by PDAC will surpass colorectal and breast cancer, making it responsible for the most overall cancer deaths after lung cancer (2). This lethal nature of PDAC has led to the consensus of screening high-risk individuals (HRIs) at an early, curable stage to improve survival (3-6). However, there is no non-invasive imaging method available for effective screening of PDAC at the moment. Strong evidence has shown that the pathological progression from normal ductal tissue to PDAC is via paraneoplastic lesions, such as pancreatic intraepithelial neoplasia (PanIN), intraductal papillary mucinous neoplasm (IPMN) and mucinous cystic neoplasm (MCN) (7). Pancreatic carcinogenesis progresses for years from precursors to invasive cancer, indicating a long window of opportunity for early diagnosis in the curative stage (8). Deep learning technologies extend the human perception of information from digital data, and their implementation has led to record-breaking advancements in many applications. The proposed master thesis will employ deep learning methods on CT or PET imaging for the early diagnosis of the precursor lesion IPMN. The student is expected to have good knowledge in medical imaging. Advanced skill in Python programming is required. | Master Thesis | |
PET-Histology Prostate Cancer Segmentation In this project, prostate cancer segmentation will be studied leveraging PET and Histology modalities in a voxel to voxel correspondence. The student will apply state-of-the-art deep learning architectures for medical image segmentation. | Project | |
Automatic Detection of Probe Count and Size in Digital Pathology In recent times, an increasing trend towards an automatic sample preparation process can be observed in histopathology. In conjunction with the startup Inveox, laboratory automation enhances efficiency, increases process safety and eliminates potential errors. Tracking and processing of incoming probes is currently still done manually - and this is where the Inveox technology provides a fundamental step forward. The goal of this project is to develop a solution for the automatic detection of the size and number of tissue probes within the automatic processing system of Inveox. This includes the selection of appropriate hardware and its arrangement within the automation system. On this foundation, a method to automatically analyze the size (area) and number of samples should be developed. | DA/MA/BA | |
Persistent SLAM | Master Thesis | |
Investigation of Interpretation Methods for Understanding Deep Neural Networks Machine learning and deep learning have made breakthroughs in many applications. However, the basis of their predictions is still difficult to understand. Attribution aims at finding which parts of the network’s input or features are the most responsible for making a certain prediction. In this project, we will explore perturbation-based attribution methods (see the sketch below the table). | Project | |
Improving photometric quality of SLAM Existing incremental scene reconstruction approaches rely on different fusion methods to integrate sensor data from different view angles in order to reconstruct a scene. For example, KinectFusion [1] uses a running average on the TSDF [3] and RGB values of each voxel (see the sketch below the table). Similar aggregation methods are also used in other works [2]. Accurate geometry can be reconstructed using this approach. However, the reconstructed texture is usually blurry and less realistic (see Figure 1). | DA/MA/BA | |
Photorealistic Rendering of Training Data for Object Detection and Pose Estimation with a Physics Engine 3D Object Detection is essential for many tasks such as Robotic Manipulation or Augmented Reality. Nevertheless, recording appropriate real training data is difficult and time consuming. Due to this, many approaches rely on synthetic data to train a Convolutional Neural Network. However, those approaches often suffer from overfitting to the synthetic world and do not generalize well to unseen real scenes. There are many works that try to address this problem. In this work we follow this direction and intend to render photorealistic scenes in order to cope with this domain gap. Therefore, we will use a physics engine to generate physically plausible poses and use ray-tracing to render high-quality scenes. | Bachelor Thesis | |
Photorealistic Rendering of Training Data for Object Detection and Pose Estimation with a Physics Engine 3D Object Detection is essential for many tasks such as Robotic Manipulation or Augmented Reality. Nevertheless, recording appropriate real training data is difficult and time consuming. Due to this, many approaches rely on synthetic data to train a Convolutional Neural Network. However, those approaches often suffer from overfitting to the synthetic world and do not generalize well to unseen real scenes. There are many works that try to address this problem. In this work we follow this direction and intend to render photorealistic scenes in order to cope with this domain gap. Therefore, we will use a physics engine to generate physically plausible poses and use ray-tracing to render high-quality scenes. In this particular work, we will extend another thesis to improve the renderings' quality, e.g. by enhancing the rendering realism in terms of lighting and reflections. | Bachelor Thesis | |
Deep Learning to Solve Sedimentation Diffusion | Master Thesis | |
Planning on Dense Semantic Reconstructions | DA/MA/BA | |
3D Object Detection and Segmentation from Point Clouds With the success of CNN architectures in computer vision tasks such as object detection and semantic segmentation on 2D data and images, there has been ongoing research on how to apply such deep learning models to 3D data. In fields such as robotics and autonomous driving, one can use 3D depth sensors to capture 3D data. However, these data are sparse and computationally challenging to process. In this project, we want to process 3D data, namely point clouds, segment them semantically and predict bounding boxes around the objects. | DA/MA/BA | |
Integration of a Component-Based Driving Simulator and Experiments on Multimodal (Haptic and Optical) Driver Assistance Coming soon | Master Thesis | |
Siemens AG: X-ray PoseNet - Recovering the Poses of a Portable X-Ray Device with Deep Learning For most CT setups, the system's geometric parameters are usually known. This is necessary to compute an accurate reconstruction of the scanned object. Unfortunately, for a mobile CT this might not be the case. However, to enable the reconstruction of an object given its projections with unknown geometric parameters, this master thesis explores the possibility of using Convolutional Neural Networks to train a model that estimates the geometric parameters needed for tomographic reconstruction. | Master Thesis | |
Memory-enhanced Category-Level Pose Estimation Category-level pose estimation jointly estimates the 6D pose (rotation and translation) and object size for unseen objects with known category labels. Currently, the state-of-the-art methods in 9D are FS-Net [1] and DualPoseNet [2]. One straightforward idea to improve the performance is to introduce priors into the network. ShapePrior [3] and CPS [4] leverage a point cloud to represent the mean shape of each category; FS-Net adopts the average size of each category. We, instead, can use a memory module to store typical shapes of each category, similar to point cloud segmentation methods [5]. Ways to establish the memory module: 1) First train the network to extract features and then utilize the features to reconstruct the observed points, as in FS-Net. 2) Assuming the features follow a Gaussian mixture distribution, use k-means to build the module; other unsupervised learning methods may also be applicable. The way to train the network: our network structure is similar to FS-Net and ShapePrior, so the training procedures may be similar too. References: [1] Chen, W., Jia, X., Chang, H. J., Duan, J., Shen, L., & Leonardis, A. (2021). FS-Net: Fast Shape-based Network for Category-Level 6D Object Pose Estimation with Decoupled Rotation Mechanism. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1581-1590). [2] Lin, J., Wei, Z., Li, Z., Xu, S., Jia, K., & Li, Y. (2021). DualPoseNet: Category-level 6D Object Pose and Size Estimation using Dual Pose Network with Refined Learning of Pose Consistency. arXiv preprint arXiv:2103.06526. [3] Tian, M., Ang, M. H., & Lee, G. H. (2020, August). Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation. In European Conference on Computer Vision (pp. 530-546). Springer, Cham. [4] Manhardt, F., Wang, G., Busam, B., Nickel, M., Meier, S., Minciullo, L., ... & Navab, N. (2020). CPS++: Improving Class-level 6D Pose and Shape Estimation From Monocular Images With Self-Supervised Learning. arXiv preprint arXiv:2003.05848. [5] He, T., Gong, D., Tian, Z., & Shen, C. (2020). Learning and Memorizing Representative Prototypes for 3D Point Cloud Semantic and Instance Segmentation. arXiv preprint arXiv:2001.01349. | Master Thesis | |
Dealing with the ambiguity induced by object symmetry in pose estimation tasks The task of 3D pose estimation from 2D camera views is currently a very popular research topic. However, most approaches do not take into account the fact that a single view can result in several equally valid outcomes due to symmetries in an object. This uncertainty can lead to inaccurate results and ignores useful information in the image. In this thesis we intend to explicitly model this pose ambiguity as a multiple hypothesis prediction problem, reformulating existing single-prediction approaches. The resulting model should also collapse to a single outcome when there is only one valid pose. Additionally, we want to estimate the potential symmetry axes of an object based on the predicted poses. The final pipeline will also include an object detection system and should work in real time on standard hardware. | Master Thesis | |
Predicate-based PET-MR visualization The combined visualization of multi-modal data such as PET-MR is a challenging task. The recently introduced predicate paradigm for visualization offers a promising approach to reduce the dimensionality and complexity of the classification (transfer function) domain and provides the clinician with an intuitive user interface. The goal of this project is to extend this technique to multi-modal visualization and apply it, for instance, to PET-MR scans of the prostate. | Master Thesis | |
Full color surgical 3D printed tractography In preparation for surgery, many surgeons use visualization tools to plan the surgery and to find the possible challenges that they might face in the operating room. 3D volume rendering is essential to visualize certain structures that are not easily understandable in 2D reconstructions. One example is tractography, the representation of neural tracts, which is used for planning of brain tumor resection. It is possible to display tractography in 2D with color coding, but it is more comprehensible in 3D. Medical 3D printing use is increasing and it is applicable for a variety of use cases. It is used for printing implants but also to visualize complex structures prior to surgery. The currently available results are often monochrome or use a reduced set of colors. However, the newest generation of 3D printers supports a large number of colors and materials that enable high-quality prints. They also support printing of transparent materials. The scope of this master thesis is to evaluate how to efficiently map colors and textures in volume rendered medical image views to one or several surface representation files (e.g. VRML) used in 3D printing, and in particular, to get an illustrative 3D printed tractography of the brain. The main focus of the master thesis is to evaluate different methods of transforming a volume rendered surface area (with semitransparent voxels adjacent to opaque voxels and their perceived color) to a surface representation. | DA/MA/BA | |
Development and Evaluation of A Freehand SPECT Scanning Simulator | Master Thesis | |
Extrinsics calibration of multiple 3D sensors Multiple 3D sensor setups are now increasingly used for a variety of computer vision applications, including rapid prototyping, reverse engineering, body scanning and automatic measurements. This project aims at developing a new approach for the calibration of the extrinsic parameters of multiple 3D sensors. The goal is to devise a technique which is simple and fast but also accurate in the estimation of the 3D pose of each sensor. The project will include a study of the state of the art in the field, software development of the calibration technique (in C++) and experimental validation. | DA/MA/BA | |
Radiation Exposure Estimation of full surgical procedures using CamC | Master Thesis | |
A New Computational Algorithm for Treatment Planning of Targeted Radionuclide Therapy | DA/MA/BA | |
Comparing light propagation models in Light Field Microscopy Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. | DA/MA/BA | |
Deep Learning for Semantic Segmentation of Human Bodies In this project we want to investigate a deep learning approach to tackle the challenging problem of semantic segmentation of human bodies. This task will be one of the core modules of a 3D reconstruction framework we are currently developing. Given a set of depth maps of the target object from multiple views, the goal is to develop a method that identifies the body parts in each of them. Specifically, we intend to explore the potential of Convolutional Neural Networks (CNNs) in this scenario. Previous work has been done in using CNNs to infer semantic segmentation from RGB data. In the absence of color information, semantic segmentation becomes more challenging. With this in mind, we utilize the approach in [1], where a dense correspondence is found between two depth images of humans. In our task, having a known segmentation map for a reference depth image and assigning such correspondence, it is possible to infer the segmentation for new target depth maps. [1] Lingyu Wei, Qixing Huang, Duygu Ceylan, Etienne Vouga, Hao Li. Dense Human Body Correspondences Using Convolutional Networks. CVPR. 2016. | Master Thesis | |
Reconstructing the MI. Building in a Day | Project | |
Computational image refocus in Multi-focused Light Field Microscopy Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to computationally refocus at a different depth post-acquisition. | DA/MA/BA | |
Hough-based Similarity Measures in Intensity-based Image Registration | Master Thesis | |
Interventional Retina Tracking | Master Thesis | |
Robot-Assisted Vitreo-retinal Surgery Pars Plana Vitrectomy is a minimally invasive intra-retinal surgery that has revolutionized retinal surgery since it was first proposed in the 1970s. This sutureless technique involves the use of smaller surgical instruments (25 gauge, 0.51 mm in diameter) and has been used to treat conditions not treatable before. Moreover, it has proved to have a lower complication rate and a shorter healing period than standard vitreo-retinal surgery. The barrier to improving the outcomes of this technique lies, however, in the surgeon's abilities and dexterity. In this line, an assisting robotic master-slave device could enhance the surgeon's skills when manipulating the surgical tools. The slave device is in charge of manipulating the tools, whereas the master device is manipulated by the surgeon to control the slave device. A master input device for controlling the existing slave robot is to be designed. Several aspects concerning the operating room environment, surgery requisites, the surgeon's manipulation intuition, and compatibility with the slave device should be taken into account when finding the optimal solution. | Master Thesis | |
Robotic Anisotropic X-ray Dark-field Tomography: robot integration and collision detection Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics. | Master Thesis | |
System Architecture and Demonstration System for Robotic Natural Orifice Transluminal Endoscopic Surgery This thesis presents the design and implementation of a Robotic NOTES (R-NOTES) system, with particular emphasis on gross positioning tasks using an industrial robot and Magnetic Anchoring and Guidance System, and internal positioning using an embedded device, wireless communication protocol, and human-computer interface. A R-NOTES system is intended to be a teleoperated system, where the surgeon will operate the system from a surgical console. This thesis includes a reference integration architecture for further exploration of the NOTES concept. Furthermore, integration of the system components, and the development of a wirelessly controlled robotic surgical tool, the MicroBot, are presented. The wireless control of the MicroBot uses the ZigBee protocol, which enables the robot to be self-contained without any external wires. | DA/MA/BA | |
Optimal planning and data acquisition for robotic ultrasound-guided spinal needle injection Facet-joint syndrome is one of the main causes of back pain, from which around eighty percent of the population suffers at some point during their lifetime. Current clinical practice requires injections of analgesics in the lumbar region of the spine, as this is the main area of pain. This procedure is done under CT guidance, for which around ten control images are required per injection if no perfect initial placement is achieved. As a consequence, not only patients but also medical staff are exposed to dangerous amounts of radiation over time. In this work we propose and evaluate an ultrasound-guided visual servoing technique using a robotic arm equipped with an ultrasound probe and needle guide. Based on first results demonstrated with a proof of concept, an initial panoramic 3D-ultrasound scan is registered to existing CT data. As this step was limited to rigid alignments between CT and 3D-US data, this work specifically focuses on the optimal acquisition of panoramic robotic ultrasound scans allowing for accurate surgical pre-planning, as well as the intra-operative deformable registration of CT and ultrasound datasets. The project will be integrated within an intuitive visualization tool for both the acquired ultrasound datasets and the planned trajectories, and will also address the servoing techniques required to directly approach the planned target using facet-joint injection needles. | DA/MA/BA | |
Towards More Robust Machine Learning Models | Master Thesis | |
Meta-learning for Image Generation/Manipulation using Scene Graphs Image generation from scene graphs of natural scenes is a challenging task at high image resolutions. In this project, we aim to improve the quality of image generation and manipulation from scene graphs using meta-learning approaches developed for the few-shot learning problem. We plan to incorporate domain adaptation and information from synthetic images to learn a well-generalized model that adapts quickly to new scenes. | Master Thesis | |
Surgical Workflow Analysis under Limited Annotation Surgical workflow analysis is important for understanding the onset and persistence of surgical phases and individual tool usage across a surgery and within each phase. It is beneficial for clinical quality control and helps hospital administrators with surgery planning. To automate this process, automatic recognition of surgical phases from video acquired during surgery is essential. Following the success of deep learning, various architectures have been reported for video understanding [1-3]. While classifying short trimmed videos has been very successful, temporally locating or detecting action segments in long untrimmed videos is still very challenging. Surgical scenes usually exhibit high intra-phase variance but limited inter-phase variance. Moreover, annotating surgical video for training deep neural networks is very expensive, because the videos are usually long and frame-level annotation is required to train models with traditional approaches. In this project, to address this issue, we would like to explore a new surgical phase recognition model that can be trained on data with limited annotation. | Master Thesis | |
Semi-supervised Active Learning Deep neural network training generally requires a large dataset of labeled data points. In practice, large sets of unlabeled data are usually available, but acquiring labels for these datasets is time-consuming and expensive. Active Learning (AL) is a training protocol that aims at minimizing labeling effort in machine learning applications. Active learning algorithms try to sequentially query labels for the most informative data points of an unlabeled data set. Semi-supervised learning (SSL) is a method that uses unlabeled data for model training in order to improve performance. In this thesis, we will explore the combination of these two promising approaches for the efficient training of deep neural networks. In particular, we explore how query selection criteria of AL algorithms have to be designed when used in conjunction with SSL algorithms. | Master Thesis | |
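A typical acquisition step in such a setup scores the unlabeled pool with the current (SSL-trained) model and queries the most informative samples. The sketch below uses predictive entropy as the query criterion; it is only one of the selection criteria to be studied, not the method of the thesis.

```python
import numpy as np

def select_queries(probs, n_queries):
    """Pick the unlabeled samples with the highest predictive entropy.

    probs: (N, C) softmax outputs of the current model on the unlabeled pool
           (in a semi-supervised setup, the model trained with SSL).
    Returns the indices of the n_queries most informative samples to label next.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-n_queries:]
```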
Computer-aided survival, grading prediction and segmentation of soft tissue sarcomas in MRI. This project will investigate three main tasks in soft-tissue sarcomas. The first is the prediction of disease progression or survival given scans of the patient at different time points. The second consists of grading the aggressiveness of the sarcomas in a three-class classification task. The third is medical image segmentation to automate treatment planning. | DA/MA/BA | |
Scene graph generation We are looking for a motivated student to work on a research topic that involves deep learning and scene understanding. The project consists of generating scene graphs, a compact data representation that describes an image or a 3D model of a scene. Each node of this graph represents an object, while the edges represent relationships/interactions between these objects, e.g. "boy - holding - racket" or "cat - next to - tree". Applications of scene graphs include image generation and content-based queries for image search; they can also serve as additional context to improve object detection accuracy. Preferably a master thesis; also possible as guided research. | Master Thesis | |
Segmentation of Fractured Bones With the advent of computer-aided surgery and planning, automatic post-processing of the acquired imaging data becomes more and more important, but also challenging. Image segmentation is in many cases the first, very important but difficult step. A fully automatic method for segmenting bones would be highly desirable. However, several factors hamper the development of such a fully automated segmentation method. The quality of the datasets differs to a large extent in terms of contrast and resolution. The voxel intensity of the bones varies according to scan parameters and the condition of the bone. The intensities of cortical and trabecular bone may be very similar in some cases. The density of osteoporotic bones is low, and thus the contrast between osteoporotic bones and soft tissues is very small, in particular in fractured bones, where the trabecular bone directly adjoins the soft tissues. In addition, a complete manual segmentation of fractured bones is very time consuming, inconsistent and hard to do. We aim to propose a fast and efficient segmentation tool that effectively segments the fractures and at the same time is robust enough for using the output model in FEM analysis. Though the objective is to develop a segmentation tool that is as fully automated as possible, the idea is also to have the following features incorporated into the segmentation tool: semi-automatic segmentation, manual correction, and output generation. The evaluation is done on CT datasets of various types of fractures from the Department of Diagnostic and Interventional Radiology of Klinikum rechts der Isar in Munich. | Master Thesis | |
Self-supervising monocular 6D object pose estimation We offer a Master thesis project in collaboration with researchers from Google Zurich. We are looking for a motivated student interested in 3D computer vision and deep learning. The project involves in particular 6D object pose estimation [1,2] and self-supervised learning [3,4]. 6D object pose estimation describes the task of localizing an object of interest in an RGB image and subsequently estimating its 3D properties (i.e. 3D rotation and location). Example datasets and an online benchmark suite are hosted by [5]. While the field has recently made a lot of progress in accuracy and efficiency [5], many approaches rely on real annotated data. Nonetheless, obtaining annotated data for the task of pose estimation is often very time consuming and error prone. Moreover, when lacking appropriately labeled data the performance of these methods drops significantly [6]. Therefore, following recent trends in self-supervised learning, we want to investigate if we can train a deep model to learn purely from data without requiring any annotations, similar to [4] and [5]. Prerequisites: The candidate should have interest and knowledge in deep learning, be comfortable with Python and preferably have some experience with a deep learning framework such as PyTorch or TensorFlow. Also, the candidate should have relevant prior experience with 3D computer vision, in terms of relevant university courses and/or projects. | DA/MA/BA | |
Self-supervised learning in arbitrary image sequences Self-supervised depth estimation shows promising results in outdoor environments. However, few works target indoor or more arbitrary scenarios. After a series of experiments, we found that one reason may be that current architectures are not able to train the network on sequences with highly varying ego-motion. Existing methods usually rely on the KITTI and Cityscapes training datasets, which mostly consist of forward motion. When testing existing methods on indoor datasets (such as TUM-RGBD or NYU), training often simply fails, outputting zero-depth images. The goal of this project is to investigate this issue and to find a solution for training self-supervised indoor depth estimation. Possible directions: 1. Improve the pose network by using a pre-trained pose network, such as SfMLearner or FlowNet 2.0, and fine-tuning it. 2. Find a way to train the pose network correctly end-to-end by improving the PoseNet architecture, designing a better loss function, or structuring a better training method. | DA/MA/BA | |
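At the core of these self-supervised pipelines is a photometric reconstruction loss: the target frame is compared against a source frame warped into the target view using the predicted depth and relative pose. The PyTorch sketch below illustrates this loss; the `warp` helper that builds the sampling grid from depth, pose and intrinsics is hypothetical and stands in for the projective warping step where training tends to break down under strongly varying ego-motion.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, pose, K, warp):
    """Simple L1 photometric error for SfMLearner-style self-supervision.

    target, source: (B, 3, H, W) adjacent frames.
    depth:          (B, 1, H, W) predicted depth of the target frame.
    pose:           (B, 4, 4) predicted relative camera motion.
    K:              (B, 3, 3) camera intrinsics.
    warp:           callable returning a (B, H, W, 2) sampling grid that
                    projects target pixels into the source frame
                    (hypothetical helper, not shown here).
    """
    grid = warp(depth, pose, K)
    reconstructed = F.grid_sample(source, grid, align_corners=False)
    return (target - reconstructed).abs().mean()
```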
Development and implementation of 3D parallelized series expansion methods for differential phase contrast tomography Advances in imaging hardware have enabled differential phase contrast tomography with conventional X-ray tube sources. So far, iterative series expansion methods have been applied in a weighted maximum likelihood framework to reconstruct absorption and phase contrast data jointly in 2D slices. However, the available hardware easily generates fully 3D cone-beam data sets with full projections. The expansion of this reconstruction approach to full 3D poses challenges both in terms of memory and computational power. This thesis project comprises the development and implementation of parallelized (multi-core) or massively parallelized (GPU) variants of the fully 3D reconstruction pipeline, including generation of the system equation and the implementation of several standard iterative solvers. | Master Thesis | |
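For reference, a standard iterative series-expansion solver of the kind to be parallelized is SIRT, sketched below in Python under the assumption of a nonnegative (sparse) system matrix A relating voxel coefficients x to measurements b; the 3D implementation would distribute exactly these forward and back projections across cores or the GPU.

```python
import numpy as np

def sirt(A, b, n_iter=50):
    """Simultaneous Iterative Reconstruction Technique (reference sketch).

    A: (M, N) system matrix (dense or scipy.sparse), b: (M,) measurements.
    Update: x <- x + C A^T R (b - A x), with R/C diagonal row/column scalings.
    """
    x = np.zeros(A.shape[1])
    row_sums = np.asarray(A.sum(axis=1)).ravel() + 1e-12   # for R = diag(1/row sums)
    col_sums = np.asarray(A.sum(axis=0)).ravel() + 1e-12   # for C = diag(1/col sums)
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums                   # weighted data mismatch
        x = x + (A.T @ residual) / col_sums                 # scaled back projection
    return x
```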
Development and implementation of a regularization framework for differential phase contrast tomography Advances in imaging hardware have enabled differential phase contrast tomography with conventional X-ray tube sources. Here, iterative series expansion methods are applied in a weighted maximum likelihood framework to reconstruct absorption and phase contrast data jointly. To fully utilize the additional information content provided by the hardware setup, which measures absorption, phase contrast and darkfield information all at once, regularization terms incorporating this information have to be introduced in the maximum likelihood framework. The aim of this thesis is the development and implementation of several regularizers integrated into the existing reconstruction pipeline, as well as an evaluation of the different strategies in terms of imaging performance. | Master Thesis | |
Semantic Simultaneous Localization And Mapping | Master Thesis | |
Semantically Consistent Sim2Real Domain Adaptation for Autonomous Driving. A broad variety of real-world scenarios require autonomous systems to rely on machine learning-based perception algorithms. Such algorithms are knowingly data-dependent, yet data acquisition and labeling is a costly and tedious process. One of the common alternatives to real data acquisition and annotation is represented by simulation and synthetic data. Despite being a powerful research tool, synthetic data typically reveals a significant domain gap with respect to target real data. The underlying phenomenon has been defined as a covariate shift. This problem is normally a subject for domain adaptation methods such as sim2real domain transfer. | Master Thesis | |
Skin Lesion Segmentation on 3D Surfaces The use of computers in the analysis of skin lesions has always been an interesting and developing application of computer vision in dermatology. Identifying lesion borders, or segmentation, is one of the most crucial and active areas in the computerized analysis of skin lesions. While most segmentation and analysis is done in 2 dimensions, utilizing the 3D space can provide more information to make this task more accurate. There are a number of new features and geometric information that can be gathered from a 3D representation compared to a 2D image. One such example is to automatically determine a definitive surface area of the skin lesion. In this thesis, we present a proof of concept towards the segmentation of a marked lesion area and determining an estimate for the surface area. We process a 3D model of a body part generated by the KinectFusion algorithm and textured with texture mapping. An approach towards filtering and modifying the generated point cloud of the general lesion area is presented. Afterwards, we can utilize the polygon faces between the points of the lesion to calculate an estimated surface area. With the processing power of modern CPUs and GPUs, generating the model and going through the segmentation pipeline can be done in real time. The information calculated by the pipeline can be used beneficially in the analysis and treatment of skin cancers. | Master Thesis | |
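The surface-area estimate itself reduces to summing triangle areas over the segmented part of the mesh. A minimal Python sketch follows; the array layout and the vertex-level lesion mask are assumptions for illustration.

```python
import numpy as np

def lesion_surface_area(vertices, faces, lesion_mask):
    """Estimate the surface area of a segmented lesion on a triangle mesh.

    vertices:    (V, 3) vertex positions of the reconstructed model.
    faces:       (F, 3) vertex indices per triangle.
    lesion_mask: (V,) boolean array marking vertices segmented as lesion.
    A triangle is counted if all three of its vertices belong to the lesion.
    """
    tri = vertices[faces]                                       # (F, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)                 # per-triangle area
    keep = lesion_mask[faces].all(axis=1)
    return areas[keep].sum()
```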
Smart-Phone Incremental Pose Tracking In Unregistered Environments The topic for this thesis was developed as a complement to the rapidly evolving market in mobile computation and the potential for innovative means of user interaction. Ever-increasing speed and efficiency within non-traditional computing platforms, such as smartphones, may lead to a commoditization of processing power, as has occurred with memory storage. In practice, this means not only the possibility of high computational load tasks, but also the use of not-yet-developed forms of user interaction. This creates a market need not yet adequately addressed by the current state of the art. We seek to address this need by creating an a priori method for tracking three-dimensional motion on an isolated mobile device. There have been many useful and well-implemented solutions for advancing the field of motion tracking on mobile devices. Three weaknesses common to many current methods, addressed by the present proposal, are the requirement for a marker to be present within the camera's field of view, for the user to limit his range of motion, or for the user to perform some initial calibration steps. It is hoped that the algorithm developed herein will, in practice, build upon prior achievements and advance us another step toward creating a seamless user interface for mobile platforms. | Master Thesis | |
Improving high dimensional prediction tasks by leveraging sparse reliable data | Master Thesis | |
Development of spatio-temporal segmentation model for tumor volume calculation in micro-CT The goal is to develop a spatio-temporal segmentation model in which the network is exposed to previous temporal information and learns this complex mapping to segment a given mouse micro-CT image, allowing accurate tumor volume calculations. A dataset with micro-CT scans of over 69 mice with repeat imaging is available with ground truth annotations. Mice were either treated with radiotherapy or left untreated. The small animal data act as a surrogate for clinical datasets treated with MR-linac technology, which requires automatic spatio-temporal segmentation. | Master Thesis | |
Statistical Iterative Reconstruction for Reduction of Artifacts using Spectral X-ray Information | Master Thesis | |
Automatic segmentation of the Spinal Cord and Multiple-Sclerosis lesions in MRI scans | DA/MA/BA | |
Siemens AG: Detection of Complex Stents in Live Fluoroscopic Images for Endovascular Aneurysm Repair The abdominal aortic aneurysm (AAA) is a dilation of the aorta that may result in rupture, and is one of the most common aortic diseases. An AAA may be repaired by open surgery or by endovascular aneurysm repair (EVAR) to prevent rupture, and in recent years EVAR has become predominant. During the EVAR procedure, a stent is placed at the position of the aneurysm to exclude it from direct blood flow. The accuracy of stent placement is critical to prevent occlusion of the branching arteries, e.g., renal arteries. This master thesis implements methods to detect complex deployed stents in live 2D fluoroscopic images during EVAR. The proposed learning-based method trains fully convolutional neural network (FCN) models [1, 2, 3] to detect the stent. The training set consists of labelled 2D fluoroscopic image patches that contain the stent. The detection result can be further improved by integrating prior knowledge, e.g., the overlay of the registered pre-operative CT segmentation. Possible evaluation metrics include DICE coefficient (to measure repeatability), true positive rate (sensitivity), positive predictive value (precision) and Hausdorff distance. Previous research [4, 5, 6] has improved the accuracy of stent detection for EVAR. However, the methods are limited to infra-renal, abdominal EVAR cases. In contrast, stents with fenestrations/scallops are necessary for supra-renal cases and for complex AAA anatomy. Methods of this work aim to detect the complexity of the stent. The expected result qualitatively and quantitatively describes which portion/branch of the stent corresponds to which artery. For example, in a supra-renal case, the result describes whether a portion/branch of the stent covers the aorta, the left/right external iliac artery, the left/right renal artery, etc. | Master Thesis | |
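Several of the evaluation metrics named above compare a predicted binary stent mask against the labelled ground truth; a minimal sketch of the DICE coefficient and of precision/sensitivity is shown below (NumPy, assuming binary masks of equal shape).

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    """DICE overlap between predicted and ground-truth stent masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

def precision_sensitivity(pred, gt, eps=1e-7):
    """Positive predictive value and true positive rate for the same masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    return tp / (pred.sum() + eps), tp / (gt.sum() + eps)
```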
Interventional Stent Deformation Tracking Despite the incredible improvement in treating abdominal aortic aneurysms (AAA), the minimally invasive deployment of a stent graft within a diseased vessel may cause comorbidities induced by the stent itself or by a malfunction. So far, only ex-vivo or in-vitro experiments analyzing the interaction between stent graft and weakened vessel wall have been performed. This project aims at finding a solution for the extraction of in-vivo stent graft deformation that is to be further used for quantitative analysis of vessel wall deformation. | Master Thesis | |
Super Resolution Depth Maps This project aims at single image super-resolution applied to depth maps. Many available depth sensors have a relatively low resolution compared to the accompanying color image. We would like to recover a high-resolution depth map from the combination of a low-res depth map and a high-res RGB image. This problem is inherently ill-posed since a multiplicity of solutions exists for any given input. Nonetheless, it is a highly studied topic in computer vision. Starting with the work of Dong et al., "Image Super-Resolution Using Deep Convolutional Networks" (TPAMI 2015), we would like to explore the application of deep neural networks to this field of study. | DA/MA/BA | |
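As a starting point, an SRCNN-style baseline can be adapted to guided depth super-resolution: the low-resolution depth map is upsampled bicubically, concatenated with the high-resolution RGB image, and a small CNN regresses a residual correction. The PyTorch sketch below is only one possible baseline architecture, not the method prescribed by the project.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedDepthSR(nn.Module):
    """SRCNN-like baseline for RGB-guided depth super-resolution."""

    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 9, padding=4), nn.ReLU(inplace=True),  # depth + RGB input
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),                          # residual depth
        )

    def forward(self, depth_lr, rgb_hr):
        depth_up = F.interpolate(depth_lr, scale_factor=self.scale,
                                 mode="bicubic", align_corners=False)
        residual = self.net(torch.cat([depth_up, rgb_hr], dim=1))
        return depth_up + residual
```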
Evaluation of the impact of super resolution on tracking accuracy Super resolution is a technique to generate a high-resolution image from one or more low-resolution pictures. This technique allows the use of low-cost cameras or even webcams to obtain data that would normally require better hardware. One of the purposes of this thesis is to determine whether this data is usable for tracking, as well as for obtaining more precise and still accurate sub-pixel positions in the enhanced images. To be able to test the impact of super resolution on tracking accuracy, the first task is to create a simple and fast-to-use super resolution program. After that, an analysis can be done with several tests. | Master Thesis | |
Phase Recognition in Surgical Workflow In recent years, with advancements in technology and medicine, the operating room has evolved into a complex and technologically rich environment. In this environment, methods to monitor surgical workflows have gained particular interest [1] with potential applications such as the evaluation of surgeons, or the creation of context-sensitive user interfaces to provide available information only when necessary. Different approaches in the field of surgical workflow recognition [1] include approaches to extract a structured model from recorded surgeries [2], to recognize the surgical phases or activities through instrument and sensor data [3-5], laparoscopic video [6-8], kinematics information [9], or a mixture thereof [10]. Very recently, also methods using deep learning have been introduced [6, 11]. This master thesis focuses on the recognition of surgical phases and the derivation of actionable information from surgical videos. | Master Thesis | |
Surgical Workflow Software Infrastructure Based on Business Workflow Modeling Standards Surgical procedures can be described and structured by their workflow, phases, and hierarchical order of tasks and activities. This enables further analysis and comparison both of ongoing surgeries as well as for recorded datasets. Several methods for modeling of business processes have already been established, though due to the different methodical approach in this case, the available methods are not necessarily directly applicable to surgical process modeling. The aim of this work is to implement a common class structure suitable for surgical workflows and develop import and export functions to and from several known languages describing business process models (e.g. YAWL and BPMN). During the course of the implementation these languages should be evaluated for their fitness to describe surgical workflows with their specific requirements (e.g. variability in the process and probabilistic phase transitions). | IDP | |
Data-Mining and Survival Analysis using Electronic Health Records Electronic health records (EHR) store a patient's hospital/physician visits, where disease diagnoses, prescribed medications, and results of diagnostic tests are recorded for each patient. As such, EHR present an extensive description of a patient's health over time and can help to identify patient subgroups that are more susceptible to certain diseases. The challenges in analyzing this data are: 1. the data is highly heterogeneous, consisting of demographic information, lab measurements, questionnaires and clinical tests, 2. the number of variables is high, with only a subset of variables being relevant when studying a particular outcome, and 3. only a small subset of variables is available for all patients. In this project, you are going to work with data from the Framingham Heart Study, which followed a cohort of people over 30 years to investigate which factors influence the risk of suffering from cardiovascular disease. Every 3-4 years, a follow-up was performed where hundreds of different indicators were recorded, as well as any diseases or adverse events patients suffered since the last follow-up. This data can be used to develop a model that predicts the probability of experiencing one or more pre-defined events, such as myocardial infarction, cancer or death, at a given time point t. This type of analysis is called survival analysis, and the models are referred to as survival models. Using such a model, it is usually of most interest to find subgroups of patients that differ significantly in their expected survival. Since the data is heterogeneous (a mix of continuous and categorical variables), high-dimensional and correlated, traditional statistical learning techniques such as the Cox Proportional Hazards model cannot be applied directly. | DA/MA/BA | |
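As a point of reference, a penalized extension of the Cox model is one common way to cope with many correlated covariates. The sketch below assumes the `lifelines` package and a hypothetical preprocessed dataframe with one row per patient, a follow-up time column, an event indicator and numeric covariates; it is only a baseline, not the method to be developed in this project.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical preprocessed export of the study data (file and column names are assumptions).
df = pd.read_csv("framingham_preprocessed.csv")

# Elastic-net penalization shrinks uninformative coefficients toward zero,
# which makes the Cox model usable with many correlated covariates.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="time_to_event", event_col="event")
cph.print_summary()                                  # per-covariate hazard ratios

# Predicted survival curves can then be clustered to look for patient
# subgroups that differ significantly in expected survival.
survival_curves = cph.predict_survival_function(df)
```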
Mobile Telephony Management in the Surgery Room DECT phones usually increase the reachability of surgeons and assistants. However, they introduce noise and disturbance in the operating room. To solve this problem, the "situational awareness" of the system is to be used. A system that analyzes the current situation in the surgery and is aware of what is happening can reduce the disturbance while handling phone calls more effectively. The MITI research group is looking for a student to research and implement a system that takes responsibility for answering the phone while the surgeon is performing a surgical operation. The system should be intelligent enough to categorize incoming phone calls according to their importance, prioritize them according to an analysis of the current situation in the operating room, and forward the call to the surgeon only if it is really needed. It is also important that the system stores all the information needed to recognize the call (e.g. phone number, importance, and possibly the subject of the call). The project includes research opportunities in selecting the most appropriate technology and intelligent algorithm to reduce disturbance while not missing important information from the caller. The student has the opportunity to include his or her creative ideas in how to solve and implement the solution. | DA/MA/BA | |
Temporal 3D Object Detection in Lidar Point Clouds for Autonomous Driving Although Lidar data is acquired over time, most object detection methods only work on a frame-by-frame basis and neglect useful temporal information. The goal of this master thesis is to develop and implement novel ways to use temporal information from Lidar point clouds to improve object detection or motion forecasting. This thesis will be done in cooperation with BMW. | Master Thesis | |
Navigated Biopsy of scintigraphic cold nodules for patients suspected of having thyroid cancer Thyroid nodules are a very common clinical finding. Both physicians and surgeons are often asked to make a diagnostic decision or a management recommendation. In the field of thyroid diagnosis, a scan of the functionality of the thyroid using a radioactive isotope of iodine or technetium is a generally available and standard technique in Nuclear Medicine. Such an examination, commonly known as thyroid scintigraphy, enables the doctor to detect areas of hyperfunction (hot areas) or hypofunction (cold areas) of the thyroid. Nodules on the scan are classified according to their ability to take up the isotope. Almost all hot nodules are benign tumors that can be treated with radioiodine therapy, while cold nodules may be malignant tumors that have to be removed surgically. At the moment, images of the metabolic activity of the thyroid are normally acquired using small gamma-ray cameras. While these images are usually "precise enough" for examinations, they only offer a two-dimensional view. The intraoperative technology of freehand SPECT makes it possible to generate a three-dimensional view of the radioactive uptake using an optically tracked gamma probe. Using this technology, extended by an ultrasound (US) probe, it is possible to distinguish between nodules with different uptake that are layered on top of each other. Furthermore, in some cases, even if the doctor recognizes the nodules, it is hard to figure out which is the cold nodule and which is the hot nodule in the US image. This is in fact very important information, as it is recommended to perform a biopsy of cold nodules to check whether they are malignant tumors. In general, ultrasound guidance for such interventions is difficult to learn and perform. In a needle biopsy, the needle must be inserted into an anatomical target to remove a sample of tissue. The challenge here is to reduce the number of needle insertions by accurately inserting the needle into the target. In some cases, the collected material does not contain thyroid cells, or contains too few to determine whether cancer is present; this happens in about 30% of thyroid biopsies. To solve this problem, the biopsy is repeated; sometimes, if other cancer symptoms are present, the patient is operated on despite the lack of confirmation of a malignant process in the biopsies. To make biopsy procedures more accurate and certain, we propose to visualize the scintigraphic information and the conventional needle on the ultrasound image. We believe that our system would help to precisely guide the needle to the target point (for example, cold nodules), minimize the number of needle insertions, and support a reliable diagnosis. A successful needle biopsy could mean avoiding open surgical biopsy, which may require general anesthesia, hospitalization and a longer recovery period, in addition to side effects and unnecessary risk. This thesis presents an extension of the existing system, combining ultrasound with freehand SPECT technology for biopsy needle guidance. | Master Thesis | |
Machine Learning for Automatic Tissue Characterization in Intra-Vascular Optical Coherence Tomography Images Optical coherence tomography (OCT) uses light rather than ultrasound to record high-resolution images that permit a precise assessment of biological tissue. Used intra-vascularly, OCT is increasingly employed for assessing the safety and efficacy of intra-coronary devices, such as drug-eluting stents and bioabsorbable stents. The obtained images provide insights regarding stent malposition, overlap, and neointimal thickening, among others. Recent OCT histopathology correlation studies have shown that OCT can be used to identify plaque composition, and hence it is possible to distinguish "normal" from "abnormal" neointimal tissue based on its visual appearance. The aim of this project is to develop machine learning algorithms for the automatic analysis of tissue in intra-vascular OCT. | DA/MA/BA | |
Tracking Add-On Camera System for gamma and ultrasound probes Tracking and navigation of instruments and small imaging systems is performed using commercially available systems based on infrared cameras or electromagnetic fields. Both have advantages and drawbacks for use with handheld gamma cameras or ultrasound probes. The main disadvantage is that a separate system is required. In this work, a small add-on system is developed that is directly attached to the device being tracked. | DA/MA/BA | |
Tracking and Visualization in ioRT | IDP/Klinisches Anwendungsprojekt | |
Trajectory Validation using Deep Learning Methods | Master Thesis | |
Transformer-Based Pipeline for Pre-Processing MR Spectroscopic Imaging Data MRS data is composed of 1D spectra that can quantitatively characterize the metabolic composition of in-vivo tissue. This is especially useful for characterizing brain tumors. The primary drawback of this data type is the extensive and costly pre-processing and analysis necessary to prepare and annotate the data. Accelerating this work using deep learning is a budding, active research field. Transformers were initially developed for NLP tasks. However, recent research has shown them to be highly effective for image classification in computer vision as well. Due to the spatial nature of MRS data, CV CNN models and MLPs have been effective in this pre-processing task. Therefore, we would like to explore the use of transformers to assess their potential for automating the MRSI pre-processing pipeline. Steps to be evaluated include phase and frequency correction, baseline estimation, and eddy current corrections. | Master Thesis | |
Transformer-Based Regression Model for Metabolite Quantification in MR Spectroscopic Imaging MRS data is composed of 1D spectra that can quantitatively characterize the metabolic composition of in-vivo tissue. This is especially useful for characterizing brain tumors. The primary drawback of this data type is the extensive and costly pre-processing and analysis necessary to prepare and annotate the data. Accelerating this work using deep learning is a budding, active research field. Transformers were initially developed for NLP tasks. However, recent research has shown them to be highly effective for image classification in computer vision as well. Due to the spatial nature of MRS data, CV CNN models have been effective for this quantification task. Therefore, we would like to explore the use of transformers to assess their potential for this computer vision-based regression task. | Master Thesis | |
Analysis of Conceptual Differences and Similarities Concerning the Interaction with Physical and Digital Objects in Augmented Reality The aim of this Master's Thesis is to find out to what extent physical objects in our real world differ from digital objects that are created by a computer. These differences create barriers between the digital and the physical world. The field of augmented reality, in which the physical world is enhanced with digital elements, must cope with these barriers. This work tries to systematically find and explain the inherent characteristics of both physical and digital objects. There will be an analysis of which characteristics exist in both worlds and which ones are more closely related to one world. Ways are shown how to simulate or replace the exclusive characteristics in the opposite world. It will be pointed out how digital and physical objects can be interconnected with each other in order to form mixed objects. Furthermore, some examples will be given of how collaborative work can benefit from mixed objects. Finally, an exemplary augmented reality application will be designed which makes use of the gained insights. | Master Thesis | |
Automatic acoustic coupling for robotic ultrasound imaging Medical ultrasound (US) is already used today as the primary modality of choice for many clinical indications. Robotic ultrasound systems could potentially provide an automation of current ultrasound acquisitions, which would be especially helpful for interventional and screening applications. While the feasibility of automatized ultrasound imaging systems was shown by various research groups up to now, the application of ultrasound gel still needs to be performed manually today. This project focuses on ultrasound coupling and aims at overcoming this issue based on developments in other research areas to allow for automatic robotic ultrasound acquisitions. | Master Thesis | |
Parallel Tracking and Mapping in UbiTrack | Master Thesis | |
Uncertainty in Deep Learning for Medical Application Deep learning has become the default tool to approximate nonlinear functions. The results are usually measured by accuracy or AUC. However, neither of them captures the uncertainty of the result. Dealing with systems that decide about human life, it is crucial to know to what extent the outcome can be trusted. Attempts have been made to tackle this problem, both in the computer vision and the medical community. It has been established that a model can be uncertain of its decision because of its parameters or because of the data that it was fed. Nevertheless, none of these completely captures the uncertainty that can be caused by labels provided by multiple experts. In this work, a model is proposed to quantify this specific type of uncertainty. Its behavior will be studied under different conditions and it will be compared to the already known types of uncertainty. Finally, it will be shown that this information can be used to improve the overall performance of the model. | Master Thesis | |
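One widely used baseline for the parameter (epistemic) uncertainty mentioned above is Monte Carlo dropout: dropout is kept active at test time and several stochastic forward passes are averaged, with their spread serving as an uncertainty estimate. The PyTorch sketch below illustrates this baseline only; it does not model the inter-rater uncertainty that is the actual subject of the thesis.

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Monte Carlo dropout: average several stochastic forward passes.

    Returns the mean class probabilities and their standard deviation,
    a simple proxy for model (epistemic) uncertainty.
    """
    model.train()   # keeps nn.Dropout layers stochastic (note: also affects BatchNorm)
    with torch.no_grad():
        preds = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```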
Uncertainty Aware Methods for Camera Pose Estimation in Images and 3-Dimensional Data Camera pose estimation is the term for determining the 6-DoF rotation and translation parameters of a camera. It is now a key technology in enabling multitudes of applications such as augmented reality, autonomous driving, human computer interaction and robot guidance. For decades, vision scholars have worked on finding the unique solution of this problem. Yet, this trend is witnessing a fundamental change. The recent school of thought has begun to admit that for our highly complex and ambiguous real environments, obtaining a single solution is not sufficient. This has led to a paradigm shift towards estimating rather a range of solutions in the form of full probability or at least explaining the uncertainty of camera pose estimates. Thanks to the advances in Artificial Intelligence, this important problem can now be tackled via machine learning algorithms that can discover rich and powerful representations for the data at hand. In collaboration, TU Munich and Stanford University plan to devise and implement generative methods that can explain uncertainty and ambiguity in pose predictions. In particular, our aim is to bridge the gap between 6DoF pose estimation either from 2D images/3D point sets and uncertainty quantification through multimodal variational deep methods. | Project | |
Uncertainty Estimation for Segmentation in Autonomous Driving In the context of Autonomous Driving, it is crucial to have a measure of the uncertainty associated to the various predictions performed by the Deep Learning models. This helps not only to combine various predictions from different pipelines, but also to understand the real confidence associated to each prediction. Networks tend to be overly confident (~99%, as derived from the logits probabilities) also on wrong predictions, or rather unknown data and scenarios. This makes such confidence unreliable. Therefore, uncertainty estimation complements predictions by quantifying how certain the models really are, with respect to the inputs, or their own weights and the way they were trained. Among the fundamental tasks of Autonomous Driving there is segmentation (semantic segmentation, instance segmentation, panoptic segmentation, part segmentation...), which is where we would like to integrate uncertainty estimation. This Master Thesis will be done in cooperation with BMW. | Master Thesis | |
Explainability of Artificial Intelligence (XAI): Taxonomy Recently, Artificial Intelligence, Machine Learning and Deep Learning have shown positive results in various domains: recommender systems, autonomous driving, speech recognition, etc. The demand for AI is increasing exponentially, leading to many startups and investments in AI. This brings in a lot of considerations, like certification and the fear of 'Singularity'. The main reason for this is that Deep Learning is a black box. Most of the time, to get something to work, we do a lot of tweaking and trying; once we get it working, we are then able to interpret the reason for the functionality. But it is very difficult to know what the effect of a change would be before even trying it. The new European General Data Protection Regulation (GDPR and ISO/IEC 27001), entering into force on May 25th, 2018, will make black-box approaches difficult to use in business. This requires the possibility to make the results re-traceable on demand. To achieve this, we need to generate the underlying explanatory structures of models, which explain the cause of the model's result: Explainable AI. This is required in every field; only then will machines be fully trusted. The medical domain is one of the fields that needs precision and explainability the most. The goal is to categorize the different approaches to explainable AI. Based on the results of the research, some of the approaches will be implemented on an available dataset to verify the findings. | Project | |
Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images Reliably modeling normality and differentiating abnormal appearances from normal cases is a very appealing approach for detecting pathologies in medical images. A plethora of such unsupervised anomaly detection approaches has been made in the medical domain, based on statistical methods, content-based retrieval, clustering and recently also deep learning. Previous approaches towards deep unsupervised anomaly detection model local patches of normal anatomy with variants of Autoencoders or GANs, and detect anomalies either as outliers in the learned feature space or from large reconstruction errors. In contrast to these patch-based approaches, we show that deep spatial autoencoding models can be efficiently used to capture normal anatomical variability of entire 2D brain MR slices. A variety of experiments on real MR data containing MS lesions corroborates our hypothesis that we can detect and even delineate anomalies in brain MR images by simply comparing input images to their reconstruction. Results show that constraints on the latent space and adversarial training can further improve the segmentation performance over standard deep representation learning. | Project | |
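The core detection step described above can be sketched in a few lines: a model trained only on healthy slices reconstructs its input, and the per-pixel residual serves as an anomaly map that can be thresholded into a rough lesion delineation. The code below is a minimal PyTorch illustration of this idea, with the autoencoder and threshold assumed to be given.

```python
import torch

def anomaly_map(autoencoder, slices, threshold):
    """Per-pixel reconstruction-error anomaly detection on brain MR slices.

    autoencoder: model trained on healthy anatomy only.
    slices:      (B, 1, H, W) input MR slices.
    threshold:   residual value above which a pixel is flagged as anomalous.
    """
    autoencoder.eval()
    with torch.no_grad():
        reconstruction = autoencoder(slices)
    residual = (slices - reconstruction).abs()       # continuous anomaly map
    return residual, residual > threshold            # map and binary delineation
```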
Calculation of signal intensity-time curves from MR images of the beating heart In order to assess whether a patient with a narrowing of a coronary artery has an increased risk for a heart attack, one is interested in quantitative parameters of the blood supply to the heart muscle. Currently, those parameters can only be judged semi-quantitatively by visual analysis of a sequence of MRI images: approximately 300 consecutive dynamic images are acquired by an MRI (magnetic resonance imaging) scanner. During the acquisition, an intravasal CM (contrast medium) is injected and the distribution of the signal intensity increase caused by the CM is visually observed by a radiologist. The higher the increase of the observed signal intensity, the lower the patient's risk of suffering a heart attack in the future. To derive reliable statistical data from these measurements, the increase of the signal intensity must be measured quantitatively in each pixel of the image. That means that the signal intensity-time curves must be derived automatically for each pixel over a time period of approximately 1 minute. Due to breathing and small movements of the patient during the acquisition period, the measured images cannot be evaluated directly. The main objective of this work is to develop a mathematical method to "freeze" the movement of the heart. The second objective is to calculate the signal intensity-time curve in each pixel of the frozen sequence and visualize the result as a parameter image of the signal intensity peak. | Master Thesis | |
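Once the sequence has been motion-corrected ("frozen"), the second objective reduces to extracting the per-pixel curves and a peak-enhancement parameter image; a minimal NumPy sketch is given below, where using the mean of the first few frames as the pre-contrast baseline is an assumed convention.

```python
import numpy as np

def intensity_time_curves(frames, n_baseline=3):
    """Per-pixel signal intensity-time curves from a registered dynamic series.

    frames: (T, H, W) array of motion-corrected images (~1 minute of data).
    Returns the curves as (H, W, T) and a parameter image of the peak
    enhancement relative to the pre-contrast baseline.
    """
    curves = np.transpose(frames, (1, 2, 0))                 # (H, W, T)
    baseline = curves[..., :n_baseline].mean(axis=-1)        # pre-contrast level
    peak_enhancement = curves.max(axis=-1) - baseline        # parameter image
    return curves, peak_enhancement
```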
Mesh CNN for vessel anomaly detection The goal of this project is to develop a novel neural architecture for mesh-based anomaly detection in blood vessels. | DA/MA/BA | |
3D vessel tracking using transformers The goal of this project is to develop a novel deep learning-based method leveraging transformers to track multiple vessel instances in 3D medical image volumes. | Master Thesis | |
A Deep Learning Approach to Synthesize Virtual CT based on Transmission Scan in hybrid PET/MR Computed Tomography (CT) is a mandatory imaging modality for radiation treatment planning, while magnetic resonance imaging (MRI) and positron emission tomography (PET) have advantages in tumor delineation and dose prescription. With the advent of PET/MRI, this hybrid imaging modality has the advantage of simultaneous acquisition of soft-tissue morphological imaging and molecular imaging, which provides advanced information supporting clinical diagnosis and therapy planning (1). To avoid multiple scans and additional high radiation doses, a new concept was proposed to integrate a low-dose transmission scan (TX) into a PET/MRI machine for the synthesis of a virtual CT (VCT) for treatment planning (2). However, TX is usually extremely noisy with artifact spots, and it is necessary to smooth the sinogram to obtain interpretable images, which consequently leads to blurred, low-resolution images. The proposed Master thesis project aims to synthesize a high-resolution virtual CT for planning based on the low-resolution transmission scan in integrated PET/MRI. This VCT is intended to substitute CT scans in several applications such as radiotherapy treatment planning and attenuation correction. In particular, this project will develop an advanced deep learning approach to achieve imaging super-resolution. | Master Thesis | |
Virtual Simulation of Multi-Camera Optical Tracking Systems Marker-based optical tracking systems are widely used in real-time object tracking, augmented reality, computer-aided medical procedures and industrial applications. Thereby, the position and orientation (pose) of a marker is to be determined in real time. The marker consists of multiple fiducials that can easily be detected and tracked in the images of multiple cameras. However, despite the number of time-proven technologies and methods of optical tracking, no attempts have yet been made to thoroughly and exhaustively investigate the question of uncertainty of object tracking. The existing works on tracking accuracy estimation are mainly based on the idealized assumption of 2D uncertainties on the CCD of the cameras. They are based on purely analytical considerations and rely on linearization of the mathematical models for error propagation. Effects such as partial or full occlusion of individual fiducials by other fiducials or environmental objects cannot be considered by this technique; they have a major impact on tracking uncertainty, yet their influence is poorly understood. This is a crucial issue for safety-critical areas such as medical procedures or industrial metrology. Therefore, the aim of this master thesis is to develop (design, implement, test and produce) a framework to carry out the computer simulation of an arbitrary optical tracking system, enabling an exhaustive analysis of its tracking uncertainty and allowing new design decisions to be tried out without actually building a technical prototype. The proposed simulation framework allows testing arbitrary tracking algorithms without the need for additional analysis of the underlying functional models. The framework would allow parametrization of the concrete setup, perform an efficient simulation of the operation of the tracking system (which involves a virtual laboratory setup, generation of artificial camera images, recognition of fiducial positions and marker poses), use the Monte-Carlo technique to investigate the propagation of errors, and finally provide results that can easily be evaluated with existing tools for statistical data processing. The workflow implies the full lifecycle of developing software for scientific computing simulation, starting from modelling and software engineering design, through efficient implementation and bridging to the existing tracking algorithms, to thorough testing and cross-validation against empirical measurements from the chosen IR multi-camera optical tracking system and the FARO high-precision mechanical measurement arm. The thesis will be concluded by the analysis and scientific visualization of the precision achieved with the proposed simulation framework. | Master Thesis | |
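The Monte-Carlo error propagation at the heart of such a framework can be prototyped in a few lines: the detected 2D fiducial positions are repeatedly perturbed (and optionally dropped to simulate occlusion) and pushed through the pose-estimation algorithm under test, and the spread of the resulting poses is analyzed. The sketch below assumes a `solve_pose` callable as a stand-in for the actual tracking algorithm.

```python
import numpy as np

def monte_carlo_pose_uncertainty(solve_pose, fiducials_2d, sigma_px=0.2,
                                 n_runs=1000, rng=None):
    """Propagate 2D fiducial detection noise to marker-pose uncertainty.

    solve_pose:   callable mapping a list of perturbed (N, 2) fiducial
                  observations (one array per camera) to a pose estimate
                  (hypothetical interface for the algorithm under test).
    fiducials_2d: list of (N, 2) arrays of detected fiducial centers per camera.
    Occlusion effects can be simulated by removing rows before calling solve_pose.
    """
    rng = np.random.default_rng() if rng is None else rng
    poses = []
    for _ in range(n_runs):
        noisy = [f + rng.normal(scale=sigma_px, size=f.shape) for f in fiducials_2d]
        poses.append(solve_pose(noisy))
    return np.stack(poses)   # analyze e.g. covariance of translation and rotation
```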
Weakly-Supervised Action Segmentation Activity understanding in videos has become a popular research topic in the computer vision community because of its application to video analysis. Thanks to large-scale labeled video datasets [1,2], classifying activities in trimmed videos has made significant progress in recent years. In contrast, action segmentation, which requires finding boundaries and action labels in untrimmed videos, is still a challenging problem. The key issue is that untrimmed videos are usually quite long and contain multiple sub-activities; therefore, gathering a large-scale video dataset for action segmentation is time-consuming and cumbersome. To address these issues, recent research in this direction focuses on training deep architectures with weak labels. This project will focus on designing a new method for action segmentation with weak labels. The proposed approach will be compared against recent SOTA methods [3,4,5] on action segmentation datasets. | DA/MA/BA | |
3D Bounding Box Prediction from RGB and LiDAR Data Using 2D Proposals The performance of AI-driven systems highly depends on training data, which is (usually) manually generated and requires money and time while being prone to human error. This thesis will address the problem of automatic 3D object detection of pedestrians and cars in an autonomous driving setting. We will exploit deep 2D mask proposals from RGB data (e.g. Fast/Faster R-CNN). In combination with LiDAR data and classical depth completion and clustering, our goal is to bridge the gap to 3D, allowing the generation of 3D bounding boxes in a weakly supervised setup. | Master Thesis | |
Weakly-Supervised Anomaly Detection assisted by Attention Models Localization of anatomical regions of interest (ROIs) is a natural pre-processing step in many medical image analysis tasks, for example in diagnosis. While it is sometimes trivial for physicians, it turns out to be tedious and very time consuming for them. Convolutional Neural Networks (CNNs) have proven to be very successful in computer vision tasks such as object detection and image classification, due to their ability to extract rich and hierarchical features that are useful for both localization and classification. In this thesis, we investigate the concepts of Self-Transfer Learning (STL) and Spatial Transformer Networks (STN). We want to exploit STL, which jointly optimizes classification and localization with only weak labels (no localization information is provided), and STN, which allows us to find a canonical representation by learning invariance to scale, rotation, translation and more generic warping. We want to explore whether the use of STN in combination with STL improves classification performance and whether STL can assist in the anomaly detection task. We evaluate our model using three medical image datasets, chest X-rays, femur X-rays and mammograms, and compare it to previous weakly supervised approaches. | Master Thesis | |
Below The Surface Exploration Interaction in an immersive virtual environment (IVE), such as a CAVE or, in our case, the FRAVE, is an important issue to investigate. Within the scope of the project Virtual Arabia, we need to perform below-surface exploration. For that purpose we would like to use the magic-lens approach together with the Clearview solution (TUM3D). | Master Thesis | |
Webly supervised human activity recognition | Master Thesis | |
Comparative Study of Path Variants for implicit Guidance in a simulated 3D Environment Simulation systems may generate points of interest (POI) somewhere in the simulated volume. Let's assume we (the system) know where such POIs are. Then, a system is needed to guide the user to that POI. But sometimes you have to be at a certain location to be able to see it from a specific direction. We already developed such a system for guidance and gained many insights into further demands. Your job will be to get into the previous work, port the system to our rendering environment, and extend it according to the demands already found and to topics that come up in our discussions. | Master Thesis | |
Robustness of Knowledge Transfer Methods Neural networks have been the solution to many problems in topics such as computer vision, natural language processing, etc. in recent years. They can achieve superhuman performance in many tasks; however, they are black boxes, which makes them difficult to rely on in sensitive cases such as medical imaging. Interpreting the internal state of neural networks helps us to improve their performance or prevent errors by finding the reasons behind them. There have been different works on the interpretability of neural networks, which can generally be grouped into three categories [1]: manual visual inspection [2, 3], saliency analysis [4, 5] and statistical analysis [6, 7]. The first two categories rely on human evaluation, while TCAV [6] and NetDissect [7] can be quantitatively evaluated. The objective of knowledge transfer is to train a student network from one or multiple teacher networks [8, 9]. Generally, the student network is small and fast while the teacher network is large and accurate. Tasks and goal: In this project, we aim to investigate the robustness of knowledge transfer methods based on interpretability approaches such as TCAV [6] or NetDissect [7]. This study can be useful for comparing the internal state of teacher and student networks, or for comparing the distilled student network with the same student trained in a supervised manner with labels. The experiments in this project will be done with a limited amount of data to imitate real-world conditions. | Project | |
X-ray In-Depth Decomposition In this project, we would like to work further on the X-ray in-depth decomposition presented in [1], modelling the physics of X-rays to recover depth information. | IDP | |
X-ray Depth Map Reconstruction for Improved Interventional Visualization Most minimally-invasive interventions are performed under continuous 2D X-ray imaging. However, recovering the 3D information is crucial for physicians to perform safer navigation and precise surgical instrument positioning. In the current clinical workflow, this requirement yields an increase in radiation and in the use of contrast agent whenever vessels need to be made visible. Improved and adapted visualization of X-ray images has mostly been addressed in terms of 2D-3D image registration. Yet the output of such algorithms is a projection matrix that, by itself, does not yield any meaningful visualization. The major aim of this master project is to provide depth hints for improved interventional visualization. Given training datasets, each containing an X-ray image and the corresponding depth map, an X-ray depth model is to be built that is subsequently applied to any new incoming X-ray image to reconstruct corresponding depth hints. | Master Thesis | |
Tomographic reconstruction and visualization of x-ray scattering tensor data | Master Thesis | |
3D augmented virtuality sketching Within the interdisciplinary project "Collaborative Design Platform" http://cdp.ai.ar.tum.de/ a large-scale multitouch table was developed to aid urban planning and prototyping. The table interacts with the user over a multitouch surface, and can also scan objects positioned on top of it in 2.5 dimensions. In order to enhance its presentation capabilities, an additional display will be added, which offers a 3D augmented virtuality perspective on the workspace, reflecting both the physical and the virtual aspects of the scene. The view can be controlled using a special 3D mouse. In addition, the user can sketch and draw directly on the virtual world presented on the new display. | Master Thesis | |
ManvBook ManvBook is a disaster management system. The aim of this project is to develop a user-friendly interface for the web-based application, so that it can be used efficiently on touch screen devices (e.g. tablet PCs, iPad, iPod, etc.) in emergency situations. | Hiwi | |
Integrated extension of the user interface of an existing map application and its user-centered, iterative evaluation In this thesis, several user interface alternatives are to be extended and evaluated in a user-centered way. The results of the preceding theses will feed into this work. The student gets the special opportunity of collaborative and intercultural work with the ASB-Muenchen, who offer their friendly and competent support as a consortium partner in the SpeedUp project. The SpeedUp project investigates technical support options for the emergency medical services in the particular context of a mass-casualty incident (MANV), with the goal of speeding up the rescue of casualties in such a disaster and giving commanders an overview of the chaotic situation. Because of the dangerous and unstable situation, entirely new requirements are placed on the user interface. | Bachelor Thesis | |
Development and evaluation of different alternatives for interacting with a map application on a rugged tablet PC in the emergency medical services domain This thesis builds on another thesis in which several alternatives for interacting with a map application were already developed and evaluated. The insights gained there are to feed into this second iteration, so that a new evaluation with improved interaction elements becomes possible. Concepts were developed for the following typical tasks, meeting the special requirements of so-called mass-casualty incidents: selection, panning the visible map area (scrolling), and zooming. In this thesis, the best-rated elements are to be combined and integrated into a larger system, which is then to be evaluated against a new system. | DA/MA/BA | |
MapOrientation Multiple people work around a multitouch table with a map application. The map is viewed from different sides, causing a rotated view. How large is the influence of each position (from 0° to 180°) on navigation performance? Possible orientation cues should be implemented as a help for orientation. | Project | |
Modeling brain connectivity from multi-modal imaging data | DA/MA/BA | |
Towards Monocular Depth Benchmarking | Master Thesis | |
Motivation Sports Cave Kinect Emotion Exergotchi This thesis is to answer the question of what influence the emotions of a virtual avatar have on a person, or even on the avatar's owner. Specifically, it is to be evaluated to what extent people are willing to exercise in order to make a Tamagotchi-like figure happy. A preceding project on this topic exists, whose results are to feed into this work (Diplomarbeit Exergotchi 2008). In this thesis, CG animations in the form of real-time 3D rendering are to be integrated into an evaluation storyboard, so that all states of the evaluation task can be run through. The "Cave" currently located in the ITüpferl is to be used as the evaluation platform. | DA/MA/BA | |
The impact of virtual emotions on real persons This thesis is to answer the question of what influence the emotions of a virtual avatar have on a person, or even on the avatar's owner. Specifically, it is to be evaluated to what extent people are willing to exercise in order to make a Tamagotchi-like figure happy. A preceding project on this topic exists, whose results are to feed into this work (Diplomarbeit Exergotchi 2008). | DA/MA/BA | |
The impact of virtual avatars and their motivation to do sports This thesis is to answer the question of what influence the emotions of a virtual avatar have on a person, or even on the avatar's owner. Specifically, it is to be evaluated to what extent people are willing to exercise in order to make a Tamagotchi-like figure happy. Two preceding projects on this topic exist, whose results are to feed into this work (Diplomarbeit Exergotchi 2008 and Bachelorarbeit: Exergotchi Flatscreen 1.0). | DA/MA/BA | |
Increasing motivation for sports with the help of ubiquitous computing This thesis is about developing a concept for increasing the motivation to do sports with the help of ubiquitous computing. An important part of this initial step is to investigate currently available techniques in this scope. | Bachelor Thesis | |
Development and evaluation of different alternatives for emulating a mouse on touchscreens In this thesis, several alternatives for emulating the conventional mouse on a touchscreen are to be developed and evaluated. Some suggestions for possible alternatives are given initially; contributing your own ideas is very welcome. The target hardware is a particularly rugged tablet PC which can also be used in unstable and dangerous situations. | Bachelor Thesis | |
Integration of a map application by the company Navimatix into an existing demonstrator Within the SpeedUp project, an existing map application by the company NaviMatix is to be integrated into an existing prototype demonstrator on a tablet PC. In mass-casualty incidents, the users of the application, e.g. the supervising emergency physician (LNA) or the organizational head of the emergency medical services (ORGL), find themselves in exceptional stress situations. These external circumstances lead to special requirements for the graphical user interface (GUI). The responsiveness of the system is therefore of particular importance: smooth scrolling and zooming of the map are indispensable prerequisites for the acceptance and success of the application. | DA/MA/BA | |
Development of modules for NeuroTable Among stereotactic operations, the most frequently performed procedure is a biopsy. Stereotactic biopsies have evolved into a powerful and safe tool providing tissue diagnoses with minimal disruption of anatomical structures. The method combines stereotactic localization and image-guided surgery using computed tomography (CT) and magnetic resonance imaging (MRI) as well as fluoroethyl-L-tyrosine positron emission tomography (FET-PET). For conducting stereotactic biopsies, the patient's head is fixed in a stereotactic frame using an invasive fixation system. A computed tomography (CT) scan is performed with 0.6 mm slice thickness. The imaging data is transferred to a workstation for planning. After localization of the correct head position, the CT images are fused with MRI images (T1 + KM, T2, contrast-enhanced MRA). The target point is defined within the pathological lesion and an instrument entry point on the patient's skull is cautiously chosen, trying to avoid risk structures such as neural or vascular structures. As a visualization method, a multiplanar reconstruction is available to get a three-dimensional impression of the planned trajectory in relation to anatomical structures and the pathological lesion. Additionally, the distance to anatomical structures at risk, as well as the biopsy tract, can be examined in horizontal, sagittal, coronal and oblique section planes (see figure 1). Unlike most procedures, stereotactic biopsies are conducted in an environment where the surgeon does not visualize the anatomical structures directly, but must rely on adjunctive technologies for guidance. The lack of a direct line of sight to the target limits the ability to recognize and correct mistakes during the procedure. A small error at any stage of the procedure will make accurate lesion targeting impossible and may lead to severe complications. It is therefore essential that the surgeon has an apprehension of three-dimensional space for optimally conducting stereotactic biopsies. However, the current planning system provides real-time volumetric visualization of neither the trajectory nor the pathological lesion. The neurosurgeon has to repeatedly compute multiplanar reconstructions to achieve a three-dimensional view of the trajectory. In particular for residents of the neurosurgical clinic or for students, the planning system limits the apprehension of anatomical features needed to quickly build a mental three-dimensional model of the operating site. Another key issue is the time-consuming user interface the system provides. For analyzing the trajectory in relation to neural and vascular structures, the neurosurgeon has to use the mouse as input device to navigate slice by slice through the axial slices or to probe view cross sections by manually clicking the mouse button. The third major limitation of the planning system is the combination of the monitor as output device and the mouse as input device: only one person can directly interact with the system while the other participants are constrained to be passive spectators. The lack of a collaborative workspace limits information sharing and knowledge transfer between senior experts and junior experts during stereotactic biopsy treatment planning. The aim of the NeuroTable project is to design and develop a novel co-located collaborative neurosurgical workspace with the latest visualization techniques, enabling time-efficient stereotactic biopsy treatment planning while ensuring treatment quality and knowledge transfer to residents. | IDP | |
Radiomics and AI in Pancreatic Ductal Adenocarcinoma The department of Quantitative Oncological Imaging of the Institute for Diagnostic and Interventional Radiology at the Klinikum rechts der Isar is looking for a master's student at the earliest possible date to complete a master's thesis on the topic "Radiomics and AI in Pancreatic Ductal Adenocarcinoma". Supervision is provided by AG Menze (TUM Informatics / TranslaTUM) and AG Braren (Radiology) and takes place at the Institute of Diagnostic and Interventional Radiology (Klinikum rechts der Isar, Ismaninger Str. 22). | DA/MA/BA | |
Marker Tracking in MRI for Colon Transit Measurements Symptoms of colonic motility disorder, i.e. diarrhea or constipation, are commonly encountered complaints in daily clinical practice and are therefore considered a major health problem. Most technical approaches which describe and partly quantify colonic motility are experimental and not readily applicable to clinical use. Ingestion of radiopaque markers or isotopes and measurement of colonic transit time and marker migration through the gut has been feasible in clinical settings; however, these approaches have several obvious drawbacks (radiation!). A new approach to this problem is to perform a simple and practical measurement of colonic transit using MR-visible markers in the patient. Tracking those markers throughout the resulting MR volumes, taken at different time intervals after ingestion, has so far been performed mostly manually. The goal of this project is to develop a method to automatically or semi-automatically segment and track those markers. | Project | |
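A minimal sketch of one possible starting point (not the project's prescribed method): threshold the MR volume, label connected bright components as marker candidates, and greedily match centroids between consecutive time points. The threshold and the greedy matcher are placeholders for whatever the project finally settles on.

```python
# Hedged baseline: intensity thresholding + connected-component labelling,
# then nearest-neighbour matching of marker centroids across time points.
import numpy as np
from scipy import ndimage

def marker_centroids(volume, threshold):
    """Return centroids (z, y, x) of connected bright regions above threshold."""
    mask = volume > threshold
    labels, n = ndimage.label(mask)
    return np.array(ndimage.center_of_mass(volume, labels, range(1, n + 1)))

def nearest_neighbour_tracks(centroids_t0, centroids_t1):
    """Greedy matching of markers between two time points (placeholder tracker)."""
    matches = []
    for i, c in enumerate(centroids_t0):
        d = np.linalg.norm(centroids_t1 - c, axis=1)
        matches.append((i, int(np.argmin(d))))
    return matches
```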
Evaluation of the clinical results of a multicenter freehand SPECT study for lymphatic mapping in breast cancer | Project | |
Evaluating modularization and efficiency of a C++ reconstruction framework for medical imaging modalities Medical imaging modalities such as X-ray Computed Tomography (X-ray CT) or Positron Emission Tomography (PET) have been the basis of accurate diagnosis in clinical practice for decades, one particular example being the detection of tumors. But medical imaging also plays a central role during therapy, for example when planning complex surgeries or when planning and monitoring radiation therapy treatments. The formation of three-dimensional images from the detector signals in commercial devices is still mostly done by analytical reconstruction methods, mainly due to their efficiency. However, iterative methods, such as statistical reconstruction, are becoming more popular, as they provide better image quality combined with reduced radiation dose. The higher computational complexity of the iterative methods is partly compensated by the increasingly available computational power. | IDP/Klinisches Anwendungsprojekt | |
Neural Network Compression Deep neural networks require a large amount of resources in terms of computation and storage. Neural network compression helps us to deploy deep models on devices with limited resources. Many works have been proposed in the field of network compression, attempting to reduce the number of parameters in a network or to make the computations more efficient. Problems of these approaches include loss of accuracy, changes to the network architecture, and dependence on special frameworks. In this thesis, we examine the problem of network compression in depth. We evaluate strategies for compressing a Convolutional Neural Network for object classification and come up with a new approach. We propose to use adversarial learning, together with knowledge transfer, to reduce the size of a deep model. In our experimental section, we make an extensive study of compression strategies and compare them with our framework. We show that adversarial compression achieves better performance than other knowledge transfer methods in most cases without using labels, and that it outperforms the same network trained with labeled data supervision. | Master Thesis | |
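For context, a plain knowledge-transfer (distillation) loss of the kind the compression thesis builds on might look as follows; the adversarial component proposed in the thesis is not reproduced here, and the teacher/student logits are assumed to come from hypothetical PyTorch models.

```python
# Hedged sketch of a standard distillation loss: soften teacher and student
# outputs with a temperature and minimize their KL divergence.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    log_student = F.log_softmax(student_logits / t, dim=1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
```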
SAR Optimization | Hiwi | |
Stenosis Classification Using Joint MR Intensities | Project | |
RF2B : RF to B-mode Ultrasound | IDP/Klinisches Anwendungsprojekt | |
Radiation Exposure Estimation of full surgical procedures using CamC | Master Thesis | |
Computer-aided survival, grading prediction and segmentation of soft tissue sarcomas in MRI This project will investigate three main tasks in soft-tissue sarcomas. The first is the prediction of disease progression or survival given scans of the patient at different time points. The second consists of grading the aggressiveness of the sarcomas in a three-class classification task. The third task is medical image segmentation to automate treatment planning. | Project | |
Development of an intuitive user interface for selecting map elements In this thesis, several selection alternatives are to be developed which can be operated from the edge of the tablet PC. Simple buttons would be one such alternative, although in their primitive form they do not meet the given requirements. A particular challenge is making optimal use of the limited space at the edge of the tablet PC: if there are more selectable objects than there is space for selection elements at the edge of the screen, the user interface must react in a suitable way. Some alternatives are proposed initially, but bringing in your own ideas is not only possible but very welcome. | DA/MA/BA | |
Port to WPF and extension of existing edge-operation concepts for selecting map data on a rugged tablet PC In this thesis, several selection alternatives are to be implemented and evaluated which can be operated from the edge of a particularly rugged and stable tablet PC. The focus of the thesis lies on the evaluation and the analysis of the results of the implemented alternatives. | Bachelor Thesis | |
Natural Feature Based Head- and Eye-Tracking This SEP's focus is the integration of a normal USB camera and an infrared USB camera into an Augmented Reality application for use with a head-up display. The infrared camera is intended to give two unique spots from the driver's pupils, because these are extremely reflective. The normal light camera, on the other hand, allows natural feature tracking of the root of the nose and the eyeball. Computer Vision algorithms shall be used to track the points and areas of relevance. | SEP | |
Combining different Trackers in AMIRE This project combines optical tracking of markers within AMIRE, based on FireWire video streams, with traditional tracking techniques. To calculate and approximate the actual camera position and orientation, the ART tracking system was additionally supported with the Intersense InertiaCube2. This allowed a previously unimaginable combination of reality with a very exact scene graph (VRML/X3D) of the environment, allowing data and information to be displayed in a very usable way. | SEP | |
SUSI - Segmentation User Interface | SEP | |
Construction of a mobile navigation wheel using Bluetooth The goal of this SEP is to construct and evaluate a navigation wheel concept for users in mobile working environments. The concept consists of a (mouse) wheel which the user wears like a ring. This ring is connected to the PC via Bluetooth. The wheel allows the user to navigate through the menu just by turning and clicking the wheel. | SEP | |
Synchronizing 3D movements for quantitative comparison and simultaneous visualization of actions In a preceding lab course, an existing birth simulator, developed by the "Institute of Automatic Control Engineering" (Department of Electrical Engineering and Information Technology, TUM) and the "Clinic for Orthopedics" (Klinikum Rechts der Isar, Muenchen), was extended to an AR application for training. The idea of an AR training solution included capture and 3D replay of subtle movements. The crucial part missing for realizing such a training system was an appropriate way of synchronizing trajectories of similar movements with varying speed, in order to simultaneously visualize the motion of experts and trainees and to study trainees' performance quantitatively. In this SEP we review research from different communities on synchronization problems of similar complexity. We give a detailed description of the two most applicable algorithms. We then present results using our AR-based forceps delivery training system and thereby evaluate both methods for synchronizing experts' and trainees' 3D movements. We also introduce first concepts of an online synchronization system allowing the trainee to follow the movements of an expert and the expert to annotate 3D trajectories for the initiation of actions such as the display of timely information. A video demonstration provides an overview of the work and a visual idea of what users of the proposed system could observe through their video see-through HMD. | SEP | |
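The SEP does not name the two algorithms it evaluates; one standard candidate for aligning movements of varying speed is dynamic time warping, sketched below on generic 3D trajectories purely as an illustration, not as the method actually chosen.

```python
# Hedged illustration: dynamic time warping between two 3D trajectories.
import numpy as np

def dtw_alignment(traj_a, traj_b):
    """Align (n,3) and (m,3) trajectories; return total cost and warping path."""
    n, m = len(traj_a), len(traj_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # backtrack the optimal warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]
```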
Optimization Algorithms for 3D Computer Vision: Not only in Computer Vision but also in many other applications, optimization algorithms are crucial for error minimization in order to get good computational results. In Computer Vision these algorithms are used to optimally estimate e.g. homographies, projection matrices, fundamental matrices, or tensors. Especially non-linear optimization methods like Levenberg-Marquardt, Gradient Descent or Gauss-Newton iteration (which might be known from Konkrete Mathematik) are essential for optimal estimation. This SEP covers the implementation of these algorithms first in Matlab, a powerful and easy-to-learn tool for mathematical computations, and then in C++. Moreover, an application for testing the optimization algorithms shall be implemented, which can be chosen from the Computer Vision area, e.g. mosaicing, or 3D scene reconstruction from two or multiple views. A prerequisite is advanced knowledge of C++ programming. | SEP | |
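A toy version of the Levenberg-Marquardt iteration named above, with a numerical Jacobian and a simple damping schedule; a production implementation (or Matlab's built-ins) would be considerably more careful, so this is only a sketch of the idea.

```python
# Toy Levenberg-Marquardt for a generic nonlinear least-squares problem.
import numpy as np

def numerical_jacobian(residuals, p, eps=1e-6):
    r0 = residuals(p)
    J = np.zeros((len(r0), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (residuals(p + dp) - r0) / eps
    return J

def levenberg_marquardt(residuals, p0, iters=50, lam=1e-3):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residuals(p)
        J = numerical_jacobian(residuals, p)
        A = J.T @ J + lam * np.eye(len(p))       # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.linalg.norm(residuals(p + step)) < np.linalg.norm(r):
            p, lam = p + step, lam * 0.5          # accept step, relax damping
        else:
            lam *= 10.0                           # reject step, increase damping
    return p
```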
Deformable Registration for Bolus Tracking Procedures The goal of this project is to remove the motion between subsequent images of a series of MR scans, in order to enable the comparison of image intensities at the same points in space. The changes in intensity are caused by a contrast agent which enables the physician to perform a diagnosis, e.g. detect a stenosis. The project is performed in cooperation with clinical partners ("Klinikum der Universität München") and the results are intended to enable automatic tracking of the contrast agent during the scanning period. | SEP | |
Implementation of an AR application for robust marker detection The main task of this SEP is the implementation of a fiducial marker system which can detect digital markers, decode them and estimate their correct pose with respect to the camera. This system must have very small inter-marker confusion, false positive and false negative detection probabilities. | SEP | |
Navigated Flexible Procedures (Bronchoscopy) Bronchoscopy provides minimally invasive access to peripheral lung locations. The goal is to implement an application that visualizes the actual position of an electromagnetic sensor within computed tomography imaging data. To this end, a graphical user interface is to be developed that visualizes three orthogonal slice views as well as a crosshair indicating the current position of the sensor. For tracking, the Aurora system from Northern Digital will be used. | SEP | |
Depth Control and UI for Camera Augmented Mobile C-arm | SEP | |
Visual Servoing for CamC to be completed | SEP | |
CamC Visual Servoing Interface | SEP | |
!CamC Image Manager | IDP/Klinisches Anwendungsprojekt | |
Assessment of Knee Cartilage Thickness using Magnetic Resonance Imaging Degeneration of knee joint cartilage is an important and early indicator of osteoarthritis (OA) which is one of the major socio-economic burdens nowadays. Accurate quantification of the articular cartilage degeneration in an early stage using MR images is a promising approach in diagnosis and therapy for this disease. Particularly, volume and thickness measurement of cartilage tissue has been shown to deliver significant parameters in assessment of pathologies. This project was aimed at analyzing and displaying local thickness data, notably in patellar cartilage. The resulting tools have been integrated in the “PaCaSe” software. | SEP | |
A GUI description language for use in Distributed Applications The goal of this work is to provide a software package to collect graphical user interface descriptions in a distributed system, so that a system administrator can manage all connected application processes from a single application window on his personal authoring computer. | SEP | |
Personalized Energy Measurement In this thesis I propose the eMeter infrastructure, which enables consumers to measure their personal energy consumption and thus helps them to conserve energy. It is designed to be as lightweight and easy to use as possible. For this purpose it uses one of the upcoming smart electric meters as a single sensor and presents the user interface on a mobile phone. Users can keep track of their entire household consumption as well as measure the consumption and cost of single appliances. | Bachelor Thesis | |
A User Interface for 3D Volume Rendering in Vision Space Introduction of virtual-reality technology into teaching and assessment of clinical skills in the undergraduate medical course, through a programme of collaborative research and development between the Human Interface Technology Lab NZ, University of Canterbury, and the Departments of Surgery and Radiology, Christchurch School of Medicine and Health Sciences, University of Otago. | SEP | |
Evaluation of Software Defined Radio for Parasitic Tracking This SEP focuses on measuring IEEE 802.11 (Wireless LAN) signal strength with the Universal Software Radio Peripheral (USRP) and GNU Radio. The 802.11 signal is converted from analog to digital with the USRP and later demodulated with GNU Radio. This demodulated signal is used to extract the needed data, like signal strength, type and MAC address. On the other side, data received with an Intel Corporation PRO/Wireless 3945ABG controller integrated into a laptop is used for comparison. At the end, a visualisation of both signals is done with MagicMap. | SEP | |
2D/3D Registration using ICP and Bitangent Invariants Registration of 2D medical images to 3D volumes of a patient is a crucial issue for locating inserted instruments in 3D and helping the operating physician to guide a way through the patient's anatomy. An algorithm shall be developed to register blood vessels extracted from 2D and 3D images and represented as 2D and 3D curves. This can be achieved by using an Iterative Closest Point (ICP) algorithm adapted to the 2D-3D projective case. | SEP | |
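As a starting point, a classic rigid 3D-3D ICP loop with an SVD-based pose update is sketched below; the 2D-3D projective variant described above would replace the correspondence search and error term, so this is only an illustration of the basic iteration.

```python
# Hedged sketch: rigid ICP with nearest-neighbour correspondences (Kabsch/SVD update).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation/translation aligning src to dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                 # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```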
Distribution of Image Registration Algorithms Existing intensity-based image registration algorithms are implemented in a distributed version using the Message Passing Interface (MPI). Performance is evaluated on the InfiniBand cluster. | SEP | |
User interface design of beta-probe-guided surgery In minimally invasive tumor resection, the desirable goal is to perform a minimal but complete removal of cancerous cells. In the last decades, interventional nuclear medicine probes have supported the detection of remaining tumor cells. However, scanning the patient with an intraoperative probe and applying the treatment are not done simultaneously. In the past we extended the one-dimensional signal of a nuclear probe to a four-dimensional signal including the spatial information of the distal end of the probe. This signal can then be used to guide the surgeon in the resection of residual tissue and thus increase its spatial accuracy while keeping the impact on the patient minimal. The next step is to prepare clinical experiments and integrate the solution into the clinical workflow. The student in charge of this project will contribute to that step by developing an interface suitable for surgeons and evaluating it in ex-vivo experiments. | SEP | |
Evaluation of inside-out and outside-in pose estimation It has been shown in the past that, for Augmented Reality applications, inside-out tracking works more accurately than outside-in tracking in terms of image overlay error. However, the pose measured by an inside-out system deviates more from the true pose than the pose measured by an outside-in system. A demo setup at ART Tracking in Weilheim using a tracked single camera (inside-out) and a rig of multiple cameras (outside-in) provides us with data to perform further optimization in terms of augmentation accuracy. | SEP | |
Run-time Development and Configuration of Dynamic Service Networks The goal is to give the user the ability to intervene in the running DWARF system and to refine the DIVE user interface. The exact requirements derive from the needs of the CAR team. In addition, DIVE should actively support and complement the authoring components of CAR. | SEP | |
Haptic Billiard Game | SEP | |
Patient Position Detection using Machine Learning Techniques Although magnetic resonance imaging is considered to be non-invasive, there is at least one effect on the patient which has to be monitored: the heating generated by absorbed radio frequency (RF) power. It is described using the specific absorption rate (SAR). In order to obey legal limits for these SAR values, the scanner's duty cycle has to be adjusted. The limiting factor depends on the patient's position with respect to the scanner. Detection of this position allows a better adjustment of the RF power, resulting in improved scan performance and image quality. In this thesis, different machine learning techniques have to be researched and evaluated. This may include PCA, ICA, neural networks, Haar features, ... but the student is also encouraged to propose own approaches. This thesis would be perfect for students who are interested in medical imaging and machine learning. Previous knowledge in those domains is helpful but not mandatory. | DA/MA/BA | |
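One of the candidate approaches listed above (PCA features plus a simple classifier) could be prototyped roughly as follows; the flattened localizer images and position labels are placeholders for the scanner-specific training data, and the classifier choice is only an example.

```python
# Hedged prototype: PCA dimensionality reduction followed by an SVM classifier.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_position_classifier(images, positions, n_components=20):
    """images: (n_samples, n_pixels) flattened localizer images; positions: labels."""
    model = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
    model.fit(images, positions)
    return model
```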
Empirical estimation of tracking ranges and application thereof for smooth transition between two tracking devices Many augmented reality applications face the problem that the tracking devices being used have limited working areas and therefore do not provide sufficient coverage. If one wants to use multiple trackers in order to extend the overall tracking area, the problem arises how to combine them, so that a smooth transition is obtained while moving from one tracking area to the other. This thesis deals with this question and proposes two different transition strategies, which do not depend on prior knowledge of specific properties or the setup of the trackers, but instead are able to adapt to the respective tracking areas. Both strategies, being based on convex hulls and neural networks respectively, have been implemented prototypically and have been embedded into the DWARF framework. The design of this implementation allows future developers to add new transition strategies easily. | SEP | |
User Interface for Realignment and Fusion of 3D Modalities For both visualization and navigation in three-dimensional modalities, the existing visualization means often turn out to be insufficient. In this project, an advanced visualization system is to be developed, which is based on a common 3-slice-view of a volume. However, the ability to define rigid transformations in this view, as well as fused visualization of two registered volumes will be integrated. One particular application is the alignment of human heart CT data along the heart's major axis. | SEP | |
Realization of a "window-into-a-virtual-world" screen concept. 3D visualization in CAVEs and Powerwalls are situations where, the view frustum, describing the projection of the scenery onto the screen depending of the viewers position to the screen, is in general asymmetrical. The wording "window-into-a-virtual-world" is used to express a dynamic recalculation of the view frustum, based on the position of the viewer and the pose of the screen so that the view on the display appears as showing the virtual scenery behind the area occluded through the display. This project implements a generic "window-into-a-virtual-world" screen concept that allows establishing a wide range of applications, ranging from CAVE environments over powerwall setups to portable LCD screens. Through extension of the Ubitrack library by a new pattern, that models the view frustum, easy incorporation into further systems will be enabled. | SEP | |
Design of a 3D view component for DWARF In the course of the ARCHIE project a new viewer had to be developed, because previous systems developed at the chair for applied software engineering had shown several problems with the available solutions. | SEP | |
Implementation of a Fast Covariant Feature Detector to be completed | SEP | |
Analysis of the Intensity-Bias in Image Registration | SEP | |
Development of visualization framework for medical real time applications This SEP is a feasibility study for the upcoming CAMPAR medical AR framework. The outcome is supposed to be a proof of concept. | SEP | |
Application of Different Regularization Methods to Non-Rigid Registration of Medical Images | SEP | |
Head Mounted Laser Projector As head mounted displays in Augmented Reality applications are far away from being ready for (commercial) use, we are going to construct a head mounted laser projector. This work is a case study for a car manufacturer and can lead to further steps. To fulfil this work, knowledge in handicrafts, Java and 3D geometry is required. | SEP | |
Surface Acquisition using Infrared Laser for Interventions | SEP | |
Robust Surface Reconstruction using IR-Laser-Pointer Point-based surface acquisition methods commonly suffer from inaccuracy issues and artifacts that result in non-smooth surfaces which do not match the original one sufficiently. To reduce the errors during this procedure, clever computational postprocessing methods are needed. This project focuses on improving an already existing point-based laser surface reconstruction algorithm developed at CAMP to ensure the smoothness of the acquired surface while minimizing the presence of outliers. | SEP | |
Context-Aware Service Selection Based on the ARToolkit This thesis deals with extending the AR Toolkit's functionality to allow not only small stationary setups but wide range tracking applications as well. | SEP | |
Virtual Engineering: Design of Optical See-through Displays The design of optical see-through displays entails special requirements for computer-aided optical design. Position and orientation of the focal plane as well as the image size and shape are of particular importance. This document introduces an approach to the design of construction software for such optical systems and discusses some important aspects of a concrete implementation of particular optical elements (e.g. lenses, mirrors, etc.). A uniform structure for the optical elements was established that is able to represent all desired objects. An algorithm for the computation of the optical path and the image is described, which provides the sequential arrangement of the optical elements within the optical system. Based on this algorithm, an interactive construction tool is presented. The tool supports an intuitive way of finding new optical arrangements and facilitates the understanding of the underlying effects and relationships. In addition, some aspects of the design of a conformal head-up display are discussed. | SEP | |
Development of a system for the three-dimensional visualization of airspaces for VFR pilots The goal is to develop a system that supports pilots of small aircraft in the lower airspace during navigation and, especially in critical situations (e.g. a sudden change in weather that does not allow the planned flight to continue), relieves them during decision-making by immediately and intuitively conveying the current flight situation, or takes navigational work off their hands. | SEP | |
Integrating an X-Server into a CAVE In this SEP the output of an X server is to be integrated into a CAVE application. The data is rendered as a texture mapped onto an object (e.g. a plane) in the virtual space, so the user sees a virtual desktop placed somewhere in the virtual world. In a future SEP (Dimitar Marinov), this desktop is to be made able to respond to user interaction directly in the CAVE. | SEP | |
Projective Augmented Reality To support open surgical procedures, preoperative planning data is to be overlaid intraoperatively onto the surgical site using suitable projection techniques. The goal of the project is less the concrete implementation of a specific clinical use case than the investigation of the fundamental possibilities of such a projection. The technique to be developed is to be used prototypically in different scenarios, and a prototype is to be used to evaluate the benefit and the principal feasibility of such an augmentation. The questions to be investigated range from the calibration of the overall setup and the use of different tracking configurations to the projection onto irregular surfaces and the calibration of such a system. | SEP | |
Segmentation, Centerline Extraction, and Graph Creation of Angiographic Vessels in 2D to be completed | SEP | |
User interface to create precalibrated tool description files User interface design project at BrainLAB AG | SEP | |
Development of Multitouch-Enabled Games The first intention of this work was the development of a complex program for a multitouch table, using the libTISCH library, which provides the interaction and interface design. In doing so, the libTISCH library was tested extensively and its quality improved during the development of this program. In order to choose a complex program fulfilling the required extensive use of widgets and different interaction styles, we decided in favour of a strategy game. Instead of just adapting a computer strategy game, we decided to port an intricate tabletop game, because our second intention was to lower the required knowledge of complicated game rules in order to make difficult tabletop games more accessible to the public. | SEP | |
Systementwicklungsprojekt: Intermittent synchronisation of mobile devices using regionally available communication channels Nowadays, in mobile scenarios, different kinds of data are generated locally on PDAs, mobile phones, and so on. Depending on the application it is necessary to synchronise or share this data with others, and therefore a communication channel is needed. At the moment UMTS/GPRS is the favoured technique, and it should be shown that it is possible to cope with limited coverage in specific situations. In order to demonstrate the usability, a location-based multiplayer game will be developed where it is essential to synchronise data intermittently. The game is called "Munich Hunt". It is an adaptation of the well-known board game "Scotland Yard" and will be playable here in Munich as a location-based multiplayer game. | SEP | |
Anticipation of Driver Behavior Advanced Driver Assistance Systems (ADAS) are a very important feature for the car industry. Their task is to make driving safer and more convenient. New, more intelligent ADAS systems (e.g. the Adaptive Cruise Control system, ACC) will be rejected by a driver if their behavior is not similar to his manner of driving. To improve the acceptance of such systems, the reaction of human drivers during various traffic situations has to be analyzed and compared to ADAS functionality. The developed system allows easy recording of virtual traffic scenarios. Afterwards these scenarios can be replayed and viewed from different perspectives in various environments (e.g. driving simulators). It offers the possibility to let a car driver experience such a scenario in the lab. His driving behaviour can be analyzed by psychologists and compared to analogous ADAS data. To ease the creation of scenarios for people with no prior experience with traffic simulators and 3D graphics, a tracking system in combination with a back-projection table is used. It allows steering a tracked car body on a road shown on a workbench's surface and recording its trajectory. A recorded scenario can be viewed from different perspectives, for instance from a driver's point of view in a driving simulator. The main goal was to develop a tool to facilitate the generation of such scenarios, allowing laymen to learn how to create a scenario as easily as possible. The usability has been tested with persons not involved in the project, and the results appear promising. Test persons were able to create their first simple traffic scenarios within 20 minutes after working through a tutorial (taking about 40 minutes). Experienced users can generate a scenario of five cars in approximately 8 minutes. | SEP | |
Development of an OsiriX Segmentation plugin OsiriX is an open source DICOM image viewer for Mac OS X offering advanced image visualization capabilities. The goal of this project is to create a plugin for the segmentation of tumors in fused PET/CT images. The plugin should provide methods for integrating with external image processing libraries, so that additional segmentation methods can easily be added. | SEP | |
Break Out - Augmented Reality for Computer Games The aim of this project was to implement a simple game which can be used to demonstrate the possibilities of augmented reality in general and particularly of the DWARF framework. The game developed during the course of this project is based on the popular game Break Out which was originally developed by Atari in 1976 as an arcade video game. | SEP | |
Optimization of a Positron Emission Tomography (PET) reconstruction algorithm Iterative expectation-maximization algorithms are nowadays the most widespread choice of image reconstruction for tomographic medical scanners. However, due to the exceptional requirements in terms of memory and computation, these algorithms have often been implemented using certain simplifications in the underlying system model. Within the context of our MR/PET research project, our group is currently developing a modified version of the popular OSEM algorithm which does not rely on a simplified system model. This implementation should provide us with the required flexibility to perform an interesting set of experiments on dynamic, motion-compensated image reconstruction. The student will be in charge of analyzing the proposed reconstruction algorithm (C++) and writing an optimized version that minimizes convergence time. In a first stage the code must be studied and modified to reduce memory usage and improve cache reutilization. In a second stage, vectorization and/or GPU programming must be used to accelerate the algorithm. The resulting code will be benchmarked with real clinical data from PET/CT studies. | SEP | |
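For orientation, the un-subsetted core of OSEM is the MLEM update sketched below for a small dense system matrix; the project's actual reconstruction is matrix-free and far larger, so this is only an illustration of the multiplicative update rule, not the code to be optimized.

```python
# Hedged illustration of the MLEM update with a dense system matrix A.
import numpy as np

def mlem(A, y, iters=20, eps=1e-12):
    """A: (n_bins, n_voxels) system matrix, y: measured sinogram counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + eps          # sensitivity image A^T 1
    for _ in range(iters):
        proj = A @ x + eps              # forward projection
        x *= (A.T @ (y / proj)) / sens  # multiplicative EM update
    return x
```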
Augmented Reality Visualisation and Calibration of Medical Instruments | SEP | |
Phase-based Registration of Ultrasound Images The registration of ultrasound images is a complex task, and there are several ways to perform it. Feature- and intensity-based registration approaches both have their advantages and drawbacks. Recently, phase-based approaches have been published which combine advantages of intensity- and feature-based approaches. In this work, we will implement a phase-based registration method and compare its results to standard registration approaches. | DA/MA/BA | |
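A very small phase-based building block, shown only as an illustration: phase correlation estimates a translation between two images from the normalized cross-power spectrum. The thesis method itself (local-phase-based similarity for ultrasound) is more involved and is not reproduced here.

```python
# Hedged sketch: translation estimation via phase correlation.
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer translation between two same-sized 2D images."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12              # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(peak, dtype=float)
    for axis, size in enumerate(corr.shape):    # unwrap negative shifts
        if shift[axis] > size // 2:
            shift[axis] -= size
    return shift
```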
2D / 3D Pose Estimation "By pose we mean the transformation needed to map an object model from its inherent coordinate system into agreement with the sensory data."[1] In other words: given a world object in the object coordinate system (OCS) and a camera taking an image in the camera coordinate system (CCS), we want to determine a rotation and a translation from OCS to CCS using only information from the model and the camera image. Determining the pose from 2D image and 3D model information is used in many fields of Computer Science, like robotics, Augmented Reality, etc. There are numerous ways to estimate the pose, e.g. with line or point correspondences, or even without any corresponding information given. | SEP | |
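With point correspondences and known camera intrinsics, the OCS-to-CCS pose can be estimated with a standard PnP solver; the sketch below uses OpenCV and assumes hypothetical model points, image points and intrinsic matrix K, so it is only one of the possible approaches mentioned above.

```python
# Hedged sketch: pose from 2D-3D point correspondences using OpenCV's PnP solver.
import cv2
import numpy as np

def estimate_pose(model_points, image_points, K):
    """model_points: (N,3) in OCS, image_points: (N,2) in pixels, K: 3x3 intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(
        model_points.astype(np.float64),
        image_points.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return ok, R, tvec
```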
Visualizing Distributed Systems of Dynamically Cooperating Services To facilitate the development of distributed augmented reality applications, the chair of applied software engineering at TU München has developed the DWARF software architecture. The heart of this architecture is a middleware which dynamically locates and connects services distributed across a network. Applications built on top of DWARF are inherently self-assembling. Developers who are working with DWARF systems often face the problem that it is difficult to see how the components, spread all over the network, are connected by the middleware. The solution to this problem proposed here is DIVE (DWARF Interactive Visualization Environment), a visualization tool for DWARF systems which I have designed and implemented as a system development project (SEP). This document is the DIVE manual both for users and developers. Its basic structure follows the methodology described in [10]. The document starts with a chapter about requirements analysis, followed by a discussion of existing graph layout tools. Chapters 4 and 5 give details about the system and object design of DIVE. Chapter 6 describes possible future extensions. New users of DIVE are recommended to read the requirements analysis chapter, which shows how the software is operated from a user's perspective. The other chapters are more technical and aimed towards future developers who need to understand the internal structure of DIVE. | SEP | |
Rapid prototyping with Open Inventor for Medical Applications The SEP was part of a project at Siemens Corporate Research, Princeton, where a rapid prototyping environment is being used to construct prototypes for medical applications with the Open Inventor framework. Based on this existing environment, which allows creating Open Inventor scene graphs with a graphical IDE, different medical applications have been designed. In order to use those easy-to-maintain networks in an application later on, a platform has been developed, based on C++ and the Microsoft Foundation Classes (MFC), which can load and interpret those networks and interact with them. Besides constructing the MFC application, various nodes have been developed which can be used in a plug-in-like way and add additional functionality to the existing library of Open Inventor nodes. The goal was to demonstrate that it is possible to develop a simple medical application within two to three weeks by using the graphical IDE to construct the Open Inventor scene graph and the MFC platform to integrate the network into a stand-alone application. Three prototypes have been developed using this approach. | SEP | |
Development of a learning-based real-time control system for slot-car racing tracks using an optical tracking system A real-time controller is implemented for a Carrera slot-car track. In a first step, a control unit is built; it communicates with the PC over the serial port and regulates the driving voltage via a pulse-width-modulated amplifier circuit. In a second step, an application is written in which different learning algorithms can be embedded and tested. For position estimation, the car is tracked with the ART system. | SEP | |
Multimodal Deformable Registration by Mutual Information | SEP | |
Comparison of different volume renderers for 2D/3D registration 2D/3D or volume-to-image registration needs to compare data of different dimensions. By projecting the volume and comparing the resulting 2D image with the real 2D image using some similarity measure, the quality of a registration can be evaluated. Then the viewpoint is slightly changed, another projection is performed, and the two images are compared again, etc. The process stops once the best viewpoint (with the highest similarity between the projected and the 2D image) is found. Different projection techniques can be used for this projection step, including purely software-based renderers modeling a ray casting through the volume and hardware-accelerated renderers using textures. This SEP compares these different techniques in terms of speed and accuracy. | SEP | |
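The comparison loop being evaluated can be summarized as follows; `render_drr` stands in for whichever software- or texture-based renderer is under test and is not defined here, and normalized cross-correlation is used only as an example similarity measure.

```python
# Hedged sketch of the project-compare-update loop over candidate viewpoints.
import numpy as np

def normalized_cross_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def best_pose(volume, fixed_image, candidate_poses, render_drr):
    """Render a projection per candidate pose, score it, keep the best pose."""
    scores = [normalized_cross_correlation(render_drr(volume, pose), fixed_image)
              for pose in candidate_poses]
    best = int(np.argmax(scores))
    return candidate_poses[best], scores[best]
```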
Application Programming Interface for FireWire Cameras | SEP | |
Filter Framework for DWARF At the moment, each application and service in DWARF implements its own algorithms. This is unnecessary and error-prone; it is better to have a well-tested basic set of calculations. The obvious next step is to make these calculations more general and to find a way to connect them into a network. | SEP | |
Visualization of ground reaction forces in alpine skiing To support the analysis of skiing technique for the national alpine ski team, the ground reaction forces during a downhill run are measured and compared with the skiing style. So far, a measuring device has been screwed between binding and ski for this purpose; since it is too old and no longer meets the requirements (too heavy and imprecise), it is to be redesigned. The acquired data is overlaid onto the camera image using Augmented Reality, employing tracking devices such as GPS and a gyroscope. | SEP | |
Smart Home Showroom The Smart Home Showroom at Siemens Corporate Technology is set up as a fully equipped intelligent apartment right next to ordinary offices and labs. It serves as a central place for integrating current and future Siemens technologies in the home environment, in order to enable new products and services for comfort, security, health, entertainment and environmental protection. The results of several R&D projects at Corporate Technology are presented in joint demonstrators. | SEP | |
Stent Graft Detection in 2D Xray Images In the current clinical workflow of endovascular abdominal aortic repairs (EVAR), a stent graft is inserted via an introducer system through one femoral artery into the aneurysmatic aorta under 2D angiographic imaging. Due to the missing depth information in the X-ray visualization, it is highly difficult, in particular for junior physicians, to place the stent graft in the preoperatively defined position within the aorta. Therefore, methods for accurate stent graft recognition or segmentation in fluoroscopy images are highly desirable. | SEP | |
3D view of stereo laparoscope in the operating room In minimally invasive surgery, instruments and an endoscope camera are inserted into the patient's body through small ports or incisions, respectively. The goal here was to perform a calibration of a stereo endoscope and visualize its images on a 3D monitor. | SEP | |
Personalized Ubiquitous Computing with Handhelds in an Ad-Hoc Service Environment DWARF helps developers to build context-aware Augmented Reality applications. One aspect of context is personalisation of the application for a specific user. Since DWARF applications are formed by chains of interdependent services, you can personalize an application by configuring or reconfiguring a service at runtime, or by providing a set of specialized services from which the user can select. | DA/MA/BA | |
Personalized Ubiquitous Computing with Handhelds in an Ad-Hoc Service Environment Current AR applications are mostly statically configured systems which can hardly, or not at all, be altered by the user at runtime to fit his or her specific needs. However, the DWARF framework enables the system to provide default configuration values which can be merged with or overwritten by personal preferences of the users. This work introduces concepts which are realized in the ARCHIE system by using DWARF. | SEP | |
Building a New Video See-Through AR System Based on a video see-through HMD (head mounted display) and an infrared camera for tracking, a new AR system is built. The student has to take care of the intrinsic and extrinsic calibration of the cameras. | SEP | |
High-Level User Interface for an Augmented Factory Navigator In a project sponsored by Siemens Corporate Technology, we are developing software to check the correctness of a building's structure given its CAD model (3D model). Given pictures of the building, the software augments them with the virtual model. You will need to create new and innovative user interactions which could be used on a Tablet PC and on a desktop computer in order to navigate this complex dataset. | SEP | |
Vascular Simulator | Project | |
TouchGlove Input Device The TouchGlove input device is a new input device developed by Columbia University. It consists of a half-glove and a touch-sensitive surface device which is mounted on the palm portion of the half-glove. The physical design and the interaction method make it possible for the user to interact simultaneously with a wearable computer, as well as with objects and machines in the environment. Input is performed using both tapping and dragging motions of the fingertips on the surface. | SEP | |
Design of a GUI for Definition and Manipulation of Hidden Markov Models | Bachelor Thesis | |
Implementation of Direct Optimization Techniques for Deformable Image Registration | SEP | |
Serial Driver for the Aurora Magnetic Tracker | SEP | |
Statistical Modelling for Image Segmentation Statistical modelling summarises general properties of the data. Such models are usually more robust against noise and artefacts. This position is about validating some existing and novel models that are used for segmenting images and possibly videos. | Hiwi | |
Design and Practical Guideline to a CORBA-Based Communication Framework This work is a practical introduction, in two sections, to the use of CORBA and DWARF for users without any prior knowledge. CORBA specifies a platform-independent interface which facilitates transparent communication in distributed systems. DWARF is based upon this technology and provides a software framework for connecting different components at run-time. This capability makes DWARF highly suitable as a basis for systems dealing with augmented reality. | SEP | |
Exploring 3D Scene Graphs for Surgical Operating Rooms Scene graphs are structures, which can be used to describe an image or a 3D environment in a semantically rich and compact manner. Nodes of these graphs represent objects, whereas the connections between the nodes represent relationships. In our work, we plan to utilize 3D scene graphs to describe medical operations (e.g. surgeries). The goal is to investigate 3D temporal scene graphs along with the dataset requirements, and then explore the use cases of these graphs in OR workflow, such as detecting the current phase of the surgery, identifying possible anomalies or predicting roles seen in the operation. | IDP | |
Detection and Texturing of 3D Objects in Multiple Cameras The setup consists of a given rigid 3D object filmed with 5 or more high-resolution color cameras and one Kinect as shown in Fig. 1. The 3D object is positioned on a flat surface and two sets of images are taken: first the front side of the object is filmed, then, after manually turning the object, the back side. The surface is under control, i.e. it can be equipped with calibration marks, so that the relative positions of the cameras are known. The 3D CAD model of the object of interest is also available. The main task of the project is a mapping of the high-resolution color images onto the CAD data, yielding a seamlessly and finely textured 3D model. In order to produce a high-quality texture map from the images, the 3D object has to be detected and its 3D pose has to be determined. For that, an extension of the solution of Hinterstoisser et al. [1] that relies on Kinect data will be used. Permissible processing time including detection and texturing should be a few seconds. (An illustrative projection/texturing sketch is given after this table.) | Hiwi | |
A tiling window manager for multitouch devices Development and implementation of a prototype tiling window manager on a multitouch device where apps can be launched in tiles. Tiles should be able to be opened, resized, closed and minimized using touch gestures. Content within these tiles will scale accordingly. Find a set of gestures for managing tiles that fits well into existing gestures used on touch devices. Let users with different backgrounds and levels of experience test the system and measure their performance. | DA/MA/BA | |
Weakly-Supervised Action Segmentation Activity understanding in videos has become a popular research topic in the computer vision community because of its applications in video analysis. Thanks to large-scale labeled video datasets [1,2], classifying activities in trimmed videos has made significant progress in recent years. In contrast, action segmentation, which requires finding boundaries and action labels in untrimmed videos, is still a challenging problem. The key issue is that untrimmed videos are usually quite long and contain multiple sub-activities; therefore gathering a large-scale video dataset for action segmentation is time-consuming and cumbersome. To address these issues, recent research in this direction focuses on training deep architectures with weak labels. This project will focus on designing a new method for action segmentation with weak labels. The proposed approach will be compared against recent SOTA methods [3,4,5] on action segmentation datasets. | DA/MA/BA | |
Porting to WPF and extension of existing edge-based interaction concepts for scrolling/zooming a map on a rugged Tablet PC In this work, several zooming and scrolling alternatives shall be implemented and evaluated which can be operated from the edge of a special, very rugged and stable Tablet PC. The focus of the work lies on the evaluation and the analysis of the results of the implemented alternatives. | Hiwi | |
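For the "Detection and Texturing of 3D Objects in Multiple Cameras" topic above, the core texturing step can be illustrated with a minimal sketch. It assumes calibrated pinhole cameras (intrinsics K, world-to-camera extrinsics R, t) and CAD vertices already posed in world coordinates; the function names are hypothetical and occlusion handling is deliberately omitted.

```python
import numpy as np

def project_points(K, R, t, X_world):
    """Project Nx3 world points into a calibrated pinhole camera.

    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation
    (world-to-camera), X_world: Nx3 vertex positions.
    Returns Nx2 pixel coordinates and a mask of points in front of the camera.
    """
    X_cam = X_world @ R.T + t            # transform into the camera frame
    in_front = X_cam[:, 2] > 1e-6        # keep only points with positive depth
    x = X_cam @ K.T                      # apply intrinsics
    px = x[:, :2] / x[:, 2:3]            # perspective divide
    return px, in_front

def sample_vertex_colors(image, K, R, t, vertices):
    """Assign each visible CAD vertex the color of the pixel it projects to."""
    h, w = image.shape[:2]
    px, valid = project_points(K, R, t, vertices)
    u = np.round(px[:, 0]).astype(int)
    v = np.round(px[:, 1]).astype(int)
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(vertices), 3), dtype=np.uint8)
    colors[inside] = image[v[inside], u[inside]]
    return colors, inside
```

In the actual project, a proper texture atlas with per-triangle visibility testing and blending across the multiple cameras would replace this simple per-vertex color lookup.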
Running topics | Type of thesis | Student | Image |
---|---|---|---|
Augmented reality driving assistance systems | DA/MA/BA | ||
Increase motivation to do sports by manipulating an ego AR view of the trainee in realtime In this thesis the student will develop an ego AR (Augmented Reality) view which will be augmented according to the trainee's workload in real time. While training, the trainee looks into a digital mirror. The goal is to motivate the trainee by activating his/her curiosity. This work will be done in close cooperation with the company eGym. | DA/MA/BA | ||
Development and evaluation of an innovative text input technique for Tablet PCs using the back side Within the research project SpeedUp, a user interface is to be developed which, among other things, meets the special requirements of so-called mass-casualty incidents. A Tablet PC (touchscreen) shall be used as the platform. Different data, for example about casualties or rescue personnel, are to be entered solely via the touchscreen surface of the Tablet PC, so that even in time-critical, unstable and stressful situations the rescue forces are able to fill in the forms directly on the Tablet PC and forward data from the incident site to the incident commander. The user interface shall support the work of the incident commander and help him stay in contact with the control center as well as with the rescue forces and coordinate them. The focus of this work is the input of text without the help of mouse and keyboard. The more intuitive and efficient the graphical user interface of such an application is, the more time remains for rescue forces to save lives. For this reason, a development of the system close to the target group is of particular importance. The text input concept already exists, and an implementation of the Gestyboard prototype is already available as well. | Bachelor Thesis | Christoph Bruns | |
Group visualizations on a digital map used in a mass-casualty incident In this work, several alternatives shall be developed and evaluated for grouping logical units on the map. While the basic concept already exists, it has to be refined and improved by the student and evaluated against other concepts in a user-centered fashion. | DA/MA/BA | ||
Operational-section partitioning of a digital map application for better organization of important resources and patients in a mass-casualty incident In this work, an algorithm shall be developed that visualizes the different operational sections in an MCI (mass-casualty incident) and serves as the basis for selecting, zooming and scrolling a map application. The application shall run on a Tablet PC and be fully operable from its edge. | DA/MA/BA | ||
Emulation of a mouse on multitouch screens In this work, a simulation of the conventional mouse shall be developed on a multitouch screen, with which a pointer (cursor) is used for control as usual. This is intended to solve the problem of low precision on touchscreens. If interested, please send an e-mail to artingee@in.tum.de or coskun@in.tum.de | DA/MA/BA | ||
Repetition of MapInteraction2.0 on an Android Device In this work, the map interaction concepts for scrolling the map that were developed in the SpeedUp project shall be re-implemented on an Android device and compared with the results of the study conducted with a rugged Tablet PC. Tasks of the student: - literature review in the areas of emergency services, map applications and touchscreens - re-implementation of the concepts on an Android device - conducting an evaluation similar to the second iteration - analysis of the results - write-up | Bachelor Thesis | Andreas Schmidt | |
Design and Development of a Mobile Indoor AR Application The purpose of this project is to design and implement a mobile AR application for the indoor exploration of a building. The mobile AR app should enable visitors to the TUM CS department to explore the building as well as get some important information. A user study also has to be conducted to report on the intuitiveness, acceptance and usability of the app. | Bachelor Thesis | Sandra Mueller | |
Conception, implementation and evaluation of alternative realizations of a virtual joystick In this work, alternatives to the well-known virtual joystick for touchscreens shall be conceived and implemented. Subsequently, they are compared with respect to different parameters such as intuitiveness, efficiency and usability in a user study. | Bachelor Thesis | ||
Advanced Intra-operative Visualization In Neurosurgery Abstract With the advance of imaging facilities and their broad availability in medicine, it has become essential to combine data from multiple image modalities such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) data and recently ultrasound (US) into one view. Images are important in pre-operative planning, in intra-operative guidance and in post-operative evaluation and verification of the success. The most time-sensitive task is intra-operative guidance, as it does not allow for much input from the physician. Within the workflow of an intervention, the right data has to be available at the right time. This work summarizes the possibilities offered to keyhole surgery of the brain by today's advanced visualization techniques. With this application in mind, a simple focus-and-context real-time volume renderer is implemented that tries to tackle some of the problems inherent to intra-operative use, and its design is discussed. It has the aim of integrating CT and/or MRI with US in one focus-and-context view of the volume. To visualize the brain, special methods to remove the cranial bone occluding the brain have to be used. A major issue is the integration of ultrasound into this multi-pass focus-and-context visualization. A complete pipeline is designed that can handle all of the above in a real-time approach. As a proof of concept, a straightforward volume renderer is implemented that supports the techniques presented in the thesis. | DA/MA/BA | André Aichert | |
Augmented Reality Tabletop Game | Bachelor Thesis | Stefan Laimer, Florian Weinberger | |
Accelerated solving strategies in Anisotropic X-ray Dark-field Tomography Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics. | Bachelor Thesis | Felix Merkl | |
Realtime Medical Volume Rendering on Microsoft Hololens The recent release of the Microsoft Hololens developer kit opens up exciting possibilities to further study the utility of head-mounted displays for augmented reality applications in medical procedures. An important feature is the visualization of volumetric images, such as MRI and CT. Due to the hardware limitations of the Hololens Graphics Processing Unit (GPU) and the missing OpenGL support, off-the-shelf solutions do not provide satisfactory results. This project aims to explore the feasibility of using the onboard GPU of the Hololens for volume raycasting. | Bachelor Thesis | ||
Determining the pose of a mobile camera relative to objects from a georeferenced environment model In this bachelor thesis, the use of georeferenced building data from publicly available sources for determining the position and orientation (pose) of a mobile outdoor camera was investigated. Starting from an initial estimate obtained from sensors for global position and orientation, the initial camera pose was optimized using a data source of spatial environment models. In a further step, the camera pose was tracked over time by incorporating the different sensor modalities, in order to enable applications that require a spatial reference to the environment. Google Earth served as an example of the spatial data source; a vehicle equipped with the corresponding sensors was used as the carrier platform for the mobile camera. | DA/MA/BA | Benedict Brück | |
Comparative Study on CNN Initialization | Bachelor Thesis | Michael Wengler | |
Robotic Anisotropic X-ray Dark-field Tomography: calibration Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics. | Bachelor Thesis | Carolin Bruckmaier | |
User supporting Visualisationmethods for Hand-Eye Calibration | DA/MA/BA | Michael Boxhammer | |
Methodological Categorization of Vessel Registration Techniques | Bachelor Thesis | Stefan Matl | |
Block-based solving strategies for Anisotropic X-ray Dark-field Tomography Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics. | Bachelor Thesis | Rebecca Richter | |
Development of a tool for reconstruction of CT images using Rebinning | Bachelor Thesis | Felix Kopp | |
Implementation and evaluation of iterative solving methods for X-ray Computed Tomography Reconstruction of X-ray computed tomography (CT) data enables insight into the human body without a surgical procedure. The basic concept comes down to sending X-rays through the human body and measuring the changed X-rays on the other side of the patient. Such methods are called projective imaging methods. In order to reconstruct a 3D volume of the human body, providing a map of the physical properties that led to the corresponding projective measurements, several algorithms exist, for example direct methods such as Filtered Backprojection, or iterative methods. Iterative methods can be classified into statistical approaches using the maximum likelihood and into series expansion methods using linear equation systems. Consequently, multiple algorithms exist for computing tomographic reconstructions. (An illustrative iterative-solver sketch is given after this table.) | Bachelor Thesis | Theodor Cheslerean Boghiu | |
MRF as regularizations on forest predictions for matching meshes | Bachelor Thesis | Vera Hug | |
Gamification in the Medical Context Today's clinical procedures often generate a large amount of digital images requiring close inspection. Manual examination by physicians is time-consuming, and machine learning in computer vision and pattern recognition is playing an increasing role in medical applications. In contrast to pure machine learning methods, crowdsourcing can be used for processing big data sets, utilising the collective brainpower of huge crowds. Since individuals in the crowd are usually not medical experts, preparation of medical data as well as an appropriate visualization for the user becomes indispensable. The concept of gamification typically allows for embedding non-game elements in a serious game environment, providing an incentive for persistent engagement of the crowd. Medical image analysis empowered by the masses is still rare and only a few applications successfully use the crowd for solving medical problems. The goal of this project is to bring gamification and crowdsourcing to the Medical Imaging community. | Bachelor Thesis | Stefan Matl | |
Distance Visualization for HMD Interaction | DA/MA/BA | Maximilian Tharr | |
Ultrasound Training Simulator using a Haptic Feedback Device | DA/MA/BA | Hongming Huai | |
Tracking Error Evaluation and Propagation With Hybrid Tracking System Both IR-optical tracking (OT) and electromagnetic tracking (EMT) are clinically established technologies for real-time tracking of imaging devices, instruments, and other objects. Hybrid magneto-optic tracking systems overcome the individual disadvantages of both technologies, at the price of additional calibration steps, which may introduce further errors. For optical tracking systems there exist methods to estimate the current tracking error, but for electromagnetic tracking systems it is hard to detect errors or even correct them, in particular with dynamic sources of error. Several electromagnetic tracking systems are to be comparatively evaluated. Approaches for combination into hybrid magneto-optical tracking systems need to be selected, and the accumulation and propagation of errors from data acquisition over different calibration methods needs to be considered. Possible approaches for improving tracking and calibration accuracy should be pointed out. If you are interested or if you have questions, please contact Tobias Reichl. | DA/MA/BA | ||
Analysis of Tool Tracking Methods Fluoroscopic images are used in intravascular guided interventions to help physicians steer the tools towards a desired location. Several algorithms have been developed in this group to target different clinical applications and settings. The goal of this project is to thoroughly analyze these methods and test different variations of pre- and post-processing schemes. | IDP/Klinisches Anwendungsprojekt | ||
F the FRAVE The FRAVE is a Fully Reconfigurable CAVE at the showroom I-Tüpferl in the Magistrale of the computer science building. It is mainly driven by a 3D terrain rendering software that uses Equalizer for parallelization of rendering. Our approach goes further: the F stands for Fully, but Equalizer does not yet support dynamic reconfiguration of displays. Your honorable job is to extend Equalizer to make the FRAVE what it is. | Hiwi | Evgeny Ilyushkin | |
Integration of Markerless Tracking with Global Reference Frames The project aims at introducing feature-based tracking to the real world in terms of scale and orientation (7 DOF). The problem is that feature-based tracking provides only 6 DOF plus a scale, and the scale is not reliable because the coordinate system of feature-based tracking depends on the features it is initialized with; the origin of the system therefore differs from one initialization to another. On the other side, marker tracking is relatively reliable and accurate, but there are industrial situations where using markers is not suitable, and the dependency on the presence of markers acts as a barrier towards using tracking in our daily life. The approach to solving this problem is to use reference pose frames from an accurate tracking system (marker tracking, Advanced Real-time Tracking - A.R.T., etc.) to register the feature map of feature-based tracking in the world. Finally, this hybrid tracking system is supposed to combine the independence of feature-based tracking with the accuracy of the other tracking system, yielding a reliable, robust and powerful tracking system. (An illustrative alignment sketch is given after this table.) | Bachelor Thesis | Mahmoud Bahaa | |
Controlling 3D objects by using a multitouch surface with gesture recognition This thesis is about using a multitouch surface to control 3D objects, using the example of a chemistry program. Two systems already exist: the TISCH libraries for the multitouch surface and the Ubitrack libraries for the manipulation of 3D objects. Ubitrack contains drivers for different devices which can be used to control 3D objects, so the task of this thesis is to write such a driver for the TISCH. The main challenges of this thesis are to find a way to link both libraries using the existing tools and additional libraries, to identify the possibilities of interaction between both systems, to map the different gestures which can be made on a multitouch surface to rotations, movements and other actions on 3D objects, and finally to make sure that the driver works in the same way Ubitrack users are familiar with. | Bachelor Thesis | Franziskus Karsunke | |
Improving Generalizability through Generative Adversarial Domain Adaptation for MR Spectroscopy Spectroscopy is a technique that, when used in medicine, uses magnetic properties to evaluate the chemical composition of tissue of interest in a noninvasive manner. Analyzing quantified metabolite ratios can help physicians to differentiate between physiological and pathological tissues. This is an advanced topic that can be simplified to signal processing. In collaboration with a MRS research group at University of California, San Francisco, we are looking to develop a tool to aid in the training of DL models for spectra analysis. High quality, processed, and annotated medical data is expensive in general. This is even more problematic with MRS data. Current physics models can generate synthetic spectra, but due to the nonlinearities inherent in spectroscopy data and its acquisition protocols, models trained on these spectra do not perform well when tested on real spectra. | DA/MA/BA | Linus Kreitner | |
ARKit Games | DA/MA/BA | ||
Gamified Competition Game elements are used throughout modern development to motivate and entice users. Still, we lack a clear understanding of which elements affect us the strongest and how they can be improved. To gain further insight into this effect we want to conduct a study. We develop a simple game for browser or desktop use and observe players' usage times throughout different scenarios. For this thesis, we want you to develop the game and observe your users in their interaction. | DA/MA/BA | ||
Fog of Triage [DE] In this work, the principle of "Fog of War", originating from the computer games community, shall be transferred to and evaluated for the emergency-services use case of so-called triage. Due to the triage context, the resulting concept is called "Fog of Triage". Triage is a procedure established in the emergency services, originally coming from the military domain, for categorizing injuries with respect to their criticality. The focus here lies on user-centered development. As an associated partner in the overarching SpeedUp project, the ASB München provides personnel and expert knowledge that benefit the quality of this work. | Bachelor Thesis | Sebastian Klingenbeck | |
3D Bodymapping from RGB-D cameras | IDP/Klinisches Anwendungsprojekt | ||
Computer Aided Diagnosis of Moles | IDP/Klinisches Anwendungsprojekt | ||
Interactive Augmented Reality for Anatomy Teaching | DA/MA/BA | ||
Animation of typical sequences of events in a mass-casualty incident on a digital map and a requirements analysis with the ASB based on it In this work, different situations of an MCI (mass-casualty incident) shall be animated. The goal is to show these animations to people from the emergency services and let them interpret the situation. Explicit situations are to be included in which, for example, rescue forces take inefficient routes. The test subject shall then state in which situations he or she would intervene if these were not animations but real-time data. The tool used to create the animations can be chosen freely by the student. Options are: Flash, Blender, Microsoft Blend, etc. | Bachelor Thesis | Teodora Velikova | |
Competition on Purpose Motivational benefits are the essential driving force behind Gamification and Serious Games. Dull tasks can be intensified, learning can be made accessible and joy can be found in unexpected places. However, many categorizations show that different game elements can cause different motivational impacts. This project is about determining the precise effect competition has on participants in games with a purpose, specifically calibration tasks in Augmented Reality or Virtual Reality. How strong is the difference made? How do different competitive scenarios affect user behaviour? | DA/MA/BA | ||
Skeleton Animation for an Augmented Reality Magic Mirror The goal of this thesis is to implement a skeleton animation for an augmented reality magic mirror. | Bachelor Thesis | ||
Improving User Recognition for an Augmented Reality Magic Mirror The goal of this thesis is to improve the recognition of users in an augmented reality system by using simple computer vision and machine learning techniques. | DA/MA/BA | ||
Markerless tracking of persons and devices in the OR - possibilities and approaches Markerless tracking is becoming increasingly important in augmented reality, for example in the automotive and manufacturing industries or in the entertainment industry (motion capture). In contrast to normal environments, the conditions in the OR are quite different and make markerless tracking more difficult. In this bachelor thesis, different possibilities shall be investigated and approaches tried out for how devices and persons in the OR can be recognized using different methods of markerless tracking. | Bachelor Thesis | Sonja Vogl | |
Meta-Learning of Regularization Parameters in X-ray Computed Tomography Medical imaging modalities such as X-ray Computed Tomography (X-ray CT) have been the basis of accurate diagnosis in clinical practice for decades, one particular example being the detection of tumors. But medical imaging also plays a central role during the therapy, for example when planning complex surgeries or when planning and monitoring radiation therapy treatments. | Bachelor Thesis | Stefan Haninger | |
Visualization of 4D Breathing Motion from Medical Datasets Through new improvements in medical imaging, it is possible to record multiple three-dimensional volumes over time, capturing the organ motion. To be useful for the medical staff, these volumes need to be displayed in some way. This bachelor's thesis first introduces the basics of volume rendering, especially the principle of raycasting, to generate 3-dimensional images from medical data sets. The main focus lies on visualizations that enhance the viewer's perception of movements inside the volume. Five different visualization techniques are presented which are all designed for interactive usage and an optimal interplay with the volume renderer. In order to reach interactive performance, all methods rely on hardware acceleration through a GPU. (An illustrative raycasting sketch is given after this table.) | Bachelor Thesis | Markus Müller | |
A Framework for Visual Tracking of Articulated Objects | DA/MA/BA | Daniel Muhra | |
Classification of AR Presentation Principles: Trends and Gaps in AR | Bachelor Thesis | Monika Nill | |
Octree-based line integral algorithms for X-ray Computed Tomography X-ray Computed Tomography (CT) has been essential for medical diagnostics for decades now. Based on accurate forward modeling and solving the inverse problem, tomographic reconstruction is the algorithmic framework that enables X-ray CT. Central to the forward model is the projector and back-projector pair computing discretized line integrals, which model the interactions of X-rays with the sample and the acquisition geometry. Along with high computational demands, accuracy is paramount. (An illustrative line-integral sketch is given after this table.) | Bachelor Thesis | Philipp Bock | |
A Web based Photogrammetric Camera Calibration Toolbox Computer Vision and Augmented Reality are gaining more and more popularity and importance these days. Camera calibration, which is an essential preparatory step for both, remains a complex and time-consuming process. The analysis of commonly found camera calibration tools and applications has shown that they are often either inaccessible or hard to use. This thesis describes the implementation of a camera calibration service. This web service uses a 2D plane-based calibration for a straightforward procedure which provides accurate results. Besides, it features presets for calibration patterns, module parameters, and export templates. Additionally, camera calibrations can be saved, reviewed, and loaded. All of this is achieved with cutting-edge technologies. The front-end is created with HTML5, CSS3, and JavaScript. The back-end is powered by the fast and scalable server environment Node.js. Persistent user data is made possible with the document-oriented database MongoDB. The finished implementation demonstrates that camera calibration as a web service is, in fact, both feasible and beneficial. Furthermore, it serves as a solid foundation for future work and could possibly be a pioneer among web-based photogrammetric camera calibration services. (An illustrative plane-based calibration sketch is given after this table.) | Bachelor Thesis | Benjamin Schagerl | |
Stereo AR Prototype for Ophthalmic Interventions using Unreal Engine Ophthalmic interventions are amongst the most challenging surgical interventions as they require high handling precision from the surgeon while at the same time only providing a limited stereo view through a microscope. For vitreoretinal procedures, where the surgeon has to operate on the retina inside the eye, additional complexity is added by the even more limited field of view, high defocusing and the endoscopic access pathway. With the Zeiss Lumera 700 [1], Carl Zeiss Meditec has introduced the first ophthalmic microscope with integrated live OCT imaging. This allows the surgeon to assess cross-sectional slices in real time during the surgery, improving decision making and guidance. In the current product, the OCT slices are augmented onto the surgical view using a semi-transparent display in one eye channel to provide the overlay (see Fig. 1). The goal of this project is to recreate this system on a fully digital platform, using stereo cameras integrated into the microscope and OCT data streamed from a web interface. The envisioned platform should recreate the views currently available inside the microscope using either a stereo screen or, alternatively, a head-mounted display such as the HTC Vive, to create a fully virtual surgical environment. However, it should display the OCT slices augmented on both eyes to provide better depth placement. Based on this digital recreation, different experiments can be performed to optimize placement and visual parameters of the augmented views to minimize distraction from the direct view and occlusion problems. The flexibility of Unreal Engine [2] shall be leveraged in this project to maintain extensibility of the project's outcome for future extensions. | IDP | Aleksandra Dokic | |
Efficient Parallel Projectors for X-ray Computed Tomography X-Ray Computed Tomography (CT) is one of the cornerstones of medical imaging for many decades now. The tomographic reconstruction of CT is quite well understood theoretically and practically, but many open research issues remain. A central point for any reconstruction method is the projector and back-projector pair, which models the interaction process of X-rays with matter, the detection process in the detector and the acquisition geometry. Several standard methods for this are described in the literature, each with specific advantages and disadvantages. Common to all these methods are high computational requirements, necessitating the use of parallel computing. | Bachelor Thesis | Erdal Pekel | |
Addressing Artefact-Related Image Challenges In Automated Polyp Detection Deep learning techniques are becoming the state of the art in automated polyp detection, and the performance of these algorithms relies heavily on the size and quality of the training datasets. Specifically, endoscopic video frames tend to be corrupted by various artefacts that impair their visibility and affect the polyp detection rate. This thesis aims to tackle the issue of artefacts in endoscopic images by extracting knowledge from an artefact dataset that contains over 2,000 images with more than 17,000 annotated artefacts. Our first step was to participate in the ISBI 2019 Endoscopic Artefact Detection Challenge, where we implemented a RetinaNet architecture and finished 3rd in the challenge. This object detection framework is then trained on polyp images while we explore different knowledge combination methods, such as Learning without Forgetting, to address the artefacts in these images. We hope that this improves polyp detection performance as well as further expands our comprehension of the effects of artefacts on automated detection in endoscopy. | Bachelor Thesis | Maxime Kayser | |
Manifold-based regularization of spherical functions in Anisotropic X-ray Dark-field Tomography Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics. | Bachelor Thesis | Nikola Dinev | |
Bachelor thesis: Implementation of an SVG GUI Builder in the Context of the SpeedUp Project In the context of the SpeedUp project a user interface is needed which fulfils the requirements of handling a major incident involving mass casualties. The goal of this project is to develop a program aiding a GUI programmer in the process of implementing such an interface. The general idea is to extract the information about control elements provided by a GUI designer in the form of an SVG file and to create a GUI pre-design based on that information. The GUI programmer then only needs to implement the actual functionality of each predefined control element. The GUI design and the GUI programming will be strictly separated, thus enabling the generation of multiple different designs with the same functionality. Using this approach a GUI designer will also be able to create and edit the needed GUI designs without any programming skills. | Bachelor Thesis | Kang-Hunn Lee | |
Implementation and Evaluation of Motion-Based Input for a Volleyball Game | Bachelor Thesis | Johannes Schmidt | |
Touch Floor Large-scale 3D plasma TVs on the ground are equipped with multi-touch capabilities. Make them available as user interfaces. You'll be part of an international partnership with KAUST university. | Bachelor Thesis | Rolf Sotzek | |
Reconstruction of sparsely sampled X-ray Computed Tomography data using dictionary learning Reconstruction of X-ray computed tomography (CT) data enables insight into the human body without the need of opening it. The basic concept comes down to sending X-rays through the human body and measuring the overall change the X-rays have undergone on the other side of the patient. Such methods are called projective imaging methods. Tomographic reconstruction aims at reconstructing a 3D volume of the human body providing a map of the physical properties which led to the corresponding projective measurements. In order to reduce noise and/or improve reconstruction quality (e.g. with respect to a specific task), one can incorporate prior assumptions or knowledge into this reconstruction process, i.e. regularization. (An illustrative regularized objective is given after this table.) | Bachelor Thesis | David Frank | |
Regularization of spherical harmonics for Anisotropic X-ray Dark-field Tomography Reconstruction of X-ray computed tomography (CT) data enables insight into the human body without the need of opening it. The basic concept comes down to sending X-rays through the human body and measuring the overall change the X-rays have undergone on the other side of the patient. Such methods are called projective imaging methods. Using grating interferometry, acquisition of phase-contrast and dark-field data is now possible, in addition to the usual attenuation data. The directionally dependent dark-field data allows the tomographic reconstruction of anisotropic scattering coefficients, which can be represented by spherical harmonics. | Bachelor Thesis | Maximilian Endrass | |
Statistical Reconstruction Methods for Anisotropic X-ray Dark-field Tomography X-ray grating interferometry enables the simultaneous acquisition of absorption contrast, phase contrast and dark-field contrast. The directionally dependent dark-field data allows the tomographic reconstruction of anisotropic scattering coefficients inside the sample. This allows the recovery of structural orientations without the need to explicitly resolve them in the X-ray detector. The anisotropic scattering coefficients can be represented as spherical functions, such as spherical harmonics. Based on a closed-form spherical-harmonics-based forward model, the anisotropic dark-field data can be reconstructed in three dimensions. We refer to this technique as "Anisotropic X-ray Dark-field Tomography", or in short AXDT. (A schematic form of such a forward model is sketched after this table.) | Bachelor Thesis | Nathanael Schilling | |
Regularization of Structures in Anisotropic X-ray Dark-field Tomography Medical imaging modalities such as X-ray Computed Tomography (X-ray CT) have been the basis of accurate diagnosis in clinical practice for decades, one particular example being the detection of tumors. But medical imaging also plays a central role during the therapy, for example when planning complex surgeries or when planning and monitoring radiation therapy treatments. New X-ray contrast modalities, such as phase-contrast and dark-field contrast, are being developed in the last few years, based on a break-through in grating interferometry, with many promising clinical applications, ranging from breast cancer detection to diagnosis of osteoporosis. | Bachelor Thesis | Maximilian Hornung | |
Motivational Analysis of the Design of an On-Board System for Training Ecological Truck Driving | Bachelor Thesis | Thiemo Taube | |
Deep Learning for Tool Detection and Tracking in Microsurgery The aim of this project is the investigation of the state-of-the-art deep learning architectures and frameworks with the purpose of detection and tracking of instruments in retinal microsurgeries. An implementation of a deep learning based instrument detection workflow shall be provided at the end of the project. | Bachelor Thesis | Luca Alessandro Dombetzki | |
Analyzing and Monitoring Tracking Accuracy of an A.R.T. System The aim of this bachelor thesis is to explore which criteria limit the precision of an infrared optical tracking system, and how a certain degree of precision can be guaranteed. Furthermore, a tool is developed that enhances an existing tracking system software with the capability of setting and monitoring the tracking accuracy. The main task in the first half of this project is to find out which parameters determine and influence infrared optical tracking. After an intense brainstorming on parameters that are relevant for tracking accuracy, all these items are categorized and a preliminary estimation of the relevance of each of them is made. Next, some of these criteria are extensively assessed using the infrared tracking system of the chair this thesis is done at. Therefore, procedures for measuring and evaluating scientific data have to be established first. For example, scripts and interfaces have to be created in order to establish data exchange between the tracking system and visualization tools. In the second half of this project it will be evaluated if and how the knowledge and results of the previous study can be used for monitoring or optimization of tracking accuracy. To demonstrate this, a precision monitoring tool shall be developed for A.R.T. GmbH, manufacturer of the before mentioned infrared optical tracking system. | Bachelor Thesis | Christian Trübswetter | |
An introduction to OR-Use, a systematic usability data acquisition framework for the operating theatre Due to the rapid technical development in recent years, studying the usability of a system has become increasingly popular. Usability is seen as a crucial factor for the success of many systems and products. In a complex domain like the operating room (OR), creating systems with high usability is even more important, as deficiencies in the design can have catastrophic effects. Due to the complexity of medical devices and the associated environment, it is very challenging to gather usability data. Furthermore, there is a strong need for storing and managing this information in order to evaluate and thus improve the usage of the devices. In order to manage and evaluate the usability of medical devices, we propose a framework architecture for intra-operative usability data management, which is based on an OR-specific domain model. Our framework therefore allows recording usability logs with respect to the workflows of a surgical intervention. To store the accumulated data, a client-server based web service is used to transmit the data to a central location, where further analysis can be applied. In order to document the feasibility and benefits of our system, several performance tests are conducted using a novel imaging device in combination with the proposed framework architecture. | Bachelor Thesis | Max-Emanuel Hoffmann | |
Classification and Analysis of Presentation Principles in AR Applications The AR design space is large. We aim at classifying this design space. We then want to find which combinations of presentation principles have not been used so far or are used to a larger extent. Earlier work has already been done: a paper lists the very beginning, and a thesis went deeper into it, leading to a technical publication. We now want to look at systems that have not been built in the context of the ISMAR conference series, but rather in application-oriented settings. | Bachelor Thesis | Ulrich von Waldow | |
Evaluation of Advanced real-time Visualization Techniques for Medical Augmented Reality In recent years GPUs have evolved from fixed pipeline to fully programmable highly parallel architectures. These advances have led to equal improvements in the field of real-time medical image visualization. Combined with modern video see-through head-mounted displays this offers great opportunities for real-time, high-quality augmented reality visualization of medical data in interventional or surgical settings. Especially in minimally-invasive procedures, detailed, task-specific, visualization aids the physician in understanding the patient-specific anatomy and supports the navigation of medical instruments to the region of interest, in absence of a direct line of sight onto the operating situs. A possible medical AR configuration is to overlay real video camera images of e.g. a torso with a virtual image generated from 3D patient data. Recent approaches have dealt with the integration of hardware accelerated volume rendering into such an environment. This work investigates the integration of advanced rendering techniques, such as virtual mirror and focus and context rendering for an improved visual perception of the augmented reality scene. Furthermore, real-time occlusion handling of real and virtual objects has been added. The techniques have been efficiently implemented in GPU programs and allow real-time visualization on computationally demanding stereo video see through AR systems. | Bachelor Thesis | Matthias Wieczorek | |
Wavefront coding techniques for extended depth of field in lightfield microscopy Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. | Bachelor Thesis | Felix Wechsler | |
Investigating Viewpoint Control Metaphors for hand-held devices in Virtual Environments Designing interfaces for 3D VR environment navigation is a complex undertaking. Trying to give the user more freedom in his interactions and an immersive experience, most novel interfaces rely on spatial input utilizing tracking systems. To keep the system intuitive and immediately understandable, they usually rely on movement metaphors relating to known skills or concepts and familiar motions already practiced, like using a steering wheel while driving. This work examines an interface using two metaphors and the problems occurring when allowing the use of both at the same time. The concept examined here uses a hand-held tablet (Samsung Galaxy Tab) held sideways. Prior to this work, two metaphors had already been established and tested. The first is the car's steering wheel metaphor, while the second is analogous to an airplane leaning into a turn. Depending on the way the device is held, the car metaphor is expected to be dominant in the upright position, whereas the airplane metaphor should correlate more with a flat holding position. Tests have shown this to be mostly true. They also showed that a seamlessly transitioning, parallel use of both metaphors for all holding positions is desirable. This, however, led to problems with the airplane metaphor. The airplane metaphor relies on the degree of turning the device around its local axis parallel to the tablet's screen, pointing forward when held flat before the user. Holding it upright, turning around the same axis yields an inverted metaphorical reference for turning: left turns in the flat position become right turns in the upright position. The aim is to find a third metaphor ("window frame") accounting for the previously confusing functionality of the airplane metaphor and to integrate it in an equally seamless manner with the existing two. | Bachelor Thesis | Sandro Weber | |
Investigating Haptic Interfaces for Viewpoint Control Metaphors in Virtual Environments A major issue in Virtual Environments is the process of navigating through a virtual world. The underlying base concept for the interpretation of the user's input in such a setup is the usage of navigation metaphors. By way of example, holding the input device vertically like a steering wheel implies a car-driving metaphor, whereas horizontal usage like a gamepad indicates a mental model of flying an airplane. To gain insight into whether there is a specific holding angle that marks the usage of a certain metaphor, a user study has been conducted. The starting point for this objective was the assumption that the current metaphor depends on the holding angle of the input device when the user begins a manoeuvre. The input method was the two-handed, spatial moving of an optically tracked tablet computer. Before each manoeuvre, specific degrees of freedom of this device were blocked using a haptic feedback unit installed on it. | Bachelor Thesis | Maximilian Weber | |
Vision-based Robotic Pick and Place (with KUKA Roboter GmbH) This project aims at integrating robust algorithms for object pose estimation and tracking as vision guidance for industrial robotic manipulators. Such vision-based control will be applied to industrial tasks such as bin picking and object pick-and-place from shelves. The main goal of the project is to achieve robust vision-based robot control using inexpensive sensors, in comparison to standard expensive industrial ones. The project includes both testing of the perception part and integration with the path planning and grasping algorithms. The project is carried out in collaboration with KUKA Roboter GmbH, one of the leading companies in the field of industrial robots. The student will often be working directly with KUKA Corporate Research, located in Augsburg (around 35 minutes by train from Munich HBF). The student will receive financial support from KUKA (in the form of a monthly stipend) for the duration of the Master Thesis. | Master Thesis | ||
RGB-D Object Detection with Deep Learning Detecting multiple 3D objects in a scene and estimating their 6DoF pose is a challenging task, especially in presence of clutter and heavy occlusions. Furthermore, scaling to many objects without increasing the runtime poses another challenging problem. With this thesis, we plan to advance the state of the art by developing a new 3D object detection approach based on the use of Convolutional Neural Networks (CNNs). | DA/MA/BA | ||
Automatic detection and reconstruction of catheter-tips in X-ray images The current incidence of sudden cardiac death (SCD) worldwide is 4 to 5 million cases per year. Left ventricular dysfunction, such as ventricular tachycardia (VT), is currently the best available predictor for SCD. An advanced technology called radio-frequency (RF) catheter ablation can help treat these disorders. This procedure involves inserting a catheter inside the heart and delivering RF currents through the catheter tip so as to ablate the arrhythmogenic site. The catheters contain a number of electrodes used to record intracardiac electrograms. Using these electrograms, the firing spot or conduction path causing the arrhythmias can be identified. Objectives: The aim of this project is to investigate image processing and pattern recognition techniques in order to automatically localize the catheter tip in 2D fluoroscopy images. Subsequent 3D reconstruction of the catheter tip and fusion with the electrical activation times will serve as a navigational tool for the electrophysiologists. (An illustrative triangulation sketch is given after this table.) | DA/MA/BA | ||
Instrument Navigation using US Imaging | DA/MA/BA | ||
Multimodal Interaction in the FRAVE Many interfaces have been and are being implemented for interaction (travel, selection and manipulation) in virtual environments, in this case the FRAVE. The idea of this project is to implement the integration of these different interaction devices and the exchange of context data and information between the devices in general, and specifically between the touch panels and the hand-held devices used. The touch panel should be able to detect that the hand-held device is in proximity or even placed on the touch table, and exchange important context information, e.g. what is being modified and displayed on the FRAVE. | Hiwi | Maximilian Weber | |
2D-3D Registration of Cerebral Angiographies | DA/MA/BA | ||
Intraoperative Guidance via Medical Augmented Reality For the NARVIS project, two critical stages of minimally invasive spinal trauma surgery have been identified that can be improved with advanced visualization of imaging data. The stages port placement and pedicle screw placement require anatomical imaging data in order to give the surgeon sufficient information for performing his task. Port placement is a very early stage of the procedure that determines the access to the operation site and the course of the whole surgery. To allow the surgeon optimal access to the operation site, we want to provide intuitive guidance helping him find adequate places for the ports. Pedicle screw placement is one of the critical parts of the surgery, since the alignment and position of the screws decide on the success of the surgery in stabilizing the spine without harming surrounding tissue. For both stages, 3D guidance of surgical instruments can support the surgeon. The aim of the guidance can either be guiding to a certain position and orientation of the instrument or avoiding critical anatomical structures during the procedure. The guidance will be achieved by in-situ visualization with a head-mounted display (HMD) and visualization of preoperative CT data and intraoperative imaging data. | Diploma Thesis | Martin Schulze | |
Augmented Reality Game for Teaching Medical Skills Augmented Reality (AR) is a very promising technology for training. Today, 2D illustrations are used to teach complex three-dimensional structures. AR allows visualizing objects in 3D, co-registered with the real world. In medicine, AR allows viewing inside the patient and visualizing organs directly on the patient. We have an existing AR system using a head-mounted display (HMD) and a patient dummy. The goal of this project is to develop a game to teach medical skills by showing e.g. 2D slices and letting the user select the corresponding slice from the physical phantom. | DA/MA/BA | Andreas Spurny | |
Multimodal Consultation System for Patient Education in Plastic Surgery | Diploma Thesis | Patrick Wucherer | |
Multimodal Human Mocap | DA/MA/BA | ||
Precise Distance and Position Determination of Objects in the Driving Environment with Stereo Vision for various Applications | Master Thesis | Benjamin Kormann | |
Group-wise CT to Ultrasound Registration of Lumbar Spine Registration of pre-operative CT and freehand intra-operative ultrasound of lumbar spine aids surgeons in the spinal needle injection which is a common procedure for pain management. Patients are always in a supine position during the CT scan, and in the prone or sitting position during the intervention. This leads to a difference in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and intra-operative freehand ultrasound images of the lumbar spine is presented. The approach utilizes a point-based registration technique based on the *unscented Kalman filter*, taking as input segmented vertebrae surfaces in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming approach, while the CT images are semi-automatically segmented using thresholding. The registration approach is designed to simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to ensure convergence. | DA/MA/BA | Abtin Rasoulian | |
Visual Recognition of Surgical Actions Using a View-Invariant Descriptor The task of action recognition inside the operating room can help to identify the phase of an operation. It can also support the analysis of the workflow of an operation. Activity recognition has attracted a lot of attention during the past three decades, initially using a single camera and more recently multiple cameras [1]. In the current project, the task of surgical action recognition will be accomplished using a multi-camera system. The setup includes four CCD cameras installed on the ceiling of an operating room. The reason for the large number of cameras is the complex set-up of the operating room: it is not possible to cover the whole operating room with a single camera because of occlusions and its dynamic environment. The proposed framework will be based on the idea of Self-Similarity descriptors, which have been proven to give excellent results on the task of action recognition [2]. A view-invariant descriptor will help to recognize the same action from different cameras simultaneously. | DA/MA/BA | ||
Discovery and Detection of Surgical Activity in Percutaneous Vertebroplasty | Diploma Thesis | Ahmad Ahmadi | |
Navigated RF Ablation Currently, RF ablation needles are used to treat tumors in every part of the body. In order to place the tip of the needle in the center of the lesion, US imaging is commonly used for guidance. In this case, the surgeon tries to visualize the tumor and the needle in the same US image. The needle however is not visible in the US image before insertion, making it difficult to imagine the tumor location relative to the needle tip and orientation. Additionally, the surgeon needs to precisely handle the US transducer and the needle at the same time. The navigation system developed for this purpose as part of the CAMPAR framework helps to simplify the RF needle placement by dividing it into two simple consecutive tasks: lesion finding and needle placement. This way, the surgeon does not have to simultaneously hold the US probe and RF needle. Additionally, the needle can always be visualized relatively to the US plane. | Master Thesis | Claudio Alcérreca | |
In-situ visualization of laparoscopic image data The endoscope view is an important source of visual information for the surgeon in a minimally invasive surgery. In a common operating room, the endoscope display is located comparably far away from the operating site. In this thesis the possibilities of in-situ visualization with a head-mounted display (HMD) are going to be explored. | Master Thesis | Alfredo Higueras Esteban | |
Master IDP: Automatic Anatomical Landmark Detection Fast and accurate detection of anatomical landmarks is highly desired for both clinical decision-making and higher-level image processing algorithms. This project aims at building a fully automatic method for such a purpose. The main methods that will be explored include image registration and machine learning. | IDP | ||
Android Selection and Zooming metaphor in a VR environment The purpose of the project is to use the built-in sensors of an Android phone (accelerometer, compass) for the selection task in a Virtual Reality environment, and to use the phone's touch capability for zooming in VR tasks. This work is going to be part of an ongoing project; the Android phone has already been used for the travel task using the built-in sensors and the multi-touch capability of the phone. | Bachelor Thesis | Yanko Sabev | |
Development of Android touch interface for navigation in an immersive environment such as CAVE The navigation technique in this work is Walking In Place using touch: the basic idea is to simulate a locomotion technique using the fingers on a touch handheld device, in this case an Android phone. The first part of this Bachelor thesis is to implement the 2D translation. Using the Android SDK, the translation gesture of the user on the Android device is detected, and the translation value, speed of translation and angle of translation are calculated. | DA/MA/BA | Ashry Mohamed | |
Quantifiable Indices Towards Estimating Hemodynamics in Cerebral Aneurysms and the Efficacy of Flow Diverter Deployments A cerebral aneurysm is an abnormal dilation of a blood vessel in the brain that harbours the risk of rupturing. Hemorrhages resulting from ruptured aneurysms are a major cause of morbidity and mortality throughout the world. Flow diversion is an emerging endovascular treatment option that aims at redirecting blood flow away from the aneurysm neck by deploying a stent-like device (flow diverter) inside the parent artery. This local change in hemodynamics supposedly leads to flow stasis and thrombus formation inside the aneurysm, eventually cutting it off from the normal circulation of blood. Deciding whether an induced blood flow alteration is sufficient for aneurysm occlusion is an essential task that the treating doctor currently fulfills by visually comparing cerebral angiography sequences taken before and after flow diverter placement. Thus far, several angiography-based methods have been proposed that try to quantify the effectiveness of a flow diverter in reducing the flow inside the aneurysm. This research project focuses on evaluating these state-of-the-art methods while developing novel approaches towards predicting the efficacy of flow diverter deployments. | Master Thesis | Tobias Benz | |
Anonymization | DA/MA/BA | Christoph Niedermayr | |
Hybrid Segmentation Method for Treatment Planning of Abdominal Aortic Aneurysms Abdominal aortic aneurysm (AAA) is a vascular disease that results in an enlargement of the abdominal aorta due to a weakened aortic wall. The preferred treatment today is a minimally invasive procedure based on endovascular placement of an aortic stent graft. In order to perform this procedure successfully, the appropriate stent graft device has to be selected during treatment planning. Therefore, accurate measurement tools are required in order to calculate the stent graft's preferred size from the preoperatively acquired CT volume. Moreover, physicians need to choose the right material in order to avoid rupture at locations of high pressure. A thorough segmentation of both the aorta and the aneurysm can provide more information about the characteristics of the aortic wall and the stent graft's effect on it. | DA/MA/BA | Guy Lejeune | |
Efficient Visualization of Lighting Simulation Data | Diploma Thesis | Stanimir Arnaudov | |
Analysing the Accuracy of the Fluoroscopic Navigation and Image Registration in an established Medical Software Application in collaboration with BrainLAB The thesis deals with the analysis of the accuracies in the process of fluoroscopic navigation and image registration in the field of orthopaedic software, and shall give starting points and possible solutions to contribute to the refinement of said accuracies. | Diploma Thesis | Matthias Assel | |
Automated Lung Segmentation | Master Thesis | ||
Auto-Calibration of Spatial Sensor Information | DA/MA/BA | ||
BOOPS: Breast RegistratiOn fOr Plastic Surgery | DA/MA/BA | Martin Sälzle | |
Fast and Robust Background Subtraction Background subtraction is a commonly implemented procedure for higher-level applications like people tracking, human motion estimation and surveillance. As a first and crucial part in moving object detection, it provides a distinction between foreground and background objects in video data captured with a stationary camera. Despite the vast pool of methods published over the last two decades, background subtraction remains a complex task and a gold standard algorithm has not been found yet. Varying lighting conditions, moving shadows and changing background geometry are only a few difficulties a good algorithm has to deal with. Last but not least, many applications require real-time capability at a reasonable video resolution. Pixel- and color-based methods like Gaussian Mixture Models are very popular but reach their limit in difficult lighting environments. Newer approaches overcome those limitations by including spatial and temporal features. The aim of this work was to implement and analyze an algorithm that meets the requirements of an indoor environment. The algorithm therefore fuses an adaptive Gaussian Mixture Model with a texture analysis that uses Scale Invariant Local Ternary Patterns, to obtain an expressive feature space that is able to handle shadows and camouflaged objects. Sudden illumination changes are detected by an additional post-processing step that is based on luminance histograms. To exploit the power of today's highly parallelized GPUs, the implementation was almost completely ported to the GPU using OpenCL and allows processing full HD resolutions in real time. Finally, the system performance is comprehensively evaluated on relevant real and virtually generated indoor environment datasets. | DA/MA/BA | Christoph Resch | |
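The pixel/color-based stage of such a pipeline can be prototyped quickly; the sketch below uses OpenCV's adaptive Gaussian mixture background subtractor. It does not include the texture/SILTP stage or the GPU port described above, and the input file name and parameter values are placeholders.

```python
import cv2

cap = cv2.VideoCapture("indoor_sequence.avi")          # assumed input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # 0 = background, 127 = shadow, 255 = foreground
    # small morphological opening to suppress isolated noise pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:         # ESC to quit
        break
cap.release()
cv2.destroyAllWindows()
```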
Image Segmentation for Occupancy Map Reconstruction This work is part of the Collision Avoidance project, a project with Siemens Medical Solutions - Germany. When having a fully automated medical device in an operating room, the control of the device takes the patient's position into account but not the rest of the environment, i.e. surgical staff and additional equipment. This project aims to detect and prevent collisions between the medical device and its dynamic environment. The collision avoidance relies on a set of 16 cameras mounted on the ceiling of the interventional room. The combination of different views of the scene makes it possible to compute the probability of occupancy in each unit of space of the room in the neighborhood of the device. In order to get a static (low-resolution) occupancy map, the acquired images are processed to obtain silhouette images. Then, each voxel is projected onto the 16 images and its occupancy is tested. To do that, we need a robust segmentation: the more precise the segmentation, the faster the algorithm. The student works on the segmentation part. The internship is divided into two main objectives: first, the student will implement a standard segmentation algorithm, where each camera is considered independently; the segmentation should be fast and robust to illumination changes and background clutter. Second, the student will design a more advanced way to segment the images: the segmentation should take into account the fact that the images are projections of the same scene; therefore, it should be consistent with the intrinsic and extrinsic parameters of the set of 16 cameras. | Master Thesis | Nuno Barreiro | |
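The occupancy test that consumes these silhouettes can be sketched as follows: each voxel center is projected into every camera and a simple vote over the silhouette masks is accumulated. This is only an illustrative sketch under the assumption that calibrated 3x4 projection matrices are available; a probabilistic fusion could replace the voting, and all names are hypothetical.

```python
import numpy as np

def occupancy_map(voxel_centers, projections, silhouettes):
    """Fraction of cameras in which each voxel projects onto foreground.

    voxel_centers: (V, 3) voxel centers in world coordinates.
    projections:   list of 3x4 camera projection matrices (one per camera).
    silhouettes:   list of binary HxW masks (1 = foreground).
    """
    V = voxel_centers.shape[0]
    homog = np.hstack([voxel_centers, np.ones((V, 1))])   # homogeneous coordinates
    votes = np.zeros(V)
    for P, sil in zip(projections, silhouettes):
        uvw = homog @ P.T                                  # project all voxels at once
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(V, dtype=bool)
        hit[inside] = sil[v[inside], u[inside]] > 0
        votes += hit
    return votes / len(projections)
```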
Discrete Optimization and Uncertainty Estimation in Non-Rigid Image Registration Non-rigid image registration is one of the most popular problems in computer vision. Many different approaches have been used to recover the transformation between two images. Continuous methods often compute this transformation through optimization of a cost function using standard optimization methods such as gradient descent. However, in recent years discrete optimization methods have become very popular. They turned out to be powerful optimization tools in a wide range of vision problems. The key contribution of this work is a general framework which bridges the gap between the continuous models used in deformable registration and discrete optimization algorithms. Additionally, we developed an optimization method based on the famous Alpha-Expansion which overcomes the compromise in standard discrete labeling problems between computational speed and accuracy. We implemented a multi-level approach which is fast and capable of compensating for large and very small deformations at the same time. Furthermore, we introduce uncertainty estimation for non-rigid image registration. We are able to provide local uncertainty information for the recovered transformation. This allows us to evaluate the registration results but also offers new ways of advanced visualization. We tested our algorithms on optical flow estimation and medical image registration. | Diploma Thesis | Ben Glocker | |
Validation of Navigated Beta Probe Application in Cancer Surgery In minimally invasive tumor resection the desirable goal is to perform a minimal but complete removal of cancerous cells. In the last decades interventional nuclear medicine probes supported the detection of remaining tumor cells. However, scanning the patient with an intraoperative probe and applying the treatment are not done simultaneously. In the past we extended the one dimensional signal of a nuclear probe to a four dimensional signal including the spatial information of the distal end of the probe (current status). This signal can be then used to guide the surgeon in the resection of residual tissue and thus increase its spatial accuracy while allowing minimal impact on the patient. The next step is to prepare clinical experiments and integrate the solution into the clinical workflow. The student in charge of this project will contribute in that step by designing ex-vivo (and eventually in-vivo) experiment protocols, preparing the experimental setup and evaluating the experiments on their own. | DA/MA/BA | ||
Construction of Decentralized Data Flow Graphs in Ubiquitous Tracking Environments Augmented Reality (AR) is a form of interaction with computers by augmenting objects of the real world with virtual representations of otherwise invisible information. For a precise augmentation of the user's view of the real world with virtual objects, an up-to-date spatial model of the user's environment is essential. Classic AR applications are limited in their range by the scope and the diverse properties of commonly used sensors for tracking. To obtain such a spatial model in large-scale Ubiquitous Computing environments, for which Augmented Reality is a natural interface, different tracking technologies have to be combined transparently. The concept of Ubiquitous Tracking leads to such a dynamically configurable sensor network providing positional information. An abstraction of the underlying tracking infrastructure is obtained and a uniform query mechanism for spatial information is offered to AR applications. The spatial model of the environment can be represented by a graph structure. On the basis of this high-level description of available measured spatial relationships, information about geometric relationships between arbitrary objects can be inferred, whilst the actual computation of the positional information is provided by a network of interdependent software components. The concept of Ubiquitous Tracking is introduced to DWARF, a component-based framework to form highly distributed AR systems. This thesis deals with the aggregation of positional information in a distributed AR system built on DWARF. A new middleware component is introduced to DWARF concerning the distributed representation of spatial information. It provides flexible integration of mobile clients participating in the AR system. Positional information requested by applications is aggregated on the basis of a distributed representation of available spatial relationships. The actual computation of this information is invoked by configuring the according software components. | Diploma Thesis | Dagmer Beyer | |
Advanced 3D visualization for intra operative AR Dorsal surgery, e.g. for herniated intervertebral discs, is a medical domain that can benefit from new techniques for visualization of medical data. The vertebrae are covered by skin, large muscles, ribs, blood vessels and other viscera. Reaching the operation site during an intervention is a difficult and sometimes hazardous undertaking, and a direct view onto the vertebrae is nearly impossible. If possible, interventions concerning the spinal column are nowadays performed via keyhole surgery. Because there is just a small hole through the skin, the surgeon has to navigate the instruments according to the images created by the camera of the endoscope and shown on a monitor. Computer graphics effects, visualization techniques, a video see-through head mounted display (HMD) and an optical tracking system enable the stereoscopic in-situ representation of CT data. Data is aligned with the required accuracy and the surgeons get an intuitive view onto and "into" the patient. Unfortunately, this method of representing medical data suffers from a serious drawback. Virtual imagery, such as a volume rendered spinal column, can only be displayed superimposed on real objects. If virtual entities of the scene are expected behind real ones, like the spinal column inside the human thorax, this drawback leads to incorrect perception of the viewed objects with respect to their distance to the observer. The powerful visual depth cue of interposition is responsible for this misinformation about depth. This thesis aims at the development and evaluation of methods that improve depth perception in stereoscopic AR. Its intention is to provide an extended view onto the human body that allows an intuitive localization of visualized bones and viscera such as vertebrae, muscles, ribs and blood vessels. Therefore, various approaches were designed and implemented to create or intensify helpful depth cues within the workspace of a surgeon. | Diploma Thesis | Christoph Bichlmeier | |
Billiard Ball Tracking In collaboration with the Munich startup company Master-Q, a tracking system should be developed that tracks the position of billiard balls precisely. This tracking is essential for positioning the billiard cue of the automatic billiard machine developed by Master-Q. Since a most precise position of the balls is required for the billiard machine to work, the highest possible accuracy is the major goal of this thesis. Additionally, the positions of the balls must be assigned to the corresponding balls on the table. | Bachelor Thesis | Manuel Wolf | |
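A rough first localization of the balls on a top-down table image could be prototyped with a Hough circle transform, as in the sketch below; the file name, radius bounds and thresholds are assumptions that would have to be calibrated for the actual camera setup, and a real system would add sub-pixel refinement and color-based assignment of the detections to individual balls.

```python
import cv2
import numpy as np

img = cv2.imread("table_topdown.png")                 # assumed input image
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                           param1=100, param2=30, minRadius=10, maxRadius=25)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)    # draw detected ball outline
cv2.imwrite("balls_detected.png", img)
```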
Biological Image Processing The Hiwi position lasts for 6 months and starts as soon as possible. The job involves implementing software and testing new ideas for semi-automatic or automatic processing of microscopy images. | Hiwi ||
The use of biplane C-arm systems for instantaneous 3D flow estimation | DA/MA/BA ||
Implicit Guidance to Poses in Augmented Reality Developing a guidance system for a see-through stereo-view head mounted display. The system should guide to a position and a pose. A pose is specified through the orientation and the facing direction of the user. The system projects a path from the user's position to the target into the real world. By following this path the user is implicitly led to a specific pose without any extra effort. | Diploma Thesis | Martin Birkmeier | |
Blood Flow Simulation for Angiographic Interventions | DA/MA/BA | Max Hainz | |
Surgical Workflow Analysis: Representation and Application to Monitoring The goal of the workflow project is the representation and automatic monitoring of surgeries. An important application of workflow monitoring is to provide context-sensitive interfaces and information to the surgeon. Furthermore, the detection of exceptions and automated report generation are possible tasks related to workflow. Previous work by Ahmadi and Tautschnig led to advances in the offline and online recovery of surgical workflow. The aim of this work is to further research the meaning and creation of an average surgery, online recognition of phases and applications of surgical workflow analysis. | Diploma Thesis | Tobias Blum | |
Quantification of Deformation in Navigated Bronchoscopy In navigated bronchoscopy one particular challenge is the deformation of the lung during patient movement, breathing and coughing. By combining available data from CT/virtual bronchoscopy, the video stream from the bronchoscope and electromagnetic tracking, the deformation of the lung should be quantified. For the part of the lung which is reachable with the bronchoscope, a patient-specific map of deformation due to respiratory motion should be generated. Methods for compensation of this respiratory motion should be evaluated. Working knowledge of C++ is required. Knowledge of OpenGL and Qt is of advantage. This project is going to be located at the IFL lab and will be done in close collaboration with the Pneumology department of the Klinikum rechts der Isar. Depending on the progress of the project, clinical trials, e.g. for compensation methods, might be possible. If you have questions please contact Tobias Reichl. | DA/MA/BA ||
Video-based bronchoscope tracking for navigated bronchoscopy Bronchoscopy is a technique for diagnostic and therapeutic procedures in medicine. An endoscope is inserted through the mouth or nose of the patient and passed through the trachea into the bronchi. Until now the physicians mostly have to perform the examination with only what they see on a monitor and their knowledge of anatomical structures. Since the bronchi are very complex and have many branches, it is not easy to stay oriented, and so the duration of an examination increases at the expense of the patient. Therefore, a system which assists the physicians might be very useful. The main goal is the development of a bronchoscopic navigation system (BNS) that delivers the current position and orientation of the bronchoscope together with reasonable visualizations. In this thesis the tracking process of the bronchoscope, based on a 2D-3D registration problem, is presented. The 3D dataset consists of a virtual bronchial tree that is rendered from a given CT dataset with an iso-surface renderer. A flight with a virtual camera through the bronchial tree is simulated to obtain virtual endoscopic images. The 2D dataset consists of the image frames from the bronchoscopic camera. Since the pose of the virtual camera in CT space is known, the tracking process is reduced to a registration problem where the virtual image which is most similar to the current real image has to be found. With a given start position and an adequate optimizer and similarity measure, this registration step can be performed for each image frame of the real camera, and thus we obtain the current position and orientation of the bronchoscope for each frame. The performance and robustness of our tracking method was tested in experiments and evaluated. | Bachelor Thesis | Benedikt Schultis | |
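One tracking step of such a 2D-3D registration can be sketched as a pose optimization that maximizes the similarity between the real frame and the virtual rendering. The sketch below assumes a renderer callback `render_virtual_view(pose)` standing in for the iso-surface renderer mentioned above, uses normalized cross-correlation as the similarity measure and a derivative-free optimizer; the actual thesis may use different choices.

```python
import numpy as np
from scipy.optimize import minimize

def normalized_cross_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def track_frame(real_frame, render_virtual_view, pose0):
    """Find the virtual camera pose whose rendering is most similar to the
    current bronchoscope frame; pose is a 6-vector (translation, rotation)."""
    def cost(pose):
        return -normalized_cross_correlation(real_frame, render_virtual_view(pose))
    result = minimize(cost, pose0, method="Powell")   # derivative-free optimizer
    return result.x                                   # refined pose, start value for next frame
```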
Fiducial-Free Registration Procedure for Navigated Bronchoscopy | DA/MA/BA | Tassilo Klein | |
Calibration of Endobronchial Ultrasound Endobronchial Ultrasound (EBUS) has become a valuable tool for guidance during bronchoscopic interventions. In this thesis a calibration method and associated phantom are developed. This calibration allows for the estimation of the spatial relationship between the optical camera of a bronchoscope and the integrated ultrasound transducer, without the need for a tracking sensor. Knowledge about this spatial relation, in conjunction with a model of the perspective projection of the camera, enables the mapping of points from the ultrasound into the camera image, and of points from the camera images to lines in the ultrasound image. A possible application for this is the compounding of freehand 3D ultrasound directly within a CT volume. For this, the position of the camera is first determined via an image based camera to CT registration, and the spatial relation is then applied to find the position of the ultrasound plane in CT coordinates. The proposed method is based on an automatic pose estimation for the camera using a dot pattern. A method based on Z-fiducial pose estimation with hollow rubber tubes is used for the ultrasound plane. After the required geometrical properties of the phantom are approximated, a precise specification is developed, a phantom is built and the resulting geometry is measured with an optical tracking system. The achievable accuracy of the proposed calibration method is evaluated. The calibration is compared to another calibration method based on the established hand-eye and single-wall techniques, using an electromagnetic tracker for the latter method. The new phantom based calibration is found to be more robust, producing an average transformation with a smaller backprojection error than the established techniques. Calibrations can also be performed much faster with the phantom and do not require a tracking system, thus rendering it an interesting alternative to the established methods. | Diploma Thesis | Philipp Dressel | |
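Once the camera-to-ultrasound transform is known, mapping an ultrasound point into the camera image is a rigid transform followed by a pinhole projection, as in the minimal sketch below. It ignores lens distortion and assumes the ultrasound image plane is parameterized with z = 0; variable names are placeholders.

```python
import numpy as np

def us_point_to_pixel(p_us, T_cam_us, K):
    """Map a point on the ultrasound plane into the camera image.

    p_us:     (x, y) in mm on the ultrasound plane (z = 0 by convention).
    T_cam_us: 4x4 rigid transform from ultrasound to camera coordinates
              (the quantity estimated by the calibration).
    K:        3x3 pinhole intrinsic matrix of the bronchoscope camera.
    """
    p_h = np.array([p_us[0], p_us[1], 0.0, 1.0])
    p_cam = T_cam_us @ p_h          # point in camera coordinates
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]         # pixel coordinates (u, v)
```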
Interaction Concept for a Medical Augmented Reality System | DA/MA/BA | ||
C-arm Calibration | DA/MA/BA | Xin Chen | |
Radiation-free drill guidance for Interlocking of Intramedullary Nails using video-augmented-X-ray | DA/MA/BA | Benoit Diotte | |
6DOF C-arm Motion using Kinematic Analysis The standard mobile C-arm has five joint parameters. The relationship between the motion of the X-ray source and the joint parameters can be modeled using kinematic chains. Due to the missing sixth DOF, not all motions can be achieved by the standard mobile C-arm, even within its working space. Therefore, taking the motion of the operating table into account yields a C-arm system with 6 DOF. Forward kinematic analysis of the C-arm finds the position and orientation of the X-ray source, given the values of the joint variables of the C-arm. The inverse kinematics is used to obtain the joint parameters leading to the desired motion of the end effector (X-ray source). The main goal of this project is computing the joint parameters (including the operating table motion) using the inverse kinematics. With these obtained joint parameters, surgeons can quickly move the C-arm to reach any target within its workspace, which will simplify their operation and reduce unnecessary X-ray radiation. | DA/MA/BA | Rui Zou | |
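The forward kinematics part amounts to composing one homogeneous transform per joint, as in the minimal sketch below. The three-joint chain at the end is purely illustrative and does not reflect the real C-arm geometry or joint limits; an inverse kinematics solver would invert this composition numerically or analytically.

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans(x, y, z):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def forward_kinematics(joint_transforms):
    """Compose the kinematic chain and return the end-effector (X-ray source)
    pose in base coordinates."""
    T = np.eye(4)
    for Ti in joint_transforms:
        T = T @ Ti
    return T

# Illustrative 3-joint chain with placeholder values
pose = forward_kinematics([rot_z(np.deg2rad(30)), trans(0, 0, 700), rot_z(np.deg2rad(-10))])
```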
C-arm Motion Estimation without Radiation C-arm (X-ray source) motion estimation is a crucial step for many (computer aided) clinical applications, like X-ray image stitching, C-arm placement and so on. Most approaches are based on using X-ray images and thus introduce additional radiation. Our Camera Augmented Mobile C-arm system is capable of acquiring an X-ray and optical image overlay, so the pose or motion of the C-arm X-ray source can be computed using the information from the optical images. This saves a considerable amount of radiation. Our current method for pose/motion estimation uses a planar marker pattern. The shortcomings are the difficulty of integrating this planar pattern into the clinical procedure and the limited field of view of the camera. Therefore, using a flexible distribution of markers, in which markers can be located at arbitrary 3D positions without any constraint, makes integration much easier and can provide an unrestricted field of view. However, using such a flexible distribution of markers means the 3D position of each marker is unknown, and thus the estimated translation is only determined up to a scaling factor. We developed a novel method to recover this scaling factor. The main goal of this student project is to implement, evaluate and integrate the method. | DA/MA/BA | Maximilian Springer | |
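For the case where marker 3D positions are known (e.g. the planar pattern), camera pose recovery is a standard PnP problem, as sketched below with OpenCV. With the flexible marker distribution described above, the 3D positions would first have to be reconstructed and the translation scale recovered separately; this sketch only covers the known-geometry case, and all names are placeholders.

```python
import cv2
import numpy as np

def estimate_camera_pose(marker_points_3d, marker_points_2d, K, dist_coeffs=None):
    """Recover the camera (and hence C-arm) pose from detected markers.

    marker_points_3d: (N, 3) known 3D marker positions.
    marker_points_2d: (N, 2) their detections in the optical image.
    K:                3x3 camera intrinsic matrix.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(marker_points_3d.astype(np.float64),
                                  marker_points_2d.astype(np.float64),
                                  K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)       # rotation matrix from rotation vector
    return R, tvec
```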
Visual Marker Based User Interface for the Operating Room In the last decades, many advanced computer-based navigation solutions, e.g. using external camera tracking for navigation and augmented reality visualization, and intra-operative visualization systems, e.g. the Camera Augmented Mobile C-arm (CamC), have been introduced and deployed in the operating room (OR) for surgical practice. However, very few computer-based systems have succeeded in becoming clinically accepted, and even fewer have been integrated into daily clinical routine. Interactions between surgeons and these advanced, complex computer aided intervention (CAI) systems play an important role in successfully deploying systems in the OR. Traditional computer-user interaction hardware, e.g. mouse and keyboard, is very difficult and impractical to use, since it can hardly be sterilized. A cheaper and practical solution based on visual marker detection is proposed for the user interface of CAI systems. In this project, a robust and user-friendly visual-marker-based user interface is to be designed and developed. The developed user interface is integrated into a CamC system and its functionality is evaluated with the CamC system. | IDP | Oleg Simin | |
Automatic detection and reconstruction of catheter-tips in X-ray images | DA/MA/BA | ||
Automatic definition of cardiac axis in emission tomography Emission tomography (PET and SPECT) comprises functional imaging modalities showing the three-dimensional distribution of radiolabelled molecules in the body. A major application is cardiac imaging, mainly to evaluate myocardial perfusion. Cardiac images are reoriented before their clinical reading in order to have short- and long-axis views of the left ventricle. The reorientation is currently performed manually by defining the cardiac axis on two different views. The goal of the project is to develop and evaluate automatic methods to define the cardiac axis based on the image content and prior anatomical knowledge. | DA/MA/BA ||
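A very simple automatic baseline, assuming a rough segmentation of the left ventricle is available, is to take the dominant principal component of the segmented voxel positions as the long axis; the sketch below illustrates this. A real method would add anatomical priors and robustness to uptake outliers, and all names here are hypothetical.

```python
import numpy as np

def principal_axis(mask, spacing=(1.0, 1.0, 1.0)):
    """Long-axis estimate as the dominant principal component of segmented voxels.

    mask:    3D boolean array marking left-ventricular voxels.
    spacing: voxel size in mm along each axis.
    """
    coords = np.argwhere(mask) * np.asarray(spacing)   # (N, 3) positions in mm
    center = coords.mean(axis=0)
    cov = np.cov((coords - center).T)                   # 3x3 covariance of positions
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]               # direction of largest variance
    return center, axis / np.linalg.norm(axis)
```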
Optimization of image reconstruction and analysis workflows for cardiac Positron Emission Tomography integrating multidimensional physiological gating | DA/MA/BA | Brian Jensen | |
Image-based Quantification of Patella Cartilage using MRI - Evaluation of Novel Methods for Segmentation, Volume and Thickness Estimation Osteoarthrosis (OA) is one of the major socio-economic burdens nowadays. There is a strong need for non-invasive, accurate, and efficient tools in order to support the clinical work of diagnosis and therapy. MRI technology provides superb soft tissue contrast and high resolution three-dimensional image data; MRI can visualize cartilage and other articular tissues directly. The combination of accurate image segmentation methods and tools for volume and thickness quantification is a promising approach which can deliver significant parameters for diagnosis and therapy. In the cartilage segmentation literature, no perfect method for patella cartilage segmentation has been established. In this project, we evaluate several different segmentation methods - atlas-based, shape models, semi-automatic and manual - to see their respective advantages and deficiencies. We then suggest how to use the different methods in appropriate situations. We also present methods to calculate the volume of the cartilage instead of counting voxels, increasing the accuracy to the sub-voxel level. The reconstructed model conforms to the anatomic shape of the patella cartilage. A 3D visualization of the cartilage is made based on triangulated faces, and minimum Euclidean distances from the vertices of the triangles of the cartilage-bone interface to the triangles of the outer cartilage surface are calculated to determine the cartilage thickness distribution. The proposed methods are tested and evaluated on a large number of data sets from healthy volunteers as well as patients suffering from OA. The cartilage volume and thickness of the pathologic and healthy data sets are analyzed to derive statistical characteristics and support diagnosis in clinical work. | Master Thesis | Shanshan Cui | |
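The thickness computation described above can be approximated with a nearest-neighbour query between the two triangulated surfaces; the sketch below simplifies the vertex-to-triangle distances to vertex-to-vertex nearest distances using a k-d tree, which is an assumption and slightly overestimates thickness on coarse meshes.

```python
import numpy as np
from scipy.spatial import cKDTree

def thickness_map(bone_interface_vertices, cartilage_surface_vertices):
    """Per-vertex cartilage thickness as the minimum Euclidean distance from
    each bone-cartilage interface vertex to the outer cartilage surface.

    Both inputs: (N, 3) arrays of mesh vertex positions in mm.
    """
    tree = cKDTree(cartilage_surface_vertices)
    distances, _ = tree.query(bone_interface_vertices)   # nearest-neighbour distances
    return distances                                      # one thickness value per interface vertex
```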
Longitudinal Analysis of the Cerebral Vasculature in Diabetic Rats | DA/MA/BA | ||
Closed-Form Solutions to Computation of Multiple-View Homographies Several fields of application in computer vision require 2D homography estimation based on point correspondences between multiple images, which relate the 2D projective transformations of each image to a global reference coordinate frame. Due to noise in the measured point coordinates and possible false matches among the point correspondences, it is impossible to find an exact solution for these homographies. Therefore - under certain assumptions about the type of noise - the aim is to find the best approximations, notably Maximum Likelihood Estimators (MLE), for those homographies. Unfortunately, the task of determining these MLEs involves, amongst other processing steps, the optimization of a non-linear cost function which requires initialization. The quality and accuracy of the initialization also affect the quality and accuracy of the final result of the optimization. Hence, improving the initialization technique is of significant interest when it comes to high-precision mosaicing. So far, common initialization techniques either apply threading-type methods to parallax-free scenarios or they apply batch-type methods to sets of images with differing camera centers, although the concept of batch techniques allows the derivation of closed-form solutions. These closed-form solutions mainly differ from the threading-type methods in that they discard a whole level of detail from a statistical point of view in order to linearize the computation of an initial guess, instead of heuristically discarding known and interpolating missing information. This thesis introduces some of these closed-form solutions, investigates their behaviour in comparison to a simple threading-type solution with synthetic experiments, and proves practical feasibility with a real example. | Diploma Thesis | Pierre Schroeder | |
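For context, the standard linear (DLT) estimate of a single pairwise homography, which is a typical ingredient of such initializations before the non-linear Maximum Likelihood refinement, can be sketched as below. Hartley normalization of the points is omitted here for brevity, and the function name is hypothetical.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform estimate of a 2D homography H (dst ~ H src)
    from N >= 4 point correspondences.

    src, dst: (N, 2) arrays of corresponding image points.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)      # right singular vector of smallest singular value
    return H / H[2, 2]
```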
Dynamic analysis of coronary arteries in a fluoroscopy sequence | DA/MA/BA | ||
Visualization of activity parameters with an avatar suitable for elderly people The demographic trend of today's society and the associated problems lead to big challenges for everyone. The aim of the research project Fit4Age, funded by the Bayerische Forschungsstiftung, is to allow elderly persons to take part in social life for a longer time than nowadays. The exploratory focus of this subproject is to maintain and to increase the physical fitness of senior citizens. In this context, self-motivation to stay physically active plays a crucial role. Interactive training techniques can help to add playful elements to the physical training and make it substantially more attractive in the long run. One possibility to achieve that aim is the coupling of specially designed computer games for motivation and efficiency control with sensors for the acquisition of motion data. Within the scope of this thesis a system is developed which is specially designed for use by elderly people. Its purpose is to motivate them toward more physical activity using the Tamagotchi concept, while guiding the users at the same time. The assessment of the training will be based on a specially prepared training schedule. In this context, prototypes will be developed and evaluated to assess their user acceptance. Furthermore, the evaluation should help to find out which type of virtual character should be used. In addition, it will be investigated how the user interface should be designed on a mobile system to be simple and intuitive for the user. For this purpose, different alternatives for character types (pawns) and menu navigation are compared. For each of the three presented character types, one instance will be chosen, and a 3D model of it will be created, animated and integrated into the prototypes. | DA/MA/BA | Tayfur Coskun | |
Counting data on a map on a multitouch table Within the project SpeedUp (http://www.speedup-projekt.de/) we develop a map application for mass casualty incidents. On this map application the user should be able to count the data on the map in a very short time. The task of the thesis is to develop different variations for counting the data on the map. Finally, the different implementations should be evaluated. TISCH, a multitouch table which includes an FTIR multitouch sensor and shadow tracking developed by Florian Echtler, should be used as hardware. As development framework, Microsoft Blend and C# should be used. | DA/MA/BA | Moritz Neugebauer | |
Coupled Curves Segmentation The term coupled curves refers to two or more boundaries that are coupled biomechanically or anatomically. Some examples are the luminal and outer borders of the vessel in intravascular ultrasound images, the myocardial borders of the heart in different cardiac modalities like echocardiography and MRI, retinal layers in optical coherence tomography of the eye, etc. Coupling these boundaries and taking into account their interdependency efficiently assists the segmentation of weaker boundaries with the guidance of stronger ones. This project is an extension of our recently developed segmentation approaches, modifying the formulation to segment coupled curves. The primary deliverable is the segmentation of double boundaries, which can be followed by the secondary deliverable, i.e. segmentation of multiple boundaries, depending on the performance of the researcher. The platform of the project is visual programming with MeVisLab. The preferred coding language is C++; nevertheless, MATLAB can be used for development. There are plenty of applications for the segmentation of coupled curves in medical image processing, and the project offers a significant contribution with high impact for the community. | DA/MA/BA ||
Customizable Anatomical Landmark Detection in 3D CT | DA/MA/BA | TDB | |
Deformable Registration Methods This thesis implements a variational method for deformable registration and applies it to different medical problems. First we explore different approaches to the problem of deformable registration of medical images. After comparing feature-based and intensity-based approaches, we concentrate on the intensity-based variational method for deformable registration. The main reason is the inherent difficulty of the feature-based methods, i.e. feature extraction and feature matching. The intensity-based variational method presents a general framework that can be adapted using a variety of similarity measures and regularization terms. This way, it can be adapted to a wide range of medical registration problems for different imaging modalities. We present an implementation using efficient numerical methods. Finally, the implemented method is applied to different medical applications. | Diploma Thesis | Darko Zikic | |
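To make the intensity-based variational idea concrete, the sketch below performs a minimal 2D registration with an SSD similarity term and Gaussian smoothing of the displacement field as a demons-like stand-in for the regularization term. It is only an illustration under these assumptions, not the thesis implementation, and omits multi-resolution handling and stopping criteria.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def register_ssd(fixed, moving, iterations=100, step=0.5, sigma=2.0):
    """Gradient descent on the SSD measure with Gaussian regularization.

    fixed, moving: 2D float images of equal shape.
    Returns the displacement field u of shape (2, H, W).
    """
    grid = np.stack(np.meshgrid(np.arange(fixed.shape[0]),
                                np.arange(fixed.shape[1]), indexing="ij")).astype(float)
    u = np.zeros_like(grid)                              # displacement field
    for _ in range(iterations):
        warped = map_coordinates(moving, grid + u, order=1, mode="nearest")
        diff = warped - fixed                            # SSD residual
        gy, gx = np.gradient(warped)
        u[0] -= step * diff * gy                         # steepest-descent update
        u[1] -= step * diff * gx
        u[0] = gaussian_filter(u[0], sigma)              # smoothness regularization
        u[1] = gaussian_filter(u[1], sigma)
    return u
```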
Deep Context Learning for Image Understanding Machine learning is widely used in various image understanding applications, e.g., object detection, segmentation and depth reconstruction. This project aims at using deep learning to incorporate additional constraints so that the estimation result is more globally optimal. The student will build on state-of-the-art computer vision and image processing work as well as recent work by Dr. Lichao Wang. | Master Thesis ||
Joint Master Thesis with the University of Miami: The Combination of Medical Ultrasound and Deep-Sea Sonography The thesis will be supervised by Prof. Nassir Navab at the chair for computer aided medical procedures (CAMP) at TUM and Prof. Shahriar Negahdaripour at the college of Engineering at the University of Miami. The student will start at CAMP to learn the basics of medical ultrasound. A certain amount of this time will be spent at the Klinikum rechts der Isar. After this training period, the student will continue the thesis at the University of Miami and work on optic-acoustic imaging for underwater search and inspection. The aim is to combine knowledge from these two separate fields of research, for example by investigating problems in sonar applications and checking whether similar solutions have been proposed in the medical ultrasound community that could be adapted. | DA/MA/BA | Sarah Hempel | |
Deformable Registration of 3-D ultrasound and CT for gastrointestinal interventions Since pre-operative images and intra-operative patient posture can differ significantly, it is desirable to align pre-operative images with intra-operative data, e.g. three-dimensional ultrasound. Recently, a method has been proposed to compute this registration in a computationally practical and fully automatic manner on the graphics card (GPU). This work will be done in close collaboration with the gastroenterological department of hospital rechts der Isar. A three-dimensional ultrasound machine and an electromagnetic tracking system are available at our research lab at hospital rechts der Isar. If you have questions please contact Tobias Reichl. | DA/MA/BA ||
Depth Prediction from Structured Light using Fully Convolutional Neural Networks Structured light sensors consisting of an infrared camera and projector are widely used in real-time applications to generate depth maps. However, the obtained depth maps sometimes lack accuracy. Recently, random forests were deployed to solve this problem by learning how to estimate the depth from a projected infrared pattern [1]. This master's thesis aims at studying deep learning approaches to predict depth maps from infrared images with projected patterns. For this purpose, we will start with a Fully Convolutional Residual Neural Network (FCRN) [2] that learns to regress a depth map from a single RGB image. The first part of the thesis will provide a tool for transcribing architectures (in particular FCRN) between deep learning frameworks, namely from MatConvNet to TensorFlow. The second part will focus on learning depth maps from single IR images instead. Further modifications will be developed, aiming to surpass the accuracy of standard consumer depth cameras in real-time execution. | DA/MA/BA | Helisa Dhamo | |
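The dense-regression setup can be illustrated with a toy fully convolutional encoder-decoder in TensorFlow/Keras, as sketched below. This is a drastically reduced stand-in for the FCRN architecture of [2] (no residual blocks, no up-projection layers), intended only to show the input/output shapes and loss choice; all layer sizes are assumptions.

```python
import tensorflow as tf

def toy_depth_regressor(input_shape=(480, 640, 1)):
    """Toy fully convolutional network regressing a dense depth map from a
    single-channel (e.g. IR) image."""
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.UpSampling2D(2)(x)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.UpSampling2D(2)(x)
    depth = tf.keras.layers.Conv2D(1, 3, padding="same")(x)   # one depth value per pixel
    model = tf.keras.Model(inputs, depth)
    model.compile(optimizer="adam", loss=tf.keras.losses.Huber())  # robust regression loss
    return model
```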
Depth-based hand tracking for RGBD Augmented C-arms | DA/MA/BA | ||
Improving Depth Perception for Video-Based, Intraoperative Augmented Reality Systems. Augmented reality (AR) is still a relatively young field of research. It concerns the extension of the view of a real scene with virtual objects. Virtual objects can be placed at 3D coordinates relative to the real world. This is highly attractive, since almost any data from measuring devices can be represented in the real world, which makes AR applicable to a wide range of applications in the automotive, media, civil, military and medical industries. For this reason it is a highly interdisciplinary field combining a variety of subjects, ranging from mathematics to psychology. Medical AR is the target application of this master thesis. This branch of augmented reality is getting closer and closer to real-life applications, offering help in different ways during various types of surgeries and enabling ways of performing surgeries that were not feasible before. The best and newest mathematical methods and algorithms are used for calculating the relations between the real scene, video devices and virtual objects. But there still exists a serious bottleneck: correct and correctly perceivable visualization. In industrial or mechanical AR, misleading visualization might cause serious problems; in medical AR used for intra-operative guidance, it is life-threatening. Medical AR in most cases has to show internal body structure, i.e. the virtual representation of anatomy from different data sources. Here the keyword is "internal". Different kinds of medical imaging, such as readings from a magnetic resonance (MR) scanner, computed tomography (CT), or data received from a gamma probe, are sources of this virtual information. But this information is not always intra-operatively available, and when available, the data sets are usually small in volume. For example, in the case of vertebroplasty, a minimally invasive spine surgery, a CT scan of a small spine segment covering 3 vertebrae is taken. The newest volume rendering techniques allow improving the perception of depth. However, in addition to the fact that they work mostly for large datasets, the rendered volume is still visually superimposed over the real body, even though all spatial relations and positions are correct. In the case of small datasets even these techniques are not useful at all, since the simpler and smaller the dataset, the fewer depth cues can be created with the rendering algorithm. Thus, new ways of improving depth perception should be invented, old ones combined, and this ensemble applied to a maximum extent, getting closer to everyday human perception. This task leads deep into the science of human perception in general and depth perception in particular. The current master thesis discusses all relevant depth cues and the extent to which they can be used in medical AR to improve the perception of depth. Reasons why other depth cues are not useful are also discussed. The thesis also describes a novel method of combining certain cues and gives examples of how this combination can be applied depending on the availability of different input data from the real scene. This new approach is compared to previously used approaches. Results from a user study are analyzed regarding the enhancement of intuitive depth perception. | Master Thesis | Maxim Kipot | |
Development of a Disposable Tracking Target for Medical Application | Master Thesis | Berkin Ergün | |
Building a Gesture-Based Information Display The Siemens SiViT is a gesture-based information display for use in public settings. One of these systems has been donated to us; however, most of the hardware and software is no longer up to current standards and needs to be replaced. Additionally, suitable information and presentation schemes for the finished system should be selected. The resulting system should support multiple pointers (fingertips), possibly from multiple persons. | DA/MA/BA | Nikolas Dörfler | |
A hybrid method to compute the 3D dose in tissue samples during micro-CT image acquisition The aim of the project is to develop a software platform to compute the 3D distribution of dose deposited in tissue samples during micro-CT image acquisition performed with monochromatic synchrotron radiation. The dose can be divided into two components: the primary dose deposited directly by the incident X-ray beam and the secondary dose deposited by scattered and fluorescence X-rays. Previous work carried out in the context of synchrotron stereotactic radiation therapy has shown that a deterministic algorithm can be used to compute the primary dose map [1], whereas a hybrid approach (combination of Monte Carlo and deterministic calculations) is well suited to the calculation of the secondary dose [1,2]. The aim of the master internship is to adapt this method to assess the dose deposition that is inevitable (but optimizable) in CT image acquisition. Special care will be taken to relate optimal hybrid parameters to the geometrical/physical CT set-up. The software development will be carried out in the framework of the Geant4/GATE Monte Carlo code. The final goal is to obtain a tool that permits calculating doses in biological samples, but that can also be used to simulate the experiment and optimize the experimental procedure. | DA/MA/BA ||
Development of a model for computer-aided driver behaviour simulation | DA/MA/BA | Severina Popova | |
AR in Cars for Assistance at Intersections With the help of Augmented Reality it is possible to present different information in a contact-analog Head-Up Display (HUD). With the increasing use of sensor systems in automobiles, a large amount of information becomes available. An intelligent, situation-dependent display of spatial information is necessary for the driver. As part of this work, you should investigate and develop different display strategies and concepts for the HUD and integrate them into the fixed-base driving simulator of the Institute for Ergonomics, Mechanical Engineering Faculty (Fakultät für Maschinenwesen). | DA/MA/BA | Markus Duschl | |
Occlusion Handling for AR in Dynamic Camera Scenarios | DA/MA/BA | Lorenzo Pirritano | |
Dynamic Calibration of a Localization System Based on a Wireless Technology Localization systems are increasingly used in hospitals for patient monitoring or device management. In this work, an existing localization system based on a wireless technology such as WLAN or RFID is to be optimized. Currently, a position is computed by the localization system from signal strength measurements. This computation is based on a model that uses calibration measurements, which so far are recorded only once. Interference and environmental influences lead to fluctuations in the signal strength measurements and in the computed position. In this work, a dynamic calibration is to be investigated that performs these measurements automatically at certain time intervals and incorporates them into the model. | DA/MA/BA ||
Efficient Object Detection using Fully Convolutional Neural Networks In recent years, Fully Convolutional Networks (FCNs) have set the state of the art in various dense prediction problems, both classification and regression. As part of this thesis, we investigate Fully Convolutional Residual Networks for the problem of object detection in RGB and/or RGB-D images. Our approach combines two complementary sources of information in order to detect the class and bounding box of objects in the image, given a specified set of known classes. Specifically, we aim to jointly perform semantic segmentation, assigning each pixel in the image to its corresponding class, and offset regression, predicting for each pixel a vector that points to the center of its object, with a unified network. This method additionally holds the potential of running in real time. We will compare the accuracy and efficiency of this approach with other recent object detection methods. | Master Thesis | Bharti Munjal | |
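The joint formulation can be illustrated with a two-headed fully convolutional module: a shared backbone feeding a per-pixel classification head and a per-pixel 2D offset regression head. The PyTorch sketch below is only an illustration of this structure under assumed layer sizes, not the thesis architecture (which is residual and much deeper).

```python
import torch
import torch.nn as nn

class SegOffsetNet(nn.Module):
    """Shared backbone with (a) per-pixel class logits and (b) per-pixel
    (dx, dy) offsets pointing to the object center."""
    def __init__(self, num_classes, in_channels=3, feat=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(feat, num_classes, 1)   # semantic logits per pixel
        self.offset_head = nn.Conv2d(feat, 2, 1)          # offset vector per pixel

    def forward(self, x):
        f = self.backbone(x)
        return self.seg_head(f), self.offset_head(f)

# Training would combine cross-entropy on the logits with an L1 loss on the
# offsets of foreground pixels.
model = SegOffsetNet(num_classes=21)
logits, offsets = model(torch.randn(1, 3, 128, 128))
```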
Egozentrische 3D-Visualisierung von Fussballtaktiken in einer Datenbrille -- Ego-centric 3D Visualization of Soccer Tactics in an HMD In soccer, many tactical maneuvers are known and studied. Trainers repeatedly discuss such tactics with their players. In particular, they discuss the individual tactical behavior of players after a match. In collaboration with master students of Prof. Veit Senner (Fakultäten für Maschinenwesen and Sportwissenschaften), this thesis will develop Virtual-Reality-based schemes to present tactical information to players in ego- and exocentric visualizations. | DA/MA/BA ||
Error Classification and Propagation for Electromagnetic Tracking | DA/MA/BA | Julian Much | |
Error Classification for Electromagnetic Tracking | DA/MA/BA | Julian Much | |
Game Development on the Topic of Nutrition (Gameentwicklung zum Thema Ernährung) | DA/MA/BA ||
Endless Marker Tracking In the scenario of wide-area tracking (usually larger than 5x5 square meters), the classic marker tracking approach to determine a camera's pose is limited by the size of the markers and the process of registering them within the large area. Following the ruler principle, a new binary marker shall be designed to overcome these shortcomings. To this end, algorithms for marker detection, registration and pose determination must be implemented and evaluated. The marker should consist of repetitive patterns providing relative information about the camera's pose in relation to the complete marker. | Hiwi | Yasir Latif | |
Endoscopic Cluster Matching for Optical Biopsy Localisation Diagnosis and treatment of oesophageal cancer, the most rapidly increasing cancer in the Western world, requires periodic examination of the tissue under endoscopic guidance together with systematic biopsies. During these examinations a flexible endoscope is inserted through the mouth of the patient into the oesophagus and biopsies (samples) are taken from suspicious regions of the tissue for pathologic investigation. Recent developments in bio-photonics resulted in a new technology called probe-based Confocal Laser Endomicroscopy (pCLE) that enables real-time visualisation of cellular structures during the endoscopy without removing any tissue sample. A fibered confocal microprobe is introduced through the instrument channel of the endoscope to perform so-called "optical biopsies". Although this procedure has several advantages over the conventional biopsy, it also introduces the new challenge of retargeting the optical biopsy locations in the follow-up examinations. Due to the lack of any markings such as a scar left on the tissue, re-targeting the optical biopsy locations becomes a challenge for the endoscopic expert. Recent work towards guiding the endoscopic expert in re-targeting optical biopsy sites presented a very promising first step towards a complete framework for optical biopsy localisation. The aim of this project is to advance these techniques with an automatic cluster matching algorithm in order to develop a fully automatic system for optical biopsy re-targeting. | DA/MA/BA ||
Error Propagation for Augmented Reality Assisted Navigated Pedicle Screw Implantation | Diploma Thesis | Irene Faure de Pebeyre | |
Evaluation of Realtime Object Detection and Tracking Method for Defining an Optimal Performance Algorithm | DA/MA/BA | Ya Chen | |
X-ray Stitching Evaluation Based on DRRs: Towards Clinical Trial After extensive preclinical testing, CAMC has finally been moved into the operating room of the Innenstadtklinikum at Nußbaumstraße. Many patients were treated successfully with CAMC navigation support (see CAMCInOR). The clinical investigation also includes X-ray image stitching. X-ray image stitching is a way to create panoramic X-ray images that have exceptionally wide fields of view. Panoramic X-ray images are very helpful for long bone fracture reduction surgery since they can show the whole bone structure with its entire mechanical axis in a single image. There are many error sources in X-ray image stitching based on CAMC. Before the real clinical study, evaluating all factors affecting X-ray stitching is necessary. This gives an overview of potential error sources and their effects on X-ray stitching, which will be used as guidance for applying our X-ray stitching in real clinical cases. Moreover, the obtained results can give us hints on how to improve the X-ray image stitching methods based on CAMC. For the evaluation, we simulate the C-arm motion based on the kinematic model of the CAMC system, and then generate artificial fluoroscopy from real X-ray CT data. The generated artificial fluoroscopy will be used for X-ray image stitching. | DA/MA/BA ||
Evaluation of an Alternative Optical Tracking System for Verifying the Accuracy of a Localization System Localization systems are increasingly used in hospitals for device management or patient monitoring. An automated test procedure was developed to verify the accuracy of a localization system. This test procedure is based on optical tracking cameras and a programmable robot. The robot travels a defined path carrying a transmitter module of the localization system and is tracked both by the localization system and by the optical tracking system. Finally, software compares the positions computed by the localization system with those recognized by the optical tracking system. Since the robot can also move from one room to another, it is important that the test procedure works across multiple rooms and that the optical tracking system detects the robot most of the time. In this work, the optical tracking system is to be replaced by a more cost-effective alternative, e.g. wide-angle cameras or Kinects, and evaluated. | Master Thesis ||
Optimization of models in Freehand SPECT reconstructions Nuclear imaging is a commonly used tool in today's diagnostics and therapy planning, giving necessary information about metabolic functions in the patient's body. It employs radioactive tracers which are injected into the patient, where they follow a specific metabolic pathway. With the use of radiation detectors these tracers can then be imaged; for interventional use, however, the current methods suffer from drawbacks which limit their application. Tomographic systems like PET (Positron Emission Tomography) and SPECT (Single Photon Emission Computed Tomography) have too high acquisition times and are too bulky for intraoperative use, while nuclear probes and gamma cameras only provide 1D and 2D information. Over the last years a new imaging modality, called freehand SPECT, was developed at our chair to overcome these shortcomings. Freehand SPECT combines a nuclear probe with an optical tracking system to obtain its position and orientation in space synchronized with its reading. This information can be used to compute a tomographic reconstruction of an activity distribution. This is done by discretizing the volume of interest into voxels x_i with unknown activity values a(x_i), so that a probe reading r can be regarded as a linear combination of the contributions c(x_i) of each voxel to that reading: r = Σ_i c(x_i) · a(x_i). A set of measurements then yields a system of linear equations, and by inverting that system the activity in each voxel is obtained. For the inversion, the contribution of each voxel to each measurement is needed. That contribution is approximated by models of the detection physics. With a known ground truth of the activity distribution, the system of linear equations can also be solved for the contribution of a voxel to the reading, by using the existing models as boundary conditions. The computed contributions could then in turn be used to optimize the models used for reconstruction, which is the goal of this work. | DA/MA/BA ||
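For intuition, the forward model above stacks into a system matrix C whose rows are the voxel contributions for each reading, and reconstruction solves r = C a for the activities. The sketch below solves a small dense instance with non-negative least squares; in practice iterative schemes such as MLEM/ART on large sparse systems are used, so this is only a small-scale illustration with hypothetical names.

```python
import numpy as np
from scipy.optimize import nnls

def reconstruct_activity(C, r):
    """Solve r = C a for voxel activities a with a >= 0.

    C: (M, V) system matrix -- contribution c(x_i) of voxel i to reading j,
       given by the detection model and the tracked probe poses.
    r: (M,) vector of probe readings.
    """
    a, residual = nnls(C, r)     # non-negative least-squares solution
    return a
```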
Automatic fiducial and implant segmentation using C-arm fluoroscopy | DA/MA/BA | ||
Fast Contour Segmentation Recent advances in image processing show impressive results in deformable contour tracking and object segmentation. These novel approaches should be implemented and evaluated for the purpose of human contour tracking in real-time scenarios. The overall aim of this thesis is to evaluate and compare at least two different methods and show the advantages and disadvantages of their applicability for AR applications. | DA/MA/BA ||
Improving Augmented Reality Table Top Applications with Hybrid Tracking Augmented Reality (AR) applications enrich the real world by augmenting it with virtual objects. In order to achieve this fusion of real environment and virtual content, Augmented Reality setups utilize common graphical output hardware like Head Mounted Displays or Tablet PCs and tracking technologies to estimate the position and orientation of tracking targets. Frequently used vision-based techniques like Natural Feature Tracking are sensitive to camera movements: features have to be found again in subsequent video frames. The basic idea of this work is to adapt the search area for features to the change in orientation of the user interface hardware. This work is a first step toward solving this problem for a special class of Augmented Reality applications, Table Top Augmented Reality. The work provides a hybrid tracking approach to bring the tracking and the user's movement context together. Orientation information given by an additional tracker is used for a dynamic runtime configuration of the vision-based tracking routine, a texture tracking algorithm. To accomplish this, a special software architecture is proposed. After introducing the basic ideas of table top Augmented Reality, we show the design, execution and evaluation of a user study. The goal is to find an approximation for a linear mapping between user motion and the search window of the texture tracking routine. Applying statistical techniques, we show that it is possible to derive such a mapping. This mapping can be expressed by a simple linear function with the change of orientation as input parameter. We also show that the user behavior is related to the performed tasks. We identify tasks for Table Top AR and discuss implications for the tracking routine. | Diploma Thesis | Felix Löw | |
Design of a Planning Tool for Port Placement in Robotically Assisted Minimally Invasive Cardiovascular Surgery In minimally invasive cardiovascular surgery only small incisions are used to perform an operation. A split of the breastbone or the use of the straining heart-lung machine can be avoided, resulting in fewer surgical complications than traditional invasive surgery. A reasonable realization of these operations was facilitated by master-slave systems for teleoperation. While the surgeon controls the teleoperator from a master console, the arms of the slave robot imitate the surgeon's actions on the patient. For a successful teleoperator-based intervention it is crucial to find the optimal locations, also referred to as ports, for the incision sites of the teleoperator arms. Traditionally, planning the port placement was dependent on the surgeon's experience in interpreting previously acquired two-dimensional image stacks such as computed tomography slices. This work introduces a planning tool for port placement that supports the surgeon in segmenting cross-sectional axial slices acquired by imaging modalities, reconstructing a three-dimensional model of the patient from these slices, and simulating the cardiovascular intervention in a virtual environment. For the virtual simulation of an intervention, the teleoperator arms can be interactively placed and moved within the patient's model in such a way that they intersect neither each other nor vitals and bones. Therefore, the planning tool's graphical user interface provides various visualization, planning, and validation options. Collision detection techniques and a virtual endoscope view support the verification of the port placement process. All planned data can be exported to an intra-operative navigation tool. | Diploma Thesis | Marco Feuerstein |
A Navigation Tool for the Endovascular Treatment of Aortic Aneurysms - Computer Aided Implantation of a Stent Graft More and more aneurysms of the thoracic and abdominal aorta are treated minimally invasively by implanting a stent graft inside the aorta. However, without opening the patient the clinicians rely on imaging techniques, e.g. computed tomography, magnetic resonance imaging and X-ray, in order to visualize the region of interest and for intraoperative navigation. Unfortunately not all imaging modalities are available during the operation; some are only available preoperatively. In 2003 the STENT project explored the prospects of using computer aided imaging techniques for preoperative planning and intraoperative navigation. Within the project a first prototype application was developed, but due to time constraints it never made it to the operating room. This project is the continuation of the 2003 STENT project and its goal is to provide the clinicians with a preoperative planning tool and an intra-operative navigation and visualization tool. Preoperatively acquired computed tomography images can be used for segmentation, visualization and metric measurement of the aorta and aneurysm. Intraoperatively taken X-ray images are aligned with the CT data, thus aiding navigation by supplying a three-dimensional visualization of an anatomically detailed model, metrics, and current as well as planned locations of the stent graft. | Diploma Thesis | Konstantinos Filippatos |
BA/MA/DA 3D Particle System on a Multitouch Table A Particle System for Interactive Visualization of 3D Flows developed by the chair for computer graphics and visualization should be integrated on a multitouch table. | DA/MA/BA | Waltraud Sichart | |
Freehand 3D endosonography Gastrointestinal endoscopic 2-D ultrasound is routinely used for diagnosis of the pancreas. However, it is desirable to record 3-D ultrasound volumes from sequences of 2-D images, together with position and orientation information for each frame. Such volumes can be visualised in a more comprehensive and repeatable manner, e.g. viewed from any angle. This work will be done in close collaboration with the gastroenterological department of hospital rechts der Isar. A three-dimensional ultrasound machine and electromagnetic tracking system are available at our research lab at hospital rechts der Isar. If you have questions please contact Tobias Reichl. | DA/MA/BA | ||
3D Freehand Ultrasound Compounding for Ultrasound Doppler Angiography Ultrasound navigation is a desirable technique for guided neurosurgery. Blood vessels are among the most important anatomic structures which the surgeon needs for orientation. In 3D freehand ultrasound, we are already able to make 3D scans by sweeping an optically tracked ultrasound probe and combining many 2D slices with their respective poses into a 3D volume. In order to correctly visualize blood vessels, we would like to investigate improved reconstruction (or compounding) of blood vessels which are visualized with Doppler ultrasound. The improved images can be greatly useful for the surgeon to see the anatomy correctly and to gain improved spatial context within a 3D ultrasound scan. | DA/MA/BA | ||
Display Technologies for Augmented Reality Support in Logistics As the world becomes smaller, the complexity of logistic networks increases. With it, highly adaptive systems are needed to withstand large-scale rapid changes with influences in the logistics chain all the way down to the internals of warehouses. To this end, several techniques such as pick-to-light and pick-to-voice have been developed to assist workers in the common picking task, both to increase efficiency and to decrease errors. Augmented Reality systems might have the potential to help the workers even more when adapting to new environments, but it has yet to be seen how the different display technologies affect worker performance. This is the scope of this thesis: by exploring the characteristics of the displays, some of them were chosen to have their usability in a picking scenario evaluated in experiments. Therefore an experimentation framework based on DWARF was developed which allows rapid switching between different displays while measuring user performance during different tasks from the picking scenario. Finally, two experiments were conducted to measure the wayfinding times related to navigation between shelves and to picking items from a shelf. | Master Thesis | Troels Frimor |
Design and optimization of collimators for handheld gamma cameras In nuclear medicine, the choice and configuration of the detector system plays a crucial role in defining the quality of the final image. In particular, the performance of handheld gamma cameras depends on the design of the collimator. The choice of a collimator always represents a compromise between the size of the field of view (FOV), the sensitivity, and the spatial resolution. Sentinel lymph node (sLN) diagnostics can be considered as a case requiring a small field of view (SFOV). Modelling collimators for SFOV imaging may improve lesion detectability and achieve better resolution and sensitivity. The goal of this work is the development of a framework for collimator optimization, allowing the selection of the best configuration for each clinical situation. | Master Thesis | Volodymyr Cherniy |
GPU Accelerated Deformable Registration Deformable registration is one of the most challenging problems in medical imaging. The problem consists of recovering a local transformation that aligns two images that have a non-linear relationship which is often unknown. A recently presented method, formulating deformable registration as a discrete labeling problem in a Markov Random Field formulation, has been shown to be very efficient. The main advantage of this method is the fast computation of the similarity measure in the image domain: the computation scheme approximates the local similarity of a potential deformation by weighted global translations of the image to be registered. Such a method is highly parallelizable. Within this project, the proposed scheme should be ported to a GPU implementation to reduce the computational time (see the sketch below). Various similarity measures should be implemented on the GPU and an extensive evaluation on synthetic and real medical data should be performed. | DA/MA/BA | Philipp Stefan |
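The following CPU-side sketch (illustrative only, using SSD as a placeholder similarity measure) shows the cost-volume structure that maps well to a GPU, where every (label, pixel) entry can be computed independently:

```python
import numpy as np

# Illustrative sketch of the labeling scheme described above: the cost of assigning
# a candidate displacement (label) to a pixel is approximated from one global
# translation of the moving image.
def label_cost_volume(fixed, moving, labels):
    costs = []
    for dy, dx in labels:                          # each label is a candidate translation
        shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
        costs.append((fixed - shifted) ** 2)       # per-pixel SSD contribution
    return np.stack(costs)                         # shape: (n_labels, H, W)

# Because every entry of this volume is independent of the others, the computation
# is what makes a GPU port of the method attractive.
```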
User Interface for Medical Augmented Reality Augmented Reality is an emerging technology with promising applications in the medical domain, e.g. medical education or intra-operative assistance. The latter offers an enhanced view for keyhole surgery, providing the physician with information in a more intuitive way. To use Augmented Reality for intra-operative surgery assistance, a special user interface is necessary. The traditional human-computer interaction tools like mouse and keyboard are not suitable for surgery environments, because special restrictions must be considered: besides sterile equipment, no exhaust air is allowed in an operating room, and tracking systems in combination with possibly used CT or MRT may cause electromagnetic interference. Furthermore, the system should not constrain the physician during his work, but assist him and make his work more comfortable. Several research groups have already worked and still work on this subject, so the first step of the thesis is to analyse published work and compile its advantages and disadvantages. To this end, not only medical user interfaces but also Virtual Reality systems are considered. In the end, this thesis should propose a possible solution for a specific medical application, in cooperation and arrangement with the respective physicians. | Diploma Thesis | Marion Gantner |
Multitouch Gesture Guidelines Multitouch devices operated with gestures are becoming increasingly popular, and the number of different gestures keeps growing; by now, a whole jungle of different gestures exists. This work is meant to bring light into the dark by giving an overview of the various gestures: Which gesture is used for which interaction? Where do the gestures come from? On which devices are the gestures performed (mobile phones, tablet PCs, graphics tablets, Magic Mouse, multitouch tables, ...)? Since this is a theoretical work, the focus lies on literature research, the analysis of the literature, and an appropriate visual presentation of the data. Ultimately, a guideline for the use of gestures is to be developed. | Bachelor Thesis | Natascha Abrek |
Gesture-Based Information Visualization with an Optically Tracked Data Glove (ART) | DA/MA/BA | ||
[[Students.DaGlanceAR][]] | |||
Glioma Modelling Using Spectroscopy and PET Images | DA/MA/BA | ||
Patient Specific and Image Based Modeling of Brain Tumor Progression | Master Thesis | ||
Optical and Magnetic Tracking The aim of this diploma thesis is the fusion of optical and electromagnetic tracking systems, with the motivation of improving guidance techniques for minimally invasive surgery. I will combine different tracking systems with different properties such as accuracy and availability. The intuitive solution for fusing two systems would be to use the more accurate one if available and to switch to the second one when data from the more accurate system is not available. Within this work I will try to obtain an even better result by statistically combining the two systems with the help of several (federated) Kalman filters. | Diploma Thesis | Sintje Göritz |
Experiments and Validation of a Three-Dimensional Accelerometer in a Belt Buckle, ActiBelt, for Continuous Multiple Sclerosis Patient Monitoring This diploma thesis is a further step in the development of a gait analysis instrument by the Sylvia Lawry Centre for Multiple Sclerosis Research, usable for almost every kind of physical activity: a three-dimensional accelerometer in a belt buckle, the ActiBelt. The aim of this diploma thesis is to evaluate how much of the real trajectory can be recovered from the acceleration data and to check whether the device is usable for path length analysis. The data is therefore validated against a highly accurate trajectory tracking system for correlation and comparison, as well as several other reference systems. Algorithms for hand-eye calibration, as well as point-based and feature-based registration, were used to transform the data. Feature-based registration with a mean-square error of 0.03 m and a standard deviation of 0.01 m was achieved, and the path length of a non-circular movement could be recovered to 95% of the original length. Further ideas for using the ActiBelt for trajectory analysis, such as pedestrian navigation, are presented, along with the technical specifications and motion models that describe the human gait. Several other applications of the ActiBelt are also introduced, ranging from deep-sea diving to moving in zero gravity in space. | Diploma Thesis | Thomas Gossner |
Extending Algebraic Reconstruction on the Graphics Card | DA/MA/BA | Wolfgang Holler |
Computer Supported Implantation of a Stent Graft Keyhole surgery is becoming a standard method to treat patients at a minimally invasive level. Especially in the field of cardiac and vascular surgery these treatments are favored throughout hospitals. However, in order to enable clinicians to operate without opening the patient, certain imaging techniques are necessary to visualize regions of interest. Many procedures have been developed for this purpose, ranging from X-ray imaging to computed tomography or magnetic resonance scanning. A problem raised within this context is the availability of the information created by imaging techniques, since some can only be performed preoperatively whereas others are executed during an operation. The goal of this project is to enable clinicians to access imaged and processed data acquired pre- or intraoperatively during treatment, i.e. merging all visual information about a patient in order to disburden physicians when operating. In particular, a stenting operation, where leaks in the aorta are fixed by inserting a cylindrical wired tube (stent graft) that is guided to the problem region through the aorta via a catheter, will be augmented virtually. Preoperatively acquired computed tomography images can be processed with the proposed system and the outcome will be visualized in the operating theatre. Moreover, intraoperative X-ray images are aligned with the CT data, and hence navigational information is produced, including a three-dimensional surface model, metrics, and current as well as planned locations of the stent graft. This shall help to reduce X-ray exposure, damaging contrast agent injections, and operation time. | Diploma Thesis | Martin Groher |
Design, Development and Evaluation of a Multimodal User Interface for Medical In-Situ Visualization In-situ visualization of medical image data using a head mounted display allows the presentation of virtual objects within the Augmented Reality (AR) scene from the natural point of view. However, a major problem is to manipulate the visualization: parameters should be adjusted by the user to get the desired view of the region of interest. Also, for different stages in intraoperative navigational procedures, visualization of various instruments and navigational information has to be adjusted according to the needs of the operating surgeon. However, in operating rooms there is almost no room for classical interfaces like buttons, pedals, keyboards and mice. All tools close to the operation site have to be sterile and space around the operating table is reserved for surgical equipment. In my thesis, new concepts for interfaces to interact with the AR scene and to manipulate virtual objects are developed. The optimal user interface has to exploit the advantages of AR without hindering the user by too complex or space wasting tools, because they would drastically reduce the acceptance of AR in the operating room. As a result of this thesis, three input modalities based on hand detection, a foot pedal and voice recognition are implemented. A user study compares the three different interfaces, showing their strengths and weaknesses. | Diploma Thesis | Samuel Kerschbaumer | |
Modeling Group Dynamics in the Use of Predictive Driver Assistance | Master Thesis | Claudio Gusmini |
Design of a Navigation System with Tactile Output for Blind and Visually Impaired Users | Diploma Thesis | Günther Harrasser |
Interactive Heart Learning Using Augmented Reality Magic Mirror A basic framework for an augmented reality magic mirror, mirracle, has been developed, along with a simple game engine for organ rendering. Until now, our previous work has always covered the basics of the whole organ structure, but medical students should learn more details. The heart is always the first and most important part in anatomy learning. We therefore want to develop a "serious game" about heart learning for first-year medical students, including an interactive learning environment and all the basic medical knowledge of the heart. | DA/MA/BA | ||
Real-time Computation of Depth Maps with an Application to Occlusion Handling in Augmented Reality A problem occasionally encountered in state-of-the-art Augmented Reality (AR) systems is the problem of occlusions. Occlusion refers in this case to the problem of artificial images not being properly aligned in world space: they are missing a depth component, and thus it may happen that artificial images are spuriously superimposed over real objects. This problem leads to the intent to find ways of detecting and handling occlusions. Most approaches in that direction are based on recovering depth information by means of stereo correspondence techniques. These techniques require an AR environment with two calibrated cameras mounted on a rigid stereo rig. In the AR domain this is usually a head mounted display, including two cameras, two displays (video see-through, optical see-through) and a tracking device. The main goal of this work was to evaluate different techniques for recovering depth maps. These techniques were evaluated in terms of accuracy, noise resistance, reproducibility and speed. Finally, the task was to implement the most suitable one. | DA/MA/BA | Hauke Heibel |
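As a minimal illustration of the stereo-correspondence route mentioned above (not the technique chosen in the thesis), OpenCV's block matcher can produce a dense disparity map that serves as a per-pixel depth test for virtual content; file names and parameters below are placeholders:

```python
import cv2

# Illustrative sketch: dense disparity from a rectified stereo pair via block matching.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> float

# With focal length f (px) and baseline b (m): depth = f * b / disparity.
# A virtual fragment at depth z_virtual is occluded wherever the real depth is smaller.
```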
Optical See-Through HMD Calibration Evaluation of the performance of different algorithms for optical see-through head mounted displays (HMDs) that are known from the literature, under the same conditions and using a large set of different HMDs. The goal of the project is to provide recommendations on which algorithm to use and to propose extensions and improvements to the existing algorithms. | Diploma Thesis | Sven Hennauer |
Active Vision in Interactive Spaces | DA/MA/BA | Henning Herbers | |
Methods and Metaphors for Navigation in Virtual Maps In recent years, electronic maps have become more and more widespread, mainly because of their practical use. From simple two-dimensional map applications to a completely virtual model of the Earth, there are various approaches to interacting with virtual maps. Navigation is an important factor for orientation in virtual worlds: unlike in reality, we can move without the constraints of the laws of nature and be taken to any place at any time. Electronic maps offer far more degrees of freedom than their classical counterparts, but so far there is little knowledge about how these degrees of freedom can be made intuitively accessible to the user. The control scheme developed in this work is a complex flight control that tries to support the user through familiar control elements. In particular, the metaphors of "steering wheel" and "airplane" control were examined: with the input device, the user holds a virtual steering wheel, or an airplane, in the hand and steers. A user study examined to what extent the two metaphors can be combined simultaneously in one input device. Whether users intuitively prefer one of the two metaphors, and what influence the tilt of the input device has on the choice of metaphor, were among the main factors investigated. | DA/MA/BA | Nick Heuser |
Development of an Inverse-Kinematic Infrared-Optical Data Glove | Diploma Thesis | Gerrit Hillebrand |
Interaction Management for Adaptive Augmented Reality User Interfaces We think that experiments with such user interfaces have to be conducted to find appropriate interaction techniques, metaphors and idioms. In this paper we propose a method for interaction management consisting of flexible integration of I/O devices at runtime and dataflow control. Our approach can be used to develop and control such user interfaces, which lets us try out new concepts quickly. | DA/MA/BA | Otmar Hilliges | |
Contributions for building specific CAD models for vision-based applications This diploma thesis investigates the use of vision features for CAD models. The advantage of combining CAD models and vision features lies in the known 3D geometry of the CAD model and the relation of that geometry to vision features. Exemplary vision features are SIFT features, randomized trees and other well-known methods from this area. A new approach will also be tested. The main goal of this diploma thesis is to use such an extended CAD model for pose estimation and therefore for tracking initialization. | Diploma Thesis | Stefan Hinterstoisser |
Histology Fold Detection This project is part of a larger project that aims at the registration of histology volumes to in-vivo volumes. Before the registration is performed, a histology volume first needs to be reconstructed from histological slices. The reconstruction of histology volumes is performed in three stages: first, embedding a sample in a paraffin block; second, slicing (cutting) the block and imaging the slices; and third, reconstructing a 3D volume from the 2D slices. One of the difficulties encountered in reconstruction is that the slicing causes deformations (shears, tears and folds) in the slices, which introduce "inconsistencies" between subsequent slices. The goal of this project is to classify the slices corrupted by such deformations and to detect the tears and folds in the image. | DA/MA/BA | ||
Focus and Context Visualization for Medical Augmented Reality This diploma thesis concerns focus and context visualization of anatomical data (obtained from different imaging modalities) for medical augmented reality (AR) and thereby the correct fusion of real and virtual anatomy. Medical AR is intended to be used for pre-operative diagnosis, intra-operative guidance and post-operative control, but is still at the research stage and has not yet been applied in practice. It is a technique which augments the surgeon's real view on the patient with virtually visualized anatomy of the patient's body. A medical AR system used for the purpose of focus and context visualization includes a tracking system and a video see-through, head-mounted display (HMD) enabling a stereoscopic, augmented view on the AR scene. Focus refers to the part of the virtual anatomy the observer (surgeon) is interested in, e.g. the operation site, which has to be perceived at the correct location with respect to the context of the real skin. In a possible future application of medical AR for surgical interventions, correct perception of position and depth of the focus has to be guaranteed. If the perception of the focus is disturbed or misleading, the danger exists that the surgeon operates at the wrong location and vitally important organs of the patient are hurt. Many visualization approaches for medical AR suffer from a misleading depth perception, since the normally hidden interior anatomy is simply superimposed on the patient's body; in these approaches the virtual anatomy seems to be located in front of the human body. Partial occlusion of the virtual anatomy by the real skin can solve this problem. Within this diploma thesis, further methods for improving the perception of layout (arrangement) and distances of objects in the AR scene are discussed. Visual cues for the perception of layout and distances of focussed virtual anatomy can be enabled by exploiting context information. Context information can be provided both by a correct integration of the camera image, recorded by the color cameras mounted on the HMD, and by non-focus parts of the virtual anatomy. Within the scope of the practical work of this thesis, a focus and context visualization framework for medical AR was implemented, which considers and exploits depth cues enabling a correct perception of the focussed virtual anatomy. For this, general principles and methods for creating and designing focus and context visualizations are taken into account, which are mainly adapted from hand-made illustration techniques. The framework provides a correct fusion of real and virtual anatomy and realizes an intuitive view on the focussed anatomy of the patient. It includes a new technique for modifying the transparency of the camera image of the real body; the transparency is adjusted by means of properties (e.g. curvature) of a virtual skin model. Additionally, a method for clipping parts of the anatomy hindering the view onto the focus is introduced. The framework also contains methods for integrating surgical or endoscopic instruments into the medical AR scenario: instruments are virtually extended as soon as they penetrate into the patient's body; moreover, the penetration port is highlighted and virtual shadows are used to provide visual feedback from instrument interaction. The effectiveness of the developed techniques is demonstrated with a cadaver study and a thorax phantom, both visualizing the anatomical region of the upper part of the body, and an in-vivo study visualizing the head. | Diploma Thesis | Felix Wimmer |
3D Human Pose Estimation From Sparse Data | Master Thesis | ||
Multi-View Human Pose/Shape Tracking without Background Subtraction: a bottom-up approach In this project, we investigate the possibility of finding a middle ground between top-down and bottom-up strategies, in order to retain the merits of both. Specifically, one first recovers rough 3D skeletal poses with a state-of-the-art bottom-up approach as initialization, and then deforms pre-defined human surfaces to fit those poses. The generic human surface has to be adapted according to the size/scale of the skeleton. Skeleton-based animation such as linear-blend skinning or dual-quaternion-blend skinning is also worth a look. | DA/MA/BA | ||
Multi-View Human Pose/Shape Tracking without Background Subtraction: a top-down approach A typical top-down human motion tracking pipeline requires good background subtraction, and therefore limits the applications to indoor studios. In order to alleviate this constraint, in this project we investigate the possibility of recovering human motion directly from images. The 3D surfaces are expected to be textured, or at least to contain per-vertex color information. Thus, the problem amounts to maximizing the similarity between the projected texture of the surfaces and the image observations. | Hiwi | ||
Human Shape Adaptation with Skeletons and Pressure Measurement | DA/MA/BA | ||
Maintenance Support Using Hybrid Displays | Bachelor Thesis | Andreas Demmel | |
Illumination Compensation in Surgical Scenes Due to the strong lights present in the operating room, specular highlights appear in the video stream captured by CamC?, especially on the tools because of their highly reflective (metal) surfaces, on the patient's (glossy) skin, and on the doctor's gloves. These strong specular highlights are a disturbance for the doctors because they hide or obscure important information in the surgical scene. In endoscopic surgery, specular highlight removal has already been studied in order to improve visualization or as pre-processing for further algorithms such as tool tracking. Such regions can easily be segmented in order to exclude them from the regions of interest, or inpainted in order to remove these artefacts. The specular highlights in endoscopic images are similar to those in the CamC? visualization: they are caused by a strong light at the tip of the endoscope and appear on tissues and tools. | DA/MA/BA | Ivan Lebedev |
Image Deconvolution for Microscopy | DA/MA/BA | Thomas Kasper |
Spatial Tracking of a Laparoscope Using an Inside-out Tracker A laparoscope is a rigid endoscope that is used in minimally invasive surgery. Our current laparoscope augmentation system (see CARS and MICCAI 2005 papers) uses an external optical tracking system to estimate the pose of the laparoscopic camera. For augmented reality applications, however, it has been shown that inside-out tracking performs better in terms of image overlay accuracy. The current laparoscope augmentation system has to be extended by an optical camera which estimates its pose using a visible pattern in the scene. The major task will be the implementation, testing, and evaluation of existing single camera pose estimation algorithms and their integration into the current setup. The work requires implementations in C++. Matlab and computer vision knowledge will be of great benefit for this thesis. | DA/MA/BA | ||
3D Interaction Methods for Medical Applications using Low-Cost Hardware | DA/MA/BA | Korbinian Schwinger | |
Interactive Tomographic Reconstruction for Freehand SPECT using the GPU In modern medical practice, tomographic reconstructions offer a remarkable diagnostic tool. They started as a complement to already existing imaging techniques (like X-ray imaging) but soon spawned practices and therapies for which they have become invaluable. In this work we adapt existing algorithms used in emission tomography for many-core GPU architectures. | Diploma Thesis | Alexandru Duliu |
Ultrasound-based Measurement of Anatomical Structures Medical navigation systems support the surgeon in the planning and execution of surgeries. In orthopedics, three-dimensional representations of the involved bones are provided by navigation systems. Preoperative data sets allow a precise planning of the surgical procedure. During the surgery the surgeon is guided by the navigation system, which is able to show the planning data according to the current position of the patient in the operating room. For this purpose a registration establishing a relation between the plan and the intraoperative situation is necessary. A development towards minimally invasive interventions can be observed in all areas of surgery. In general this means a minimization of cuts; at the same time, the access to relevant interior parts of the body is being reduced. To provide the surgeon with sufficient and precise information, different solutions are developed depending on the respective application. In orthopedics, notably the visualization and measurement of bony structures is important. For this purpose ultrasound (US) may be used as a non-invasive and almost harmless imaging modality. Ultrasound images may be acquired during surgery and provide useful information about the position and orientation of bones. At the start of the surgical procedure a registration between all involved data sets and tools is necessary. This thesis describes a workflow for the registration between ultrasound (US) and computed tomography (CT) data sets. A registration has to be found that produces the best correspondence between both data sets. Some approaches of other authors try to compare the voxel information directly; the approach of this thesis is the detection of bony structures in the data sets, which can then be matched. A basis for the registration is the accurate measurement of the bones, so it is necessary to detect bony structures in a precise and reproducible fashion. The work starts with a systematic analysis of ultrasound images. A method for the preprocessing (artefact removal) of these images is tested. For the detection of bony structures, a LiveWire-like algorithm is developed. Another method is described that is able to find the best position of a contour within a US image for a given pre-registration. The mentioned algorithms are tested on several data sets of cadavers and a volunteer. Finally, the found structures are matched with a surface model of the bone by an ICP algorithm (see the sketch below) to obtain a registration between the US and CT data sets. | Diploma Thesis | Jörg Jakobs |
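For illustration, a minimal point-to-point ICP sketch in the spirit of the final matching step (not the thesis implementation; `source` would be bone contour points from US, `target` a CT surface point cloud, both as Nx3 arrays):

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative rigid ICP: iteratively match closest points and solve for the best
# rotation/translation with the SVD-based (Kabsch) closed-form solution.
def icp(source, target, n_iter=30):
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)                        # closest target point per source point
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)           # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:                   # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        src = (R_step @ src.T).T + t_step               # apply incremental transform
        R, t = R_step @ R, R_step @ t + t_step          # accumulate total transform
    return R, t
```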
Resource Management in Disaster Scenarios Medical disasters with a high number of injured persons pose a great psychological and physical challenge for all emergency personnel involved. The goal of all medical personnel on site is the rescue and treatment of every person affected by the disaster. In disasters, however, the responders are confronted with the problem that there are more patients requiring treatment at the scene than can be treated by the rescue personnel available on site. This extraordinary situation pushes all involved responders to their limits, since they frequently have to choose between several patients. This stands in stark contrast to the everyday work of physicians, paramedics and emergency medical technicians, in which every patient receives the best possible treatment regardless of the severity of their injuries and of the resources required. The comparatively sober approach taken in disasters, assigning treatment priorities to patients during triage, is solely a consequence of the severe resource constraints and the resulting problem that not all patients can be treated simultaneously. The well-being of all requires deferring some patients in favor of others in order to rescue as many people as possible. | Diploma Thesis | Simon Nestler |
Automated Registration in Endoscopic Treatment of Aortic Aneurysms In the last years, minimally invasive surgery techniques have found their way into the operating theater. One very promising application is endovascular stent grafting for aortic aneurysms or dissections. At present, the three-dimensional CTA (Computed Tomography Angiography) model is only used for preoperative planning but is not available during the operation. This situation is unsatisfactory, for CTA provides far more anatomical information than the two-dimensional intraoperative fluoroscopic images taken by a C-arm. Our goal is to close this gap with a Computer Aided Navigation and Planning Tool (CANP). CTA data is to be visualized intuitively for planning the correct position of the stent and its properties. During the operation itself, CTA serves as the basis for augmentation with intraoperative data acquired by the C-arm. By this means we hope that radiation exposure for both surgeon and patient, as well as the usage of contrast agent, can be reduced. This thesis deals with the mathematical challenges arising in this context. Before three-dimensional CTA and two-dimensional fluoroscopic images can be merged into one model, the two modalities have to be registered. For this purpose, the perspective geometry of the C-arm was determined by a calibration. Artificial X-ray images computed from CTA, so-called DRRs (Digitally Reconstructed Radiographs), are then to be overlaid with intraoperative radiographs. An efficient volume rendering technique based on 3D texture mapping is proposed for this computationally expensive task. The pose of the CTA model is altered within a virtual C-arm setup by rigid transformations (rotations and translations). When DRR and radiograph coincide, the patient is registered to his CTA model and information can be exchanged between both modalities. Iterative best-neighbor optimization techniques for an automatic registration procedure were evaluated. The similarity between DRR and radiograph is estimated by pattern intensity and gradient difference quality measures (see the sketch below). This combination yields promising registration results in affordable time, though differences between both modalities, especially the influence of contrast agent and other instruments, have to be decreased in the future in order to achieve a really robust registration. Furthermore, different direct and indirect volume rendering techniques were evaluated with respect to their suitability for fast as well as intuitive visualization of the augmented model. | Diploma Thesis | Peter Keitler |
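For illustration, the gradient difference measure between a DRR and an X-ray image can be sketched as follows (in the spirit of Penney et al.'s 2D/3D similarity measures; the weighting constant is a placeholder and this is not the thesis implementation itself):

```python
import numpy as np

# Illustrative gradient-difference similarity between a rendered DRR and a radiograph.
# The Lorentzian-like weighting reduces the influence of structures present in only
# one modality, e.g. instruments or contrast agent.
def gradient_difference(drr, xray, sigma=1.0):
    gy_d, gx_d = np.gradient(drr.astype(np.float64))
    gy_x, gx_x = np.gradient(xray.astype(np.float64))
    diff_x, diff_y = gx_d - gx_x, gy_d - gy_x
    return np.sum(sigma / (sigma + diff_x**2)) + np.sum(sigma / (sigma + diff_y**2))
```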
Authoring of Augmented Reality based Maintenance Manuals | Diploma Thesis | Dimitri Khanin | |
Usage of the Depth Camera Xbox Kinect The purpose of the project is to investigate the usage of the Xbox depth camera "Kinect" to implement gesture recognition in a Virtual Reality environment. | Hiwi | Hasan Elsherbiny |
[[Students.DaKinectCADTracking][]] | DA/MA/BA | ||
Gesture Recognition in a VR environment - Kinect The purpose of the project is to investigate the usage of the Xbox depth camera "Kinect" to implement gesture recognition in a Virtual Reality environment. The gestures are analyzed to perform 3D menu selection and parameter adjustment in a dynamic simulation in VR. For this purpose, particle tracer software is used. Two parameters (Reynolds number, viscosity) are to be adjusted through gesture recognition. | Bachelor Thesis | Tilman Eberspächerr |
Object Recognition Using Microsoft's KINECT Released by Microsoft in November 2010, KINECT is a brand new technology introduced for a new gaming experience with the Xbox. Equipped with an RGB camera and a depth sensor based on infrared structured light, KINECT makes it possible to interpret 3D scene information. In this project, we propose to revisit the problem of object recognition based on both RGB and depth images. | DA/MA/BA | ||
Kinematic Registration Method for Image-Free Navigated Total Knee Arthroplasty Image-free orthopedic navigation software generally relies on a bone model that is generated intraoperatively by identifying certain important landmarks on the actual bone and matching these landmarks to a model. In this thesis, a new registration method for a certain class of deformities will be developed that is based on the passive movement of the knee joint. The surgeon intraoperatively defines the desired postoperative motion by moving the joint under certain pressure. From the kinematic data gathered using an intraoperative navigation system, the proposed software extracts the best position and size of the different components of the joint replacement and defines the necessary resection planes. The resections are then performed using the navigation system according to the plan. The proposed method involves extracting a desired virtual motion characteristic from the kinematic movement and matching these characteristics to the available implant surfaces using nonlinear optimization with certain constraints imposed by the implant and surgical technique. This leads to a kinematically optimized implant placement, which is expected to improve implant durability and function. | DA/MA/BA | ||
Presentation Concepts for Conformal Navigation Systems Integrated into the car, head-up displays (HUDs) enable information presentation in the driver's windshield. Such a display combined with Augmented Reality enables extending the street scenery with conformal (location-fixed) navigation aids: navigational arrows then appear to be aligned onto the street and seemingly integrated into the real environment. The presentation of route guidance information generates new challenges for investigation. The main reason for the application of HUDs lies in reduced glance times. The focal plane of a HUD therefore must lie at a large distance so that the human eye does not have to adjust to another focal depth when looking from the street scenery onto the HUD. To achieve the desired focal length while keeping the possibility of presenting information at near distance, a conformal HUD has been built that places the focal plane at a distance of about 13 meters in front of the driver. The virtual projection screen was blown up to cover a wide field of view. This approach led to a large relative size of screen pixels, which limits the distance at which AR information can be displayed: a navigational arrow, for instance, will have too few pixels to be perceivable at large distances. The scope of this thesis is therefore the development of an application for preliminary studies to identify distance limitations for different conformal visualization schemes. Different design concepts for AR navigation aids will be evaluated in terms of their perception at large distances. | Diploma Thesis | Leslie Klein |
Development and Prototypical Implementation of a Concept for Programming Complex Robot Cells through 3D Interaction in VR Environments | Diploma Thesis | Oskar Klett |
Infrastructure Independent Self-Positioning Using Projected Patterns in the Environment | DA/MA/BA | Moritz Koehler | |
Development of a Single Touch User Interface for the Efficient and Intuitive Completion of Forms in the Area of Emergency Rescue Services Some of the data collected when logging emergency rescue services is needed in digital form for billing and statistical purposes. Up to now the data is usually collected by pen and paper and subsequently typed up; yet there are some attempts at initial digital recording with the aid of digital pens, or with tablet PCs using a stylus and handwriting recognition, which aim to eliminate the need for double recording. In my thesis I will present another approach of collecting the data directly on a tablet PC. Operation using only the left and right thumb is a special requirement in the context of the project. The thesis focuses on possible text entry techniques which meet this requirement and examines these with regard to text entry speed, error rate and learnability. | Bachelor Thesis | Daniela Korhammer |
[[Students.DaKrause][]] | |||
Augmented Reality in user interfaces for industrial robots Teaching motion paths to a robot is a complex task. AR-based user interfaces have the potential of providing a more intuitive, direct way of teaching a robot arm where to go. This thesis will explore and prototypically implement various methods towards augmenting a teacher's view with motion path information. In particular, the thesis will focus on room-based or object-based, spatial display concepts (in contrast to head-mounted displays). | Diploma Thesis | Ingo Kresse | |
[[Students.DaKulas][]] | DA | ||
Automatic Calibration of Tracked Catheter Ultrasound | Diploma Thesis | Oliver Kutter | |
Towards Robust Markerless Tracking: Combining Feature-based and Intensity-based Approaches There exist several approaches for markerless tracking, including feature-based and intensity-based ones. However not every approach works equally well for a given object. The goal of this work is to combine intensity-based and feature-based approaches in order to improve the robustness of the tracking. | Diploma Thesis | Alexander Ladikos | |
Level Sets for Symbolic Reconstruction One of the more sophisticated segmentation methods is the "level set method". It provides the programmer with many options to define the segmentation goals. The segmentation is achieved by adjusting region boundaries; these adjustments are driven by user-defined forces. All this leads to partial differential equations (PDEs) which, in the end, have to be solved using finite differences. The IDP consists of implementing numerical solvers for PDEs arising from level set methods. The PDEs are discretized using finite differences and solved with the usual schemes thereafter. The only difference to usual solvers is that so-called "narrow band methods" can be applied, which operate only on a local band around border pixels, thereby reducing the workload dramatically (see the sketch below). The IDP starts with an evaluation of existing narrow band implementations. Some selected methods will then be implemented by programming a new solver. Finally, the implementation has to be evaluated and tested extensively. The student should seek an IDP in mathematics and have some knowledge about the numerical solution of PDEs. This knowledge can be gained by attending the math lectures on Numerical Mathematics of PDEs (usually called "Numerik 4" at TUM) or something comparable. Knowledge about the theory of PDEs is not required. | Diploma Thesis | Jakob Vogel |
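A toy example of the kind of PDE update and narrow-band restriction involved (first-order and purely illustrative; a real solver would use upwind schemes and periodic reinitialization):

```python
import numpy as np

# Toy explicit update for the level set equation phi_t + F * |grad phi| = 0
# with a constant outward speed F.
def level_set_step(phi, F=1.0, dt=0.1):
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2) + 1e-12
    return phi - dt * F * grad_norm

# Narrow-band idea: conceptually, only pixels near the zero level set need updating,
# e.g. band = np.abs(phi) < 3.0, and the solver restricts its finite-difference
# stencil computations to that band in every iteration.
```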
Location-Based Augmented Reality Gaming | DA/MA/BA | ||
Non-rigid Registration Using Free-form Deformations | Diploma Thesis | Loren Schwarz | |
New Volume Rendering Techniques for Image Registration Registering medical data sets is a computationally demanding task. It comprises repeated generation of images from both data sets, which are compared iteratively with one another. 2D/3D registration as well as 3D/3D registration both basically involve a volume rendering of the medical data set(s). This process can be done more efficiently using texturing features available on DirectX 9.0 compliant graphics hardware such as NVidia Geforce FX or ATI Radeon 9800 boards. Rendering object-aligned slices using 2D textures and rendering view-aligned slices using a single 3D texture are already implemented. Texture color depth can be either 8 or 16 bit. The use of 16 bit and 32 bit floating point textures, respectively, is to be implemented soon, as well as storing the gradient information alongside the intensity information of a medical data set in a texture. Support for rendering into 16 or 32 bit floating point pixel buffers is also planned. GPU raycasting and hardware-accelerated splatting for medical image registration are planned for later stages of development. All implementations will be done using OpenGL and the shader language "C for graphics". | Diploma Thesis | Peter Luecke |
Lymph Node Detection and Segmentation in MRI Lymph nodes are critical anatomical structures that reflect the progress of many diseases. One important application, for example, is to estimate cancerous metastasis status by observing their sizes in CT or contrast-enhanced images. To achieve accurate estimation, high-quality segmentation of the lymph nodes is necessary. Currently, the prevalent way of realizing this is through manual detection and delineation by clinical experts. Unfortunately, this is extremely time-consuming and heavily dependent on the experts' experience. Automatic lymph node detection and segmentation methods that can provide consistent and accurate results are highly desired. | Master Thesis | ||
Mixed Reality Games and Special Effects The goal of this thesis is the creation of interactive mixed reality games and special effects using a multi-camera system. The system provides the 3D reconstruction of objects inside a working area and can be used to compose virtual objects and real objects in a mixed reality environment. This allows the user to interact with virtual objects and to create composite videos combining both real and virtual objects while correctly handling occlusions. Your task would be the design and the implementation of interactive games making use of this infrastructure as well as the creation of a short movie showcasing the system. This project requires good knowledge in C++. Previous experience with a 3D modeling software, e.g. Maya or 3D Studio Max, as well as a certain creativity and artistic sense would be advantageous. | DA/MA/BA | ||
Non Invasive Histology of Atherosclerotic Plaque | DA/MA/BA | ||
Design and Integration of a Camera-Based Tracking System for an Aircraft Cockpit In light of the continuously increasing air traffic during the last decades, technical aids to support the pilot in navigation and flying have become more critical than ever to assure a flawless and safe flight. Primarily, this support is given to the pilot by a graphical presentation of sensor data to depict the actual flight condition (situational awareness). In this concern, visual displays have a crucial role to play nowadays, but the trend leads away from monitors on the center panel and head-up displays towards integrated head-mounted devices, which increase the pilot's freedom of movement. Most of the devices used today, however, are proprietary solutions, many of them based on electromagnetic tracking. In the application field of augmented reality, we encounter many of the same questions, but most of them in a more general context. Camera-based optical tracking is frequently applied in this field, due to some major advantages over other tracking methods. The obvious drawbacks of optical tracking, such as highly complex and time-consuming image processing, may soon be compensated by the ongoing development of faster CPUs and GPUs. Moreover, by reducing a "one-fits-all" solution to a problem-customized approach, several known problems of optical tracking can be avoided or at least reduced. The goal of this thesis is to design a tracking system for the LFM's flight research simulator using existing techniques from the research field of augmented reality. The approach described here adapts solutions from existing systems to the special requirements of the cockpit environment, which leads in several aspects to constraints that need to be intentionally taken advantage of whenever possible. Otherwise, the cockpit environment introduces several limitations which inhibit the application of existing solutions without modification. Hence, a camera-based optical tracking approach was chosen, based on actively emitting fiducials in the infrared spectrum. The requirements of a platform-independent implementation as well as the future option to migrate the system to other cockpit types were complied with as much as possible throughout. Even though the requirements for approval by federal flight authorities could not be taken into account, the system was nevertheless intentionally designed beyond the exclusive use in the flight simulator, which is reflected particularly in the fiducial design by assuming a wide range of lighting conditions in the cockpit. The results presented in this thesis may serve the reader as support in choosing a tracking system tailored to a specific problem, showing some difficulties and possible approaches to their solution. | Diploma Thesis | Franz Mader |
Evaluation of Markerless Tracking Methods for Realizing a MagicBook Application | Diploma Thesis | Sebastian Lieberknecht |
Error Correction for Electromagnetic Tracking ... | DA/MA/BA | Sukhbansbir Kaur | |
[[Students.DaManeuveringMetaphors][]] | |||
Fusion of Time-of-Flight Plane Features with Point Features for Camera-Pose Tracking The objective of this master thesis is to estimate the pose change of a time-of-flight (TOF) camera between consecutive frames in all six degrees of freedom. To gain high-resolution visual information of the scene, the TOF camera is combined with an RGB camera. The proposed algorithm fuses 3D geometric and 2D visual features: planar surfaces are extracted directly from the 3D point cloud, while SURF is applied to the 2D projections of these surfaces. Feature positions and plane equations are both used to estimate all six degrees of freedom of the camera motion. The algorithm outperforms, in terms of accuracy, fast coarse pose registrations that do not combine the 3D geometry with visual projections, while remaining suitable for online processing. Fine registrations that use the complete point cloud are more precise, but also much more time-consuming. | Master Thesis | Sebastian Marsch |
Implementation and Evaluation of a Human Motion Capturing System The goal of this project (Bachelor Thesis, SEP or equivalent) is to implement a Motion Capturing system in C++ that is based on an existing framework developed at the chair. The system has an interface to the tracking system and computes - in real-time - an optimal fit of a skeleton to the tracking data. Implementation contains a number of challenging aspects, such as the optimization algorithm, that are open for ideas and research by the student. After implementation, the student can freely evaluate the system with respect to numerous parameters, e.g. efficiency, precision, etc. | Bachelor Thesis | ||
Tracking on Mobile Phones for Augmented Reality Applications | DA/MA/BA | Frieder Pankratz |
Deep Learning for Depth Estimation from Single Images In recent years, deep learning based methods have been developed for accurate depth estimation of a scene, even from a single image. One limitation of these methods is that they require large data sets for training and, regardless of the generalization quality within the same distribution, previously unseen data from a new environment can still pose a challenge for a pre-trained model. Additionally, the data used for training is usually obtained by standard 3D cameras, like Kinect or ASUS Xtion. These devices have a limited range and can effectively be used for scanning a room, but they would not provide accurate results when scanning a long hallway or a building. The goal of this thesis is two-fold. First, we train a deep learning model for estimating the depth map from a single image, using a proprietary database of RGB images and their corresponding depth information acquired with a long-range 3D sensor. Second, we aim to improve the generalization capability of the model such that it adapts to depth distributions it did not encounter during training. | DA/MA/BA | Andrei Militaru | |
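For illustration, a commonly used training objective for this kind of model is the scale-invariant log-depth loss of Eigen et al.; the sketch below (PyTorch, illustrative only and not tied to the proprietary data or the actual model of this thesis) also masks out invalid depth values, which long-range sensors typically leave behind:

```python
import torch

# Scale-invariant log-depth loss (after Eigen et al., 2014); illustrative sketch.
def scale_invariant_loss(pred_depth, gt_depth, lam=0.5, eps=1e-6):
    valid = gt_depth > 0                                  # ignore holes in the sensor data
    d = torch.log(pred_depth[valid] + eps) - torch.log(gt_depth[valid] + eps)
    return (d ** 2).mean() - lam * (d.mean() ** 2)
```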
Multi-View Deconvolution of Biomedical Images | Diploma Thesis | Moritz Blume | |
Real-Time Respiratory Motion Tracking: Roadmap Correction for Endovascular Interventions | DA/MA/BA | Selen Atasoy | |
Development of a multitouch sensor for LCD displays In cooperation with Lumin, the leading manufacturer of back-projection displays, we offer the opportunity to develop, build and evaluate new concepts for multi-touch screens. | DA/MA/BA | Thomas Pototschnig | |
Multiple Myeloma Staging using PET and CT images | DA/MA/BA | ||
Bachelor Thesis: Development of a Tangible Multiplayer Tetris Game on a Multitouch Table A new multiplayer tangible game should be implemented in this bachelor thesis. Up to four players are able to play against each other or cooperatively on a multitouch table. Tangible 3D tokens in the shape of Tetris pieces should be used for game playing. A basic concept for the design already exists and should be refined; new ideas can improve the concept. | Bachelor Thesis | Simon Schenk |
Identifying Efficient Users of Augmented Reality Systems This work continues an excellent thesis on the evaluation of different ways of providing work instructions, in which different display variants (HMD, hand-held displays and spatial displays) and presentation forms (1D/2D/3D) are evaluated against each other. So far, a test environment exists with which intensive user studies have already been carried out. This work deals with personality and intelligence models, age, gender, education, spatial ability, and the corresponding tests. Furthermore, the relationship between these tests and working with AR is investigated. | Master Thesis | Erwin Yükselgil |
Needle Detection in 3D Ultrasound | DA/MA/BA | Guillaume Houel | |
Attentive User Interfaces for DWARF This diploma thesis covers the problems associated with tracking users' attention during human-computer interaction (HCI), and how this attention can be used to improve the HCI. A system to track, visualize and use attention as an input has been implemented. From the attention focus it can be understood what people do, how they interact with the system and which objects are currently of interest to them. We are going to use this information to simplify human-computer interaction. Nowadays, every user is surrounded by many devices (i.e. PC, cell phone, PDA, notebook etc.), each of them requesting the user's immediate attention. These requests often interfere with the user's current task and result in an unwanted interruption. The main problem lies in the user interfaces for these devices, because they use the old WIMP (windows, icons, menus, pointers) paradigm, which was developed 20 years ago. At that time, one user was interacting with at most one computer, and WIMP developers assumed that the user's whole attention was devoted to this one device. Today such an assumption is often fatal, because users get bombarded by requests for attention from different devices. We use Attentive User Interfaces (AUI) to solve this problem by augmenting the devices with the capability to sense and reason about the user's attention. The viewing direction of humans tells a lot about their attention focus. This includes which person, device or task the user is paying attention to, the importance of that task, and how eyes are used to open communication channels in interpersonal communication. Research in cognitive psychology has shown that there is a close relation between eye direction and attention focus. Therefore, we will use eye tracking in combination with head tracking to compute the eye direction in 3D space and use it as the input for the AUI. Our main goal is to develop a working application to demonstrate the advantages of the AUI over the WIMP paradigm. | Diploma Thesis | Vinko Novak |
Intraoperative Ophthalmic Scene Reconstruction for surgical AR Ophthalmic microsurgical interventions require a high level of handling precision from the surgeon, as even small errors can damage intricate structures inside the human eye. Furthermore, the microscopic top-down view through the microscope limits the surgeon's depth perception, motivating the use of augmented reality to enhance the surgical view. With volumetric intraoperative Optical Coherence Tomography (OCT), 3D imaging can be performed during the surgery, yielding an additional source of information to guide the surgeon. However, to display this additional data to the surgeon, proper integration into the surgical view has to be considered. One very intriguing way to integrate the information is the focus-and-context visualization method already used by Bichlmeier et al. [1], see Figure 1. To provide perceptually plausible in-situ rendering of the acquired volumes as a focus-and-context visualization, the retinal surface first needs to be reconstructed in 3D. This can be achieved by applying stereo block matching to a calibrated stereo image pair. To properly position the iOCT overlay, the reconstructed surface must then be aligned with the coordinate system of the OCT volume. This, however, cannot be done preoperatively due to the complex optical setup involving the patient's own eye. Therefore, the second part of this project is to devise an online alignment that can compensate for the (potentially changing) optical pathway affecting all imaging modalities. This online alignment can involve a simple surface alignment step as well as more complex algorithms to account for deformations due to the different optical pathways of the two modalities. | Master Thesis | Ekaterina Kanaeva | |
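As a rough illustration of the stereo block matching step mentioned above, the following Python sketch (an assumption of this write-up, not taken from the project) shows how a disparity map from a rectified, calibrated stereo pair could be turned into a dense 3D surface with OpenCV; the file names and the reprojection matrix Q are placeholders that would come from the microscope calibration.
<verbatim>
# Sketch: surface reconstruction from a rectified stereo pair (assumed example inputs).
# File names and the reprojection matrix Q are placeholders for illustration only.
import numpy as np
import cv2

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified left view
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)  # assumed rectified right view

# Block matching on the rectified pair; parameters would need tuning for microscope images.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

# Q normally comes from stereo calibration (cv2.stereoRectify); identity is only a placeholder.
Q = np.eye(4, dtype=np.float32)
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # one 3D point candidate per pixel
mask = disparity > 0                              # keep only pixels with a valid disparity
surface = points_3d[mask]
</verbatim>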
Tracking and Calibration with an Infrared-Optical Multi-Camera System The goal of this thesis is to implement and compare algorithms for tracking and calibration of an (existing) infrared-optical multi-camera system. The cameras deliver black-and-white images of the scene via USB, illuminating it with their built-in infrared flash. | Project | Daniel Muhra | |
Optical Screening Methods for Quantitative Evaluation of Progression in Skin Dysplasia | Master Thesis | Rebekka Mayr | |
Reconstruction and Visualization of 3D Freehand Ultrasound 3D freehand ultrasound scans consist of a large number of position-encoded 2D slice images of the respective anatomy. One way to visualize them is to directly display the stack of slices using certain blending techniques, e.g. with OpenGL. Alternatively, the data can be reconstructed into a volume with regular spacing, which in turn can be presented using volume rendering techniques. This so-called spatial compounding can be done using a variety of different approaches, some of which can improve the image quality or even eliminate deformation and errors in the original image data. In this thesis, different existing and novel compounding methods are to be implemented and evaluated. In addition, other representations for better three-dimensional display of such data will be examined, especially with respect to a fused visualization involving Computed Tomography (CT) data of the same anatomy. | Diploma Thesis | Fabian Pache | |
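To make the idea of spatial compounding concrete, here is a minimal Python sketch (an illustration of one simple nearest-neighbor forward-compounding variant, not the method prescribed by the thesis); the slice poses, pixel spacing and volume size are assumed example inputs.
<verbatim>
# Sketch: nearest-neighbor forward compounding of tracked 2D ultrasound slices into a
# regular volume grid. Slice poses and voxel size are assumed example inputs.
import numpy as np

def compound(slices, poses, vol_shape, voxel_size):
    """slices: list of 2D arrays; poses: list of 4x4 slice-to-volume transforms (assumed)."""
    acc = np.zeros(vol_shape, dtype=np.float32)   # intensity accumulator
    cnt = np.zeros(vol_shape, dtype=np.float32)   # hit counter for averaging overlaps
    for img, T in zip(slices, poses):
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Homogeneous pixel coordinates in the slice plane (z = 0).
        pts = np.stack([xs.ravel(), ys.ravel(), np.zeros(h * w), np.ones(h * w)])
        vox = (T @ pts)[:3] / voxel_size          # map pixels into voxel coordinates
        idx = np.round(vox).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)[:, None]), axis=0)
        i, j, k = idx[:, ok]
        np.add.at(acc, (i, j, k), img.ravel()[ok])
        np.add.at(cnt, (i, j, k), 1.0)
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
</verbatim>
More elaborate compounding schemes (backward interpolation, outlier rejection, deformation correction) would replace the simple averaging done here.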
Hepatic Vessel Extraction for 2D/3D Registration | DA/MA/BA | Nicolas Padoy | |
Diploma Thesis: Software-based planning and information management on mobile devices for supporting the administration of rescue operations A disaster usually requires a lot of resources, which are used for rescue operations, among other things. If many persons are injured or killed in an accident or attack, it is called an MCI, a mass casualty incident. For disaster management, these situations are particularly challenging because usually more resources are needed than are currently available. For the treatment of the injured to be efficient, the director of operations needs information about the situation on site and the emergency responders that is as accurate as possible. Within this thesis, a software-based prototype for supporting the director of operations in managing the crisis is implemented. Concepts for information management are developed, especially in the form of user interfaces. These concepts are used for collecting data about the injured and the emergency responders and keeping track of all this information. Likewise, efficient communication with the responders at the disaster site should be possible. The director of operations himself must also be mobile and has to rely on mobile devices. The thesis is realized in cooperation with the TUM Feuerwehr. If possible, the prototype will be evaluated and the results of this evaluation will be analyzed. | Diploma Thesis | Peter Pichlmaier | |
Diploma Thesis: Supporting emergency responders using software-based information management on mobile devices A catastrophic event poses special challenges to all involved staff, but especially to the rescue workers and firefighters. If there are also many injured or dead people, this is called a mass casualty incident (MCI). This scenario is one of the most challenging events that emergency responders are confronted with. In this thesis, concepts for an IT system are developed. Using these concepts, an emergency responder is able to support the director of operations by transferring information. The resulting data can then be used for coordination and planning, and the responders will also receive helpful information. Fast and efficient interaction with the software system is of vital importance. Existing processes should not be slowed down. They will not be replaced; the new concepts will simply be integrated into the existing ones if replacing them is not necessary. However, new paths are taken if the efficiency of the responders can be increased by them. It is also important that handling the mobile devices does not impede the emergency responders, neither physically nor temporally. In order to fulfill all these conditions, an efficient user interface is needed. Within the limited display area of the mobile device, only relevant information must be shown so that the area is used optimally. The project is carried out in collaboration with the TUM Feuerwehr. If possible, an evaluation should be held and its results analyzed. | Diploma Thesis | Thomas Endres | |
Towards Fusion of IVUS and OCT Images | Diploma Thesis | Olivier Pauly | |
3D Pedestrian Detection and Pose Estimation Autonomous driving systems are right around the corner, and one key concern around the development and social acceptance of such systems is safeguarding. In this project, we want to look at the task of pedestrian detection from LiDAR point clouds and their 3D pose estimation from the RGB camera input. 3D object detection from sparse point cloud data and multi-person 3D pose estimation are two challenging tasks and therefore active research fields in both academia and industry. In this project, we want to integrate state-of-the-art deep learning methods, train models on synthetic renderings and improve their performance with respect to defined safeguarding KPIs. | Hiwi | ||
Tracking Scissors using Reflective Lines Tracking plays an important role in Augmented Reality (AR) applications. In medical augmented reality, tracking systems must make it possible to accurately determine the position of medical instruments without influencing or disturbing the surgeon's working environment. Retro-reflective material combined with infrared light is an interesting and promising approach to optical tracking. This thesis analyzes the special case of tracking medical scissors using retro-reflective lines. The particular structure of this instrument permits its identification in an image solely from the position of its two legs. During this project the whole pipeline of image generation, line detection, camera calibration and 3D reconstruction via line triangulation was examined. The methods used, the problems encountered and the optimizations undertaken are presented here. Furthermore, some ideas for tracking the instrument over a series of images are briefly discussed. | Diploma Thesis | Katharina Pentenrieder | |
Interactive Electronic Control Center for the Dispatching of Production Orders | Diploma Thesis | Steven Pessall | |
Volume Flow Rate Determination in the Cranial Vessel Tree Based on Quantitative Magnetic Resonance Data Quantitative MR techniques based on phase-contrast flow-sensitive MR sequences are of increasing interest for cardiac volume flow measurements. They are not only non-invasive but usually also yield more meaningful results than, for example, CT scans using contrast agents. In the cranial vessel tree, however, they are still far from prevailing. This is mainly due to the fact that quantitative measurements require two MR examinations: in the first examination a volume for segmenting the cranial vessel tree is acquired; based on this, the attending physician decides where to place the usually double-oblique slices for a quantitative MR recording. The placement is chosen to obtain flow information about specific vessels. This thesis will develop a method to generate quantitative information about the blood flow for all major vessels in a full 3D dataset without manual interaction during data acquisition. This will be achieved by combining sets of parallel phase-contrast images in different orientations with an abstract representation of the segmented vessel tree. | Master Thesis | Jürgen Sotke | |
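For orientation, the basic quantity behind such measurements is the through-plane velocity integrated over the vessel lumen. The following Python sketch (an illustration under assumed inputs, not part of the thesis) computes a volume flow rate from a phase-contrast velocity map and a binary lumen mask for one reformatted slice.
<verbatim>
# Sketch: volume flow rate from a phase-contrast velocity map, assuming the through-plane
# velocity (cm/s) and a binary vessel-lumen mask for one reformatted slice are given.
import numpy as np

def volume_flow_rate(velocity_cm_s, lumen_mask, pixel_spacing_mm=(0.8, 0.8)):
    """Return flow in ml/s as the sum of velocity times pixel area over the lumen."""
    pixel_area_cm2 = (pixel_spacing_mm[0] / 10.0) * (pixel_spacing_mm[1] / 10.0)
    return float(np.sum(velocity_cm_s[lumen_mask]) * pixel_area_cm2)  # cm^3/s == ml/s
</verbatim>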
Optimization and Evaluation of the Interaction and Visualization of a Pick-by-Vision System | DA/MA/BA | Xueming Pan | |
Mounting an Augmented Reality Laser Projector on an Order Picking Trolley | DA/MA/BA | ||
Statistics on AR Design Principles The display principles of AR applications are divided into six different dimensions; the main criteria are the relation of the virtual object to the real world and to the user. Each dimension offers several possible subdivisions, resulting in a large number of different classification patterns. Using these definitions, works published in recent years are examined and checked for their patterns. Combinations that are possible but were not found are then assessed for their practical feasibility. The thesis closes with an outlook on how this topic could be investigated further. | Diploma Thesis | David Plecher | |
Optimized Workflow for Integration of Pre-operative Images into an Intra-operative Imaging Application Mobile imaging applications have become a common appliance in hospitals all around the world. Furthermore, server-based storage systems for images generated during clinical work, so-called Picture Archiving and Communication Systems (PACS), are widely used. Nevertheless, the connection between these two parts is often missing: it is often impossible to view previously created images on a mobile device during a procedure, and there is seldom a convenient way to archive the images generated on such a device. The goal of this thesis is the comparative examination of different possible workflows to transfer existing images to such a mobile imaging device, and the implementation of one of them. The first step in this process is the retrieval of a so-called worklist entry containing the basic information about the patient, in order to be able to map all work done on the device back to the correct entries in the database. With this information the image database can be searched for previously taken images of the patient. These images should be transferred to the device as conveniently as possible for the physician and displayed registered to the correct patient position on the live camera image. After the procedure the new images can be brought back to the PACS by a similar technique. | Diploma Thesis | Dominik Zaeuner | |
Augmented Reality Projector-Camera System | Diploma Thesis | Main.AdnaneJadid | |
Fast and efficient error correction of electromagnetic tracking and its application in prostate cancer treatment Electromagnetic tracking systems are very popular for medical applications because they are able to track flexible instruments inside the human body. Unfortunately, electromagnetic tracking systems are very sensitive to distortion caused by ferromagnetic metal and electromagnetic fields in their range. When the sources of distortion cannot be removed from the setting because they are needed to perform a task, error correction algorithms can be used to compensate for the distortions. In this diploma thesis an improved error correction method was implemented and tested. The error correction method is based on a lookup table containing correct positions of a magnetic sensor in a distorted setting and the corresponding distorted positions. The correct position of the sensor is acquired with the help of an optical tracking system and benefits from its high precision. The algorithm uses the lookup table for online error correction of the position of the magnetic sensor in a distorted setting. An important requirement for this is the synchronization of the magnetic and optical tracking systems used. The application was motivated by the medical domain of prostate cancer treatment, where the current treatment is confronted with the problem of a prostate gland that moves slightly during radiotherapy. A magnetic sensor shall be used to track the position of the prostate gland and the cancer within it; this diploma thesis provides an application that can be used for this purpose. The results of my experiments are very promising: in every experimental setting the error correction method reduced the distortion of the tracking data. In the setting that was most distorted by metal, the error could be reduced from 40.55 mm to 0.60 mm by the implemented method, which is based on Hardy's multiquadric method. | Diploma Thesis | Claudia Thormann | |
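To illustrate the lookup-table idea, the following Python sketch interpolates the distortion field with multiquadric radial basis functions (here via SciPy's generic RBF interpolator, which is only one possible realization of Hardy's method and not necessarily the thesis implementation); the lookup-table arrays are placeholder data.
<verbatim>
# Sketch: lookup-table based field-distortion correction with multiquadric radial basis
# functions (via SciPy); the table arrays below are assumed placeholder inputs.
import numpy as np
from scipy.interpolate import Rbf

# Lookup table: distorted EM positions and the corresponding optically measured truth (mm).
distorted = np.random.rand(200, 3) * 300.0         # placeholder distorted positions
true_pos = distorted + np.random.randn(200, 3)     # placeholder "correct" positions

# One multiquadric interpolant per coordinate of the correction vector.
dx, dy, dz = (true_pos - distorted).T
correct_x = Rbf(*distorted.T, dx, function='multiquadric')
correct_y = Rbf(*distorted.T, dy, function='multiquadric')
correct_z = Rbf(*distorted.T, dz, function='multiquadric')

def correct(p):
    """Online correction of a single distorted EM measurement p (3-vector, mm)."""
    return p + np.array([correct_x(*p), correct_y(*p), correct_z(*p)])
</verbatim>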
Handling Error in Ubiquitous Tracking Setups In order to allow automatic conversions between different coordinate systems, the product of multiple tracker measurements can be computed. In heterogeneous tracking setups, however, measurements by different sensors are generally not made simultaneously and therefore require pre-processing before such a combination is possible. | Diploma Thesis | Daniel Pustka | |
Automated Segmentation of Sentinel Lymph Nodes from Freehand SPECT Images of Breast Cancer Patients Freehand SPECT is a 3D imaging modality for emission tomography based on data acquisition with a tracked gamma probe that is moved around a localized region in freehand fashion, in contrast to conventional SPECT systems with fixed gamma cameras. After validation in pre-operative studies, first intra-operative studies have commenced. In this thesis an automatic 3D segmentation algorithm for Freehand SPECT reconstructions is developed. The segmented regions are then investigated using an iterative leveling algorithm to find topology changes in various threshold intervals. These algorithms are used to analyze real patient data to find relations between regions using information about size, activity and distances. For an easier evaluation, a graphical user interface is designed and further improvements are applied. A reproducibility test is done using 33 acquisitions of the same phantom to validate the segmentation and to evaluate the distance measurement between regions. Furthermore, a patient data analysis is done with a set of 25 acquisitions. The deficiencies of the automatic approach are identified and further improvement options are suggested so that the program can estimate optimal threshold values for visualization (semi-) automatically. | Diploma Thesis | Asli Okur | |
Design & Development of a Flexible User Interface for Radiation Therapy Planning Radiation therapy is one of the main cancer treatment methods; it uses ionizing radiation to control malignant tumor cells. With modern computer technology, intensity-modulated radiation therapy (IMRT) can be applied to deliver dose precisely to the target tumor while sparing normal tissue. It is not only geometrically but also biologically conformal. Through optimization and treatment simulation, the beam shapes and doses can be planned well before the therapy. A user-friendly planning interface is of great help for radiation oncologists and physicists. The goal of this project is to design and develop the user interface for a research radiation planning system. It includes the interactive display and modification of medical images, tumor contours, radiation beams, dose profiles and so on. | Bachelor Thesis | ||
Radiomics for Coronary Plaque Analysis from Cardiac Computed Tomography Angiography | DA/MA/BA | TBD | |
2D/3D Registration of CT to an X-ray Mosaic During long bone fracture reduction surgery, an X-ray image mosaic is created by stitching several individual fluoroscopy images in order to show the entire bone structure of interest to the surgeons. Preoperative assessment and decision making are usually based on standard 2D X-rays of the injured limb using a planning software. The system allows lines to be drawn by the surgeon and merged directly on the X-ray. While navigating the reduction of the fracture intraoperatively, these lines provide graphic and numerical descriptions of the reduction. Surprisingly, 3D information from computed tomography (CT) is not used intraoperatively to help guide surgery. The aim of this project is to develop a new clinical protocol involving the 2D/3D registration of preoperative CT data to the individual X-ray images and the final X-ray mosaic. We hypothesize that the creation of multiple DRRs from CT fused together with the X-rays will help guide proper positioning and alignment of the patient's injured bone. As an additional constraint for the correct re-alignment of the bone, we also incorporate the pre-planning lines drawn by the clinicians prior to intervention. (Requirement: C++ knowledge.) | DA/MA/BA | ||
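The core loop of intensity-based 2D/3D registration is: render a DRR for a candidate pose, compare it to the X-ray with a similarity measure, and optimize over the pose. The Python sketch below is a deliberately simplified toy version of that loop (parallel projection along one axis and an in-plane rigid pose only, which is far simpler than the perspective DRR generation a real system would need); `ct_volume` and `xray` are assumed inputs.
<verbatim>
# Sketch: toy intensity-based 2D/3D registration with DRRs (parallel projection,
# in-plane angle + translation only); ct_volume and xray are assumed example inputs.
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def drr(ct_volume, angle_deg, tx, ty):
    """Toy DRR: rotate the CT volume in-plane, translate it, then sum along the ray axis."""
    vol = rotate(ct_volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    vol = shift(vol, (0, tx, ty), order=1)
    return vol.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation as the similarity measure."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register(ct_volume, xray, x0=(0.0, 0.0, 0.0)):
    cost = lambda p: -ncc(drr(ct_volume, *p), xray)   # maximize similarity
    return minimize(cost, x0, method='Powell').x       # best (angle, tx, ty)
</verbatim>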
Intraoperative 2D-3D Registration for Knee Alignment Surgery In this project a system is developed, which assists surgeons in verifying a surgical result by comparing interventional X-Rays to a preoperative plan. The targeted surgical procedure is knee osteotomy in which a bone (usually the tibia) is cut at a specific point and the two segments are repositioned to correct knee alignment. The 2D X-Ray images are to be registered to the 3D preoperative plan in order to compute the achieved 3D configuration between the two bone segments. This allows the surgeon to make sure that the plan is carried out accurately, or make adjustments if necessary, as correct knee alignment leads to improved patient outcome. | DA/MA/BA | ||
Registration between ultrasound and fluoroscopy | DA/MA/BA | ||
Rehabilitation game for training the stability of the patient's leg A basic augmented reality magic mirror framework, mirracle, has already been developed, and we want to build several rehabilitation games on top of it. Our medical partners report that many patients have to exercise to recover the stability of their legs after certain surgeries. In the rehabilitation scenario, the patient has to stand on one leg only and perform large movements with both arms and the other leg. The task is to develop a simple but engaging game, e.g. catching fruit or kicking a ball, that motivates patients to do more exercise. | DA/MA/BA | ||
Online Error Correction for the Tracking of Laparoscopic Ultrasound In abdominal surgery, laparoscopic ultrasound is widely used for minimally invasive procedures. Because of the missing visual feedback, it is often difficult for surgeons to determine the flexible ultrasound transducer's pose, i.e. its position and orientation. Utilizing instrument tracking techniques for navigation and augmented visualization can therefore provide great benefits for minimally invasive procedures. Electromagnetic systems are the only currently available means to determine the pose of the transducer tip inside the patient. However, the electromagnetic field can be distorted in various ways, leading to erroneous measurements. Different error correction techniques have been developed, but their application to laparoscopic ultrasound is either difficult or requires an additional calibration procedure before each intervention. Additionally, no techniques have yet been proposed for the compensation of dynamic sources of error. In this thesis, two new methods for online error detection and correction for the tracking of flexible laparoscopic ultrasound probes are presented. The first method is hybrid magneto-optic tracking of the ultrasound transducer shaft combined with electromagnetic tracking of the transducer tip; deviations between optical and electromagnetic tracking of the transducer shaft are used to estimate the distortion of the electromagnetic field at the transducer tip. The second and more sophisticated method involves a mathematical model of the movements of the flexible transducer tip. All necessary parameters are computed offline in a distortion-free environment and remain valid until the sensors are repositioned. During an intervention the model is fitted to the measurements of the electromagnetic sensor at the transducer tip. Both methods were rigorously tested in experiments and comprehensively evaluated in comparison to related work. Our results are very promising, and especially the model-based approach improves the current state of the art for both error detection and correction. | Diploma Thesis | Tobias Reichl | |
User-Oriented Data Filter for Environmental Information as an Extension of a Navigation System for Visually Impaired and Blind People | Diploma Thesis | Florian Reitmeir | |
Remote Collaboration for Field Service - Defining Specification and Prototyping - in coop with Siemens CT Service and maintenance of industrial plants and products has become more and more complex in recent years. Quite often, plant subsystems are delivered by different vendors, raising additional problems for fault diagnosis and repair. Therefore, field service technicians must either have a very high and specialized skill level to solve these tasks in a reasonable time and within a reasonable cost frame, or other support technologies must be provided. One method to mitigate these problems is to send universally trained technicians on-site and to support them via so-called Remote Collaboration with expert technicians. The on-site technician would be equipped with a wirelessly connected mobile PC, e.g. a tablet PC or a wearable PC, camera, microphone, etc. The goal is to present the remotely located experts the same "view" on the problem that the on-site technician has. Depending on the tasks to solve, this may require visual/acoustic communication, data/application sharing, etc. There are a number of systems on the market which potentially support this kind of remote collaboration scenario (MS Live Communication Server, Interwise, Arel, etc.). However, these systems rather aim at remote collaboration or video conferencing within an office environment. Therefore, a careful examination of such systems must be carried out to decide which system best fulfils the requirements associated with the scenario described above. For missing features a solution has to be found or implemented. | DA/MA/BA | ||
Maintenance Support using Hybrid Displays As systems become more and more complex, the corresponding maintenance becomes more and more difficult. This diploma thesis describes a maintenance assistance system which simplifies maintenance actions and makes them more fail-safe. In particular, a workflow editor and the runtime engine of the maintenance assistance system are developed. This system divides the information physically into a so-called where-to-act display and a what-to-do display. The where-to-act display provides simple directional information directly on the desired object in the environment and is presented by a laser projection. The additional what-to-do display gives the user the complex instruction. In the end, the system is tested with an application analysis. | Diploma Thesis | Main.AndreasDemmel | |
Recording and Analyzing Tracking Data for a Sheet-Metal Processing Robot In cooperation with the institute for metal forming and casting at the faculty for mechanical engineering, we are participating in a project in which a Kuka robot is used to process sheet metal in a driving machine. The goal of this thesis is to analyze the raw tracking data which is used to teach the robot, synchronize it with the driving machine and simplify it where possible. | DA/MA/BA | ||
Increasing Robustness of Real-Time Edge-Based Tracking In this project, a real-time edge-based visual tracking system is developed. It associates visible edge features of a three-dimensional CAD model with the target object's representation on the two-dimensional image plane; to this end, a video stream is processed. Current algorithms make it possible to estimate the camera pose from the deviation between the image and the model projection. Lie groups and the corresponding Lie algebra are used because of their efficient parameterization and simple representation of the motion computations. A least-squares approach estimates the pose change between two consecutive video frames. Two standard outlier detection algorithms are used to make the system more robust. | Master Thesis | Kenan Bektas | |
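As a small illustration of the Lie-group parameterization mentioned above, the Python sketch below (an assumption of this write-up, not the thesis code) applies one pose update from a 6-vector twist, as it would be produced by the least-squares solver in each frame.
<verbatim>
# Sketch: one Lie-algebra pose update step as used in edge-based tracking. The twist
# xi = (w1, w2, w3, v1, v2, v3) is assumed to come from the per-frame least-squares solver.
import numpy as np
from scipy.linalg import expm

def hat(xi):
    """Map a twist to its 4x4 se(3) matrix (skew-symmetric rotation part plus translation)."""
    w, v = xi[:3], xi[3:]
    W = np.array([[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]])
    T = np.zeros((4, 4))
    T[:3, :3], T[:3, 3] = W, v
    return T

def update_pose(pose, xi):
    """Left-multiply the current 4x4 camera pose by the exponential of the twist."""
    return expm(hat(xi)) @ pose
</verbatim>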
Cartographic metaphor for interaction in an automotive environment The amount and complexity of digitally stored data has grown so large that many innovative interaction systems are being designed to make the data manageable. This thesis develops a display and control concept for large amounts of data using proven metaphors from cartography for automotive use. The visualization is meant to give the driver a comprehensive overview of the data so that the desired information can be obtained with few interaction steps. The developed metaphor is implemented prototypically as an interface for operating a large music archive and evaluated through user tests. The interface presents the music pieces of the archive on a music map, ordered by genres and artists. Using a BMW iDrive controller, the user can scale the map arbitrarily and select regions in order to generate and play a random playlist from the selected area. The evaluation shows that the cartographic metaphor is well understood and that operation is in most cases significantly faster than with a list-based interface. | Master Thesis | Christopher Rölle | |
Design, Implementation and Evaluation of a Shadow Metaphor for Driver Information Systems Due to the constant increase in driver information functions (navigation, audio, telephone, calendar, ...), classic switch-based concepts for operating these functions are no longer practical. Development is moving more and more towards integrated display and control concepts, in which the functions are offered in a display via a menu and manipulated by means of a controller. First systems already in use do allow the operation of a large amount of information, but they overwhelm especially technically inexperienced users while driving. Within this thesis, a concept is to be developed that uses a metaphor to replace the text-based, hierarchical list presentation with a user-centered, strongly reduced menu. Via different light switches, additional functions and information can be shown in the shadows, enabling an individual and intuitive operation of the system. The concept phase includes an analysis of related display systems (mobile phone, PC, PDA) to identify functions that can be offloaded, as well as a prototypical, exemplary implementation based on the audio menu. A concluding evaluation in the form of a user study in a so-called Lane Change Test is to verify the usability and practicality of the concept. | DA/MA/BA | Horst Süggel | |
Predicting the Accuracy of Optical Tracking Systems Several optical tracking systems on the market use infrared-reflective balls as marker targets for six-degree-of-freedom tracking. Examples are the DTrack system by A.R.T. and the Optotrak or Polaris systems by Northern Digital. However, there is currently no way to estimate the quality of a specific marker configuration in advance from the geometric layout of the marker balls alone. The goal of this project is to build a geometrical accuracy and reliability model for such marker configurations and then evaluate the model by empirical measurements. | Diploma Thesis | Michael Schlegel | |
Game Development on the Topic of Information Security | DA/MA/BA | ||
Segmentation of Liver and Stroke Lesions in CT Scans via Phase Field Separation | Master Thesis | ||
Serious Games | DA/MA/BA | ||
Shape Guided Segmentation of cardiac boundaries Prior shape information has been shown to be invaluable for segmenting cardiac boundaries. We develop new methods of exploiting such prior information to guide the segmentation, using machine learning techniques or by formulating the segmentation problem to fit our requirements for segmenting 4D cardiac data. In this project, in collaboration with the German Heart Center in Munich, new methods for accurate segmentation are developed and applied to heart images from different modalities. | DA/MA/BA | ||
Development of a 3D View of the Vehicle Environment Modern vehicles, equipped with various camera systems, can capture the vehicle's surroundings and support the driver in maneuvering the vehicle confidently. An essential component is a suitable presentation of the vehicle environment on the driver's display. In addition to a 2D bird's-eye view, it would be desirable to offer the driver a view of the vehicle from any desired angle. | DA/MA/BA | ||
High Accuracy Tracking for Medical Augmented Reality Augmented Reality is an emerging technology combining virtual reality media with the perception of reality in order to present information in a very intuitive way. This young technology offers a wide range of uses, from supporting professionals in performing complex actions to mere entertainment. One of the major problems is tracking the user's position in order to project virtual objects in the expected place. There are several different tracking techniques with different advantages and drawbacks. In the environment of medical Augmented Reality, a tracking approach is needed that yields high and dependable accuracy and robustness, while a large spatial range may be neglected. For this purpose the RAMP project at Siemens Corporate Research takes advantage of visual inside-out tracking with a single camera. In this thesis it will be shown how these needs can be met. Instead of tracking natural markers in images, artificial landmarks are introduced into the setup. By choosing a very simple shape for these fiducials, the image has less complexity, which enhances the robustness of the algorithms and allows for fast computation; fast calculation in turn allows adding more redundancy for higher accuracy. As a drawback of the simple fiducials, new algorithms had to be created in order to extract implicit information about visibility, partial occlusion and the identity of each fiducial. | DA/MA/BA | Tobias Sielhorst | |
Importance of Gaze Awareness in Augmented Reality Teleconferencing In this thesis I describe a new approach to maintain gaze awareness between users and on a shared workspace, meaning that every user knows the gaze direction and the point the other participants are looking at. | DA/MA/BA | Main.MichaelSiggelkow | |
Development of Silent Diffusion MR Acquisition Schemes with Reduced Distortion The aim of the project is threefold. First, to implement the silent diffusion MR sequence: we plan to work on an echo planar imaging (EPI) sequence with sinusoidal readout and on the zero echo time sequence, which has the advantage of reduced distortion and low acoustic noise compared to traditional EPI. However, controlling the motion-induced phase error in multi-shot sequences is quite complex, which is the major challenge of the current project. The second goal is to develop a proper reconstruction framework for the silent MR sequence. The third goal is to validate the method in volunteer and patient studies as soon as the sequence is developed. | DA/MA/BA | ||
Simultaneous Deformation of Medical Images The registration (spatial alignment) of images is of great interest for medical image processing and general computer vision tasks. Often, not just the registration of two images, but the registration of multiple images is needed. An example is the creation of a medical brain atlas, for which a number of brain volumes have to be aligned. Since human brains vary significantly, a deformable registration or warping between the images is necessary to align them correctly. In the scope of this thesis, we will implement an algorithm for simultaneous deformable registration. | DA/MA/BA | Mehmet Yigitsoy | |
Skull Stripping Method for Glioblastoma scans Master thesis or IDP project | IDP | ||
Speckle tracking for endosonography Endoscopic ultrasound is a routinely used imaging modality for gastrointestinal diagnosis. For various applications it is desirable to track the spatial relation of sequential ultrasound images, i.e. the movement of the ultrasound transducer. Due to the high resolution of the ultrasound images, tracking might be possible using image content, in particular speckle patterns. This work will be done in close collaboration with the gastroenterological department of Klinikum rechts der Isar. If you have questions please contact Tobias Reichl. | DA/MA/BA | ||
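As a starting point for what "tracking using speckle patterns" could look like, the Python sketch below (an illustration under assumed inputs, not the prescribed method) follows a small speckle patch from one frame to the next by normalized cross-correlation; `prev_frame`, `next_frame` and the initial patch location are assumed grayscale inputs.
<verbatim>
# Sketch: frame-to-frame speckle tracking of a small patch via normalized
# cross-correlation; frames and the initial patch location are assumed inputs.
import cv2

def track_patch(prev_frame, next_frame, top_left, size=31):
    """Return the location in next_frame that best matches the patch from prev_frame."""
    y, x = top_left
    template = prev_frame[y:y + size, x:x + size]
    response = cv2.matchTemplate(next_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(response)   # max_loc is (x, y) of the best correlation
    return max_loc[1], max_loc[0]                # convert back to (row, col)
</verbatim>
Out-of-plane motion would require extending this to speckle decorrelation analysis rather than pure in-plane matching.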
Bachelor's Thesis: Development of an application on a multi-touch table for a situation overview in mass casualty incidents Within the research project SpeedUp, an application and a user interface are to be developed that are used jointly by several users to gain an overview of a mass casualty incident and to manage it. Primarily this will be a map with which several users can interact intuitively. Communication with the directors of operations on site (tablet PC) should also be made possible. | Bachelor Thesis | Martin Schanzenbach | |
Confidence visualization and robust inference based on tracking for beta probe surface reconstruction The main problem of current beta probe surface reconstruction is the sparse nature of the data used for it. A proper reconstruction scheme and visualization could assist the surgeon in acquiring measurement points that guarantee a valid reconstruction and thus minimize the risk of false interpretation and wrong therapy planning. This diploma thesis will evaluate strategies to solve this problem by means of grid discretization (rebinning), robust inference, and the determination of a confidence value for every defined grid point. Furthermore, the thesis will investigate a proper visualization method to display the confidence level at grid points together with the robustly reconstructed surface. | Diploma Thesis | Alexander Hartl | |
Detection and 3D Recovery of Stent Grafts in 2D Xray Sequences In the current clinical workflow of endovascular abdominal aortic repairs (EVAR) a stent graft is inserted via an introducer system through one femoral artery into the aneurysmatic aorta under 2D angiographic imaging. Due to the missing depth information in the X-ray visualization, it is highly difficult in particular for junior physicians to place the stent graft in the preoperatively defined position within the aorta. Therefore, methods for accurate stent graft recognition or segmentation in fluoroscopy images are highly required. | DA/MA/BA | Radhika Tibrewal | |
Stereo Vision with Conical and Circular Features The topic of this master thesis is stereo vision with conical (elliptical) and circular features. Stereo vision is a major subject in the field of computer vision. The main task in stereo vision is the 3D reconstruction of a scene given images of the scene taken by two cameras. First, however, the geometric relation between both cameras, represented by the so-called epipolar geometry, needs to be found. The traditional algorithms that estimate the epipolar geometry between two images are based on point features. Ellipses and circles as image features have certain advantages compared to point features: they can be extracted more easily and with higher accuracy from the image, even when they are partially occluded, and they can be identified more easily in multiple images. Such features can be found in many man-made objects, even if point features are completely missing. It is theoretically proven that four conical features in general position are enough for the estimation of the epipolar geometry, but unfortunately not in a linear manner. Innovative ideas for handling this estimation problem linearly for certain specific cases are developed in this thesis. We propose an algorithm which finds groups of ellipses that lie on one and the same plane in space. Based on the results of this algorithm, we develop linear methods for estimating the epipolar geometry. We show that the minimal configuration of three ellipses suffices, given that two of the ellipses lie on one plane. Furthermore, we show that the epipolar geometry can be computed from two circles on one plane. For this we deploy self-calibration methods, which had not been used in combination with binocular stereo vision so far. In the case where the cameras are calibrated (i.e. their parameters such as focal length and relative position are known), we show that two coplanar ellipses or two circles in general position are also sufficient. | DA/MA/BA | Veselin Dikov | |
Bootstrapping of Sensor Networks in Ubiquitous Tracking Environments Augmented Reality (AR) applications fuse the real and the virtual world together. Tracking of real objects is a major part of any AR application because it must know their spatial relationships to interact with the user. Ubiquitous Computing applications try to integrate intelligent devices into the environment. By combining concepts of Ubiquitous Computing and Augmented Reality, tracking technologies can be dynamically integrated into new applications. Ubiquitous Tracking aggregates different trackers into one sensor network, where all types of mobile and stationary sensors contribute to the whole system. This thesis deals with the integration of mobile tracking setups (clients) into stationary or mobile ones. The client itself does not have any initial knowledge of its ambient infrastructure. The focus of this thesis is to find a conceptual, component-based model for dynamically integrating the mobile clients into large-scale sensor networks. In order to accomplish this in a scalable way, the network has to be divided into smaller, self-manageable parts, e.g. by introducing a hierarchical location context. I analyze the requirements for a Ubiquitous Tracking system and give a complete overview of all possible scenarios in which two separated systems get connected at runtime. The assumption is that at least one of them is a mobile setup which has no initial knowledge of the other. By exchanging configuration information, the two systems can (re-)configure their hardware equipment to enable the tracking of each other. If this information does not describe the whole sensor network completely, the sensors can merely track unknown objects. One of the major issues is to recognize and identify these unknown objects by their own motion patterns. By investigating the speed and angular velocity of two objects from two different trackers, static relationships between them can be discovered without an actual measurement by a sensor. The main idea is to find correlations by analyzing the frequencies of those two values. I present results of the investigation of different tracking technologies, such as the A.R.T. tracking system, ARToolKit and Intersense, showing how far these sensors are suitable for the frequency analysis. | Diploma Thesis | Franz Strasser | |
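To make the frequency-analysis idea tangible, the following Python sketch (an assumption of this write-up, not the thesis implementation) compares the speed signals reported by two different trackers in the frequency domain; a high spectral correlation hints that both sensors observe the same rigidly connected object.
<verbatim>
# Sketch: compare the speed signals of two tracked objects in the frequency domain;
# speed_a and speed_b are assumed equally sampled, time-aligned 1D signals.
import numpy as np

def spectral_correlation(speed_a, speed_b):
    """Cosine similarity of the magnitude spectra; 1.0 = identical, ~0 = unrelated motion."""
    spec_a = np.abs(np.fft.rfft(speed_a - np.mean(speed_a)))
    spec_b = np.abs(np.fft.rfft(speed_b - np.mean(speed_b)))
    num = np.dot(spec_a, spec_b)
    den = np.linalg.norm(spec_a) * np.linalg.norm(spec_b) + 1e-12
    return float(num / den)
</verbatim>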
Structured learning with random forests | DA/MA/BA | ||
Integration and evaluation of light- and shadow metaphors for In-vehicle Information Systems | Diploma Thesis | Horst Süggel | |
Deformable Human Shape Matching via Volumetric Representation Recently there has been significant progress in human pose estimation with depth data. Thanks to decision forests, it is now possible to infer correspondences (either sparse or dense) between a surface model and a single depth image. Inspired by these works, in this project we would like to investigate the possibility of inferring correspondences between a surface model and a point cloud or visual hull obtained from a multi-camera environment. | DA/MA/BA | Bibiana do Canto Angonese | |
Automatic workflow retrieval in surgery In a previous paper by Sielhorst/Blum/Navab, a surgeon's hand movement was captured over time and in three dimensions, and several methods for synchronizing the resulting 3D trajectories over time were compared. With the help of the most promising algorithm, called Dynamic Time Warping (DTW), the operating procedures of different surgeons could be compared in an objective manner. In this thesis, the existing DTW-based approach will be extended in order to automatically retrieve the workflow of a whole surgical procedure. The underlying surgery, a laparoscopic cholecystectomy, is a standard procedure with a distinct and systematic sequence of events. Nevertheless, time and workflow variations occur due to the surgeon's experience and the anatomic conditions of the patient. The goal of this work is to create a system which synchronizes the data recorded from multiple operations. With its help, the system should then try to identify key events in the workflow of the operations. In order to achieve this, a set of video and audio signals will be recorded during multiple operations. The synchronization will be performed using a signal indicating which laparoscopic instrument is in use at every moment during the operation. Another challenge is to synchronize several operations, since DTW so far only allows the synchronization of two signals at a time. Training systems, surgeon assistance, workflow optimization inside the OR and an "intelligent" OR are possible scenarios for a successful outcome of this project. | Bachelor Thesis | Ahmad Ahmadi / Ralf Stauder | |
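For reference, a minimal Dynamic Time Warping distance between two 1D signals (e.g. instrument-usage signals from two operations) can be written as below; this is a textbook sketch under assumed inputs, without the windowing or multi-sequence extensions the project would need.
<verbatim>
# Sketch: minimal Dynamic Time Warping distance between two 1D signals.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow match, insertion or deletion, keeping the cheapest warping path.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
</verbatim>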
A Tabletop Display as a Multi-Modal and Multi-User Interface for Collaborative Patient Data Analysis For preoperative planning, the physicians in charge of a particular patient meet in order to discuss the medical case and plan further steps for therapy. Some of the topics discussed during this meeting are based on medical imaging data of the patient, such as previously captured CT or MRI data displayed on monitors. However, the navigation through the stacks of slices of such volumetric data sets is performed by only one physician with standard input devices such as a mouse or keyboard. Usually, several physicians sit or stand behind the computer to examine the patient's anatomy within the region of interest and give commands to the physician controlling the input devices for manipulating the data, such as browsing through the stack forward and backward or adjusting parameters like the contrast or brightness of the grey-scale images. Current user interfaces have several limitations, such as visualization of 2D images only and single-user interaction. The long communication pipeline between the various physicians and the one controlling the slice viewer makes the analysis of imaging data difficult and time-consuming. This thesis aims at the development of a user interface, based on a tabletop system with FTIR technology, for the collaborative visualization and analysis of medical imaging data. Its goal is to improve the collaborative work of a medical team consisting of up to 6-8 persons. A large, horizontal, interactive and multi-touch capable display provides enough space for all participants of the meeting to be positioned around the table. During the collaborative discussion of the patient's imaging data, each of them can simultaneously interact with the data projected on the table surface by simply touching it with his or her fingers. Moreover, the interactive and intuitive user interface is based on two sensor systems: besides a multi-touch sensitive table surface allowing for gesture-based interaction in 2D space through finger motion, the integration of an optical tracking system placed above the tabletop system allows for interaction in 3D space. | Diploma Thesis | Latifa Omary | |
Student assistant position: Development of test software for localization systems Localization systems are used in many areas such as logistics or patient monitoring in hospitals. These systems are based on optical and wireless technologies such as RFID, WLAN or infrared. For testing different localization systems, a software application was developed in C#. This software allows the user to create test series with various technologies and methods. After a test is completed, a test report is generated automatically, containing a summary with all important parameters. In a first step, this software is to be extended with the missing technologies and methods and subsequently developed further. | Hiwi | ||
A Framework for Evaluation of Time Measurement based Tracking Approaches | Diploma Thesis | Stefan Machleidt | |
Data Management for Augmented Reality Applications A critical issue of Augmented Reality applications is the large quantity of data which must be managed in a distributed system. This data must be reliably delivered to services that provide user access to it, and the handled data must be in a consistent state throughout the system. Database systems can be used for parts of the data management to guarantee persistence and efficient handling. This thesis deals with data management for distributed Augmented Reality systems. Its main contribution is a novel approach of dynamic services that give the user transparent access to all necessary data. The developed design has been prototypically implemented and tested within the ProjectArchie application. ProjectArchie is a project to support architectural modeling by Augmented Reality. Through further development, a set of generally usable services can be realized that allows the use of Augmented Reality in new areas. | Diploma Thesis | Marcus Tönnis | |
Digital Tomosynthesis in Phase Contrast X-Ray Imaging | Diploma Thesis | Lorenz König | |
Tracking in Interactive Spaces (in Barcelona, Spain) The task of people tracking in the interactive space XIM is solved by using particle filters, a type of Bayesian estimation, to estimate the humans' states within the environment. The particle filter uses a four-dimensional motion model to predict human behavior. An innovative approach to evaluating images from a ceiling-mounted infrared camera is applied to calculate the probability of each of the particle filter's hypotheses about a human's state. Additionally, a simple concept for detecting newly arriving and leaving visitors using the pressure-sensitive floor is introduced, enabling the system to work completely automatically without manual interaction. | DA/MA/BA | Christian Waechter | |
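As a small illustration of the particle filter loop described above, the Python sketch below (an assumption of this write-up, not the project code) performs one predict-weight-resample step for a single person with a four-dimensional state [x, y, vx, vy]; the `likelihood` function stands in for the infrared-image evaluation and is a placeholder.
<verbatim>
# Sketch: one SIR particle filter step for 2D person tracking with a constant-velocity
# motion model; `likelihood` is an assumed placeholder for the camera-image evaluation.
import numpy as np

def step(particles, weights, likelihood, dt=0.04, noise=0.05):
    """particles: (N, 4) array of [x, y, vx, vy]; weights: (N,); returns the new set."""
    particles = particles.copy()
    # Predict with a constant-velocity motion model plus process noise.
    particles[:, 0:2] += particles[:, 2:4] * dt
    particles += np.random.randn(*particles.shape) * noise
    # Weight each hypothesis by how well it explains the current camera image.
    weights = weights * np.array([likelihood(p) for p in particles])
    s = weights.sum()
    weights = weights / s if s > 0 else np.full(len(particles), 1.0 / len(particles))
    # Resample proportionally to the weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
</verbatim>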
Vision-based Tracking of Multiple Persons in an Emergency Room The Institute for Emergency Medicine and Management in Medicine (INM) offers facilities to perform professional trainings for emergency personnel. A fully equipped emergency room allows various critical situations to be simulated and the performance of emergency personnel to be recorded by means of multiple synchronized modalities, such as digital patient data streams, video cameras, and microphones. The goal of this Master's thesis is to implement a vision-based tracking system that follows the positions of multiple persons within a room. The desired output is 2D positional information for each person with respect to the room coordinate frame that can be visualized in the existing interactive virtual floor plan. The video cameras present at the INM are to be used as input. | DA/MA/BA | ||
Design of an Intra-Operative Augmented Reality Navigation Tool for Robotically Assisted Minimally Invasive Cardiovascular Surgery Minimally invasive or totally endoscopic cardiac surgery is an operation technique in which physicians operate through small incision points at the cardiac region, in contrast to open chest surgery. A master-slave telemanipulator system, like the da Vinci, allows the insertion of very small instruments and imitates the physicians' hand and finger movements from their remote console. In order to perform a precise operation, these incision points and their orientation need to be placed in such a way that there will be no conflicts. Thus, a planning tool in combination with intra-operative navigation support for the port placement is required; the intra-operative navigation is implemented in this work. Current scenarios lack the presentation of preoperatively acquired image data, such as computed tomography images, so these image data have to be registered in three dimensions with the patient's thorax. Augmented reality technology enables the projection of the previously planned optimal pose for the port placement on top of the view of the real world. The proposed application will provide physicians with a useful visualization interface to see inside the patient. A registration procedure is required in order to accurately align the modalities, both real and virtual, in the spatial domain. Therefore, a point-based algorithm using fiducials is applied to compute the transformation. These external markers are placed on the patient's skin and are visible both in the computed tomography scan and in the tracking environment in the operating theater. The developed prototype superimposes the planned port data on top of a video frame of the real world after spatial registration. Additionally, a tool was implemented that shows the tracked telemanipulator arms in the virtual model of the patient during the entire operation. | Diploma Thesis | Jörg Traub | |
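The point-based registration step mentioned above is commonly solved in closed form with the SVD-based method for corresponding point sets; the Python sketch below illustrates that standard solution under assumed inputs (corresponding fiducial coordinates from CT and from the tracking system), without claiming it is the exact implementation used in the thesis.
<verbatim>
# Sketch: closed-form rigid registration (rotation R, translation t) between corresponding
# fiducial point sets, e.g. CT marker positions (src) and tracked marker positions (dst).
import numpy as np

def rigid_registration(src, dst):
    """src, dst: (N, 3) arrays of corresponding points; returns R (3x3) and t (3,)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance of the centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
</verbatim>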
Game Development on the Topic of Dream Worlds / Lucid Dreaming | DA/MA/BA | ||
Design of a user evaluation of two travel metaphors in a VR environment The goal is to design a user evaluation that compares the two implemented travel metaphors, Phone-Based Motion Control vs. Steering and Driving, both quantitatively and qualitatively. To do that, both metaphors have to be integrated: an Android phone has to be used for both metaphors (instead of the tablet PC in the second metaphor) and ART tracking has to be used in both metaphors (instead of the orientation sensor in the first metaphor). | Bachelor Thesis | Anastas Tanushev | |
Tumor Induced Brain Deformations | Master Thesis | ||
Tumor segmentation based on dynamic PET measurement Modern cancer therapy has extended the frontline to tumor substructures. Molecular imaging makes it possible to identify the underlying biological features of malignant lesions. As an important clinical imaging modality, positron emission tomography (PET) reveals metabolic and pathological properties. Dynamic PET measurement has the potential to reveal additional information based on the kinetic differences within the tumor microenvironment; this information could be used to assist cancer diagnosis, therapy planning and so on. After injection, radio-labeled tracers undergo different delivery, metabolism and clearance in patients. The varying shapes of the time activity curves (TACs), i.e. the responses over time detected by PET, indicate different pathological characteristics of the targets. The goal of this project is to classify substructures of solid tumors based on the quantitative and morphological features of the TACs. By trying different algorithms and criteria it is possible to obtain clusters of different tumor features, such as hypoxia, necrosis and so on. The clusters could be validated with other imaging modalities and, further, with microscopic images. | Master Thesis | ||
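As one possible starting point for the clustering step, the Python sketch below groups voxel-wise TACs with k-means (an assumed baseline, not the algorithm prescribed by the project); `tacs` is an assumed (n_voxels, n_time_frames) array extracted from the dynamic PET scan.
<verbatim>
# Sketch: grouping voxel-wise time activity curves (TACs) into substructures with k-means;
# `tacs` is an assumed (n_voxels, n_time_frames) array from the dynamic PET acquisition.
import numpy as np
from sklearn.cluster import KMeans

def cluster_tacs(tacs, n_clusters=3):
    # Normalize each curve so that clustering reflects curve shape rather than absolute uptake.
    norm = tacs / (np.linalg.norm(tacs, axis=1, keepdims=True) + 1e-8)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(norm)
    return labels   # one substructure label per voxel
</verbatim>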
Bachelor's/Master's/Diploma Thesis: Development of a user interface on a tablet PC to support emergency forces (rescue services) in mass casualty incidents Within the research project SpeedUp, a user interface is to be developed that fulfills, among other things, the special requirements of so-called mass casualty incidents. A tablet PC is to be used as the platform; it is to be held with both hands and at the same time be fully operable. Solutions are to be worked out that allow the user to operate the software intuitively and efficiently from the edge of the tablet PC. | DA/MA/BA | ||
Bachelor's Thesis: User interface for digital forms Within the research project SpeedUp, a user interface is to be developed that fulfills, among other things, the special requirements of so-called mass casualty incidents. A tablet PC (touchscreen) is to be used as the platform. Different data, for example about injured persons or rescue forces, are to be entered solely via the touchscreen surface of the tablet PC, so that even in time-critical, unstable, stressful situations the emergency forces are able to fill out the forms directly on the tablet PC and forward data from the scene to the director of operations. | Diploma Thesis | Christian | |
Gestyboard Backtouch 2.0: Continuation of an existing text input concept on the back of an Android tablet PC Within the research project SpeedUp, a user interface is to be developed that fulfills, among other things, the special requirements of so-called mass casualty incidents. A tablet PC (touchscreen) is to be used as the platform. Different data, for example about injured persons or rescue forces, are to be entered solely via the touchscreen surface of the tablet PC, so that even in time-critical, unstable, stressful situations the emergency forces are able to fill out the forms directly on the tablet PC and forward data from the scene to the director of operations. | Project | Christoph Bruns | |
Super Resolution Ultrasound Ultrasound imaging systems are more accessible, mobile and inexpensive compared to other imaging techniques. Furthermore, their real-time image formation, low cost and non-invasive nature make them very attractive. However, due to the inherent image formation process of ultrasound, the generated images are view-dependent and subject to noise. In particular, the dominant noise, referred to as speckle, decreases the resolution of the ultrasound image, which is why it is typically quite difficult to directly apply common image processing methods to ultrasound imagery. By recording a sequence of ultrasound images, however, an enhanced image can be generated. This is of particular interest in the domain of transcranial ultrasound (ultrasound images of the brain acquired through the temporal bone window), where the image quality is poor due to the transmission through the skull. | DA/MA/BA | ||
Implementation of a virtual practice tutor for music learners This thesis is in cooperation with a psychology student of LMU Munich. The project results in an evaluation of the pedagogical concept and its prototypical technical implementation. This evaluation is scheduled for May 2011; until then the prototype should be ready for trials with actual learners. The platform can be chosen freely; the student can use Android, Java, C#, Web 2.0 or other technologies according to his or her preferences. | DA/MA/BA | See subprojects | |
Automatic Feature Computation for Endoscopic Image Classification This work addresses the recognition of phases in minimally invasive surgeries. These phases are sequential in time and should be recognized using observations which are acquired automatically. The endoscopic view of the surgeon provides information about the surgical field, and this work focuses on the recognition of surgical phases using image features computed from these endoscopic images. Selecting efficient features is a complex task, and little literature exists about features that discriminate surgical phases in endoscopic views. In this work, a new Genetic Programming approach is proposed to automate the search for efficient features. A feature is modeled as a program, and these programs are evolved to improve the recognition rate. For the representation of the programs, a programming language specialized in the computation of image features was defined. Programs of this language are evaluated by executing them on a virtual machine with labeled sample images as input. Once the programs are evaluated and assigned a so-called fitness, the best programs are selected as parents for a slightly changed and probably improved new generation of programs. Finally, the resulting features are compared with several standard image features to show their performance in distinguishing between two phases using an image. With a selection of the best features a multi-class classifier is built; it is compared with an earlier approach based on a neural network fed with a set of standard image features. | Diploma Thesis | Uli Klank | |
Ultrasound Imaging A variety of student projects in the area of ultrasound imaging. The projects touch different areas of computer science and engineering. | DA/MA/BA | ||
2D-3D ultrasound registration for tracking of endosonography | DA/MA/BA | ||
Ultrasound Registration for Brain Shift Analysis Using a Cryo-Gel Phantom Planning of a trajectory in neurosurgery (e.g. for biopsy or Deep Brain Stimulation, DBS) is a delicate task. Typically, a path is defined by analyzing an MRI image of the patient and setting an entry and a target point (in the case of a linear trajectory). The resulting linear path typically passes critical areas such as blood vessels or functional areas in very close proximity (e.g. <5mm). During surgery, after opening the skull and the protective dura mater of the brain, a pressure change and loss of cerebro-spinal fluid (CSF) can lead to a tissue shift of the brain, the so-called brain shift. Although this shift can be very small, e.g. 2mm in the deep brain areas, even such small changes can significantly deteriorate the reliability of the pre-operative plan. In this thesis, we investigate how to use ultrasound for assessment and compensation of brain shift. | DA/MA/BA | ||
Phase Aberration Compensation for Transcranial Ultrasound Ultrasound is a non-invasive imaging technique that can be easily and cheaply integrated into the OR environment. We plan to use ultrasound for neurosurgical interventions. Transcranial ultrasound means imaging the brain through the skull of the patient (i.e. through the temple, in German "Schläfe"). This technique is especially appealing because it is entirely non-invasive and does not even require sterilization. However, scanning through human bone modifies the ultrasound waves by changing their phase, hence the name "phase aberration". Recent literature suggests that this phase aberration can be corrected. For that, we track the ultrasound probe optically and derive the skull thickness at the scanned position from a previous CT or MRI image. | DA/MA/BA | ||
Development of a Room Information System for the TUM University Library (Web 2.0) As part of the renewal of the university library's web presence (http://www.biblio.tu-muenchen.de/), a room information system is to be developed. It enables the location of holdings to be displayed directly from the library catalogue: the location of, for example, a book is shown on a map. | DA/MA/BA | ||
Development of an Augmented Reality Application on the basis of Ubitrack and a 3D Game Engine Modern game engines offer state-of-the-art graphics along with an easy-to-learn high-level user interface. In combination with Ubitrack, these engines might become powerful tools for the development of augmented reality applications. This thesis is about integrating Ubitrack into a 3D game engine and creating an example application. The example application is a game inspired by Farmville, where the user can plant and harvest crops in an augmented reality environment. For the visualization a video see-through HMD is used, and the tracking is realized using flat markers. Therefore, in order to connect the game engine with Ubitrack, the following tasks have to be completed: showing the camera video, adjusting the virtual camera settings, and calculating the camera position. The last chapter contains a discussion of the results and possible improvements. | Bachelor Thesis | Konrad Pustka | |
Upper Body Tracking using a Time of Flight Camera for a Magic Mirror | DA/MA/BA | ||
User Distraction through Interaction with a Wall-sized Display Wall-sized displays are finding their place in everyday applications in offices and command and control centers. They benefit from the capability to display more information, and they promote collaboration and exploration better than common single- and double-monitor setups. The development of suitable interaction techniques is lagging behind, because standards, such as mouse and keyboard for a desktop PC, are yet to be defined. With advances in spatial recognition and reconstruction, research is shifting from simple pointing to spatial interaction. Researchers try to determine the most suitable interaction techniques for different tasks on such a screen. A technique must not only perform fast, but also allow the user to perform tasks efficiently without requiring the user to focus completely and solely on the interaction. In this thesis we examine the applicability of different techniques for drag and scale interaction on a wall-sized display with respect to speed and attention requirements. We designed a test that allowed us to observe such use. Through a test series with ten participants we determined which techniques are best suited for this task. We discuss the errors that emerged with respect to repetitions and misses and offer an explanation of their possible causes. | Master Thesis | Alexander Plopski | |
Tracking of Road Users in Multi-Sensor Systems The BMW Group offers you an exciting topic for your diploma thesis in the Connected Drive projects area. The topic is the tracking of road users in multi-sensor systems. The goal of this thesis is to investigate the potential of a novel approach (particle filter) for tracking road users in multi-sensor systems. To this end, an existing driving-environment perception system, which is based on a conventional Kalman filter, is to be extended by a particle filter. The final evaluation of the investigated and implemented method will take place in a BMW test vehicle. The main parts of the work are: implementation and integration of a particle filter into an existing multi-sensor system; evaluation and comparison of the implemented method with a conventional Kalman filter. | DA/MA/BA | ||
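As a rough illustration of the particle filter approach mentioned in the topic above, here is a minimal sketch of one predict/update/resample cycle for tracking a single road user with a constant-velocity model. The state layout, noise levels and measurement model are illustrative assumptions only, not part of the existing BMW perception system.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, dt=0.1,
                         process_std=0.5, meas_std=1.0):
    """One predict/update/resample cycle for a constant-velocity model.

    particles: (N, 4) array with rows [x, y, vx, vy] (illustrative state layout)
    weights:   (N,) normalized importance weights
    measurement: (2,) noisy position observation [x, y]
    """
    n = len(particles)

    # Predict: propagate each particle with the motion model plus process noise.
    particles[:, 0:2] += particles[:, 2:4] * dt
    particles += np.random.normal(0.0, process_std, particles.shape)

    # Update: re-weight particles by the likelihood of the measurement.
    dist = np.linalg.norm(particles[:, 0:2] - measurement, axis=1)
    weights = weights * np.exp(-0.5 * (dist / meas_std) ** 2)
    weights += 1e-300                      # avoid numerical underflow
    weights /= weights.sum()

    # Resample (systematic) when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        positions = (np.arange(n) + np.random.uniform()) / n
        indices = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[indices]
        weights = np.full(n, 1.0 / n)

    estimate = np.average(particles[:, 0:2], axis=0, weights=weights)
    return particles, weights, estimate

# Usage: initialize particles around a first detection and feed in measurements.
rng = np.random.default_rng(0)
particles = np.zeros((500, 4))
particles[:, 0:2] = rng.normal([10.0, 5.0], 2.0, (500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights, est = particle_filter_step(particles, weights, np.array([10.4, 5.2]))
```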
Detection of Road Users in Video Images and Radar BMW Group Forschung und Technik GmbH offers you an exciting topic for your diploma thesis in the Connected Drive projects area. The topic is the detection of pedestrians and cyclists using imaging and range-measuring sensors. The goal of the thesis is to research suitable algorithms for image analysis and classification, to implement them prototypically, and to compare them. The main parts of the work are: research and prototypical implementation of two image-based methods for the detection and classification of pedestrians and cyclists, taking the radar-based attention steering into account; evaluation and comparison of the implemented approaches as well as their integration into an existing live environment. | DA/MA/BA | ||
Advanced In-Situ Visualization for Vertebroplasty This bachelor thesis concerns the process of bringing our existing medical augmented reality system from the research stage to its first practical use in a real intervention. In today's medicine, more and more operations are performed using minimally invasive or keyhole surgery. In this technique, all surgical instruments are inserted through a tiny cut in the patient's skin. As a consequence, the operating surgeons heavily rely on navigation techniques to compensate for the missing direct view of the operation site. This means that a surgeon now has to operate in two locations at the same time: the real operation site and the display of the navigation system that shows him what he is actually doing. This dilemma was the starting point for the navigation and augmented reality visualization (NARVIS) project at the chair for computer aided medical procedures and augmented reality (CAMPAR). The core technology of this project is a medical AR system, consisting of a video see-through head-mounted display (HMD), an optical tracking system and the corresponding software framework, which allows one to visualize anatomical structures in situ. In-situ visualization means that a surgeon wearing our HMD sees the virtual anatomical structures exactly where their real counterparts are located. This enables him to operate without constantly switching his focus between the actual operation site and the display of the navigation system, and allows a very natural operation procedure with all the advantages that modern navigation systems offer. Although much research has been invested in the possibilities and challenges of such a medical augmented reality system, there has never been a project that tried to compile all the results and tailor the system to one specific operation. Based on a suggestion from one of our cooperating surgeons at the Klinikum Innenstadt, the vertebroplasty intervention was chosen as this first target operation. In my work, I analyzed the vertebroplasty procedure and the existing AR system. Then, I formulated a research vision of how our technology could be used in this operation. Based on the research vision, I further analyzed which research and development steps needed to be taken. After developing this outline, I started my research and development work in the most important or unexplored areas. To evaluate and further analyze the operation and the progress of the AR system, my supervisor and I conducted an experiment at the beginning and at the end of my thesis work. Furthermore, I surveyed surgeons from the Klinikum Innenstadt to measure their acceptance of the system, define development priorities, and explore new possibilities for our technology. | Bachelor Thesis | Arno Scherhorn | |
Object Detection for Industrial Inspection The project deals with the implementation of an algorithm for checking the presence/absence of circular flat objects stacked on top of one another in a box. The objects are separated by soft pads which can deform and also have to be detected. The system has to return the 3D pick-up location of the objects. A Kinect-like industrial RGB-D camera will be used, and development will be based on the HALCON library. | Hiwi | ||
Diploma Thesis: Development of Virtual Patients for the Evaluation of Different Triage Systems At mass casualty incidents (MCI), relief units often face a large number of injured persons. Triage systems are required to sort and treat the victims. In preparation for emergencies, paramedics and physicians have to be trained in the application of these triage systems during special disaster control exercises at which actors pose as injured persons. In this diploma thesis, a virtual patient is to be designed and developed that can be diagnosed with various triage systems. The focus of the project is on designing an intuitive user interface providing all essential interactions with the virtual patient. In addition to large-scale disaster control exercises, the virtual patient allows further triage practice. | Diploma Thesis | Andreas Dollinger | |
Model-based Segmentation of Abdominal Aortic Stent Grafts in 2D and 3D Interventional Images Automatic detection of the stent graft in interventional 2D and 3D image data is the essential basis to achieve these goals and, thus, constitutes the main concern of this diploma thesis. The use of model-based rigid and deformable registration techniques for stent graft extraction from medical images is evaluated along with the necessary preprocessing steps to find a proper initialization for the registration algorithm. The latter includes dominant local direction analysis using steerable filters as well as the use of distance maps to improve the registration results. The results are evaluated using synthetic test data and real patient data. | DA/MA/BA | Markus Urban | |
Interactive Volume Exploration in VR using Unreal Engine Volume rendering and exploration is of high importance in the medical context, where many different modalities produce high-resolution volumetric data. Even with high-quality volume rendering, exploration of these data sets is still challenging, due to the limited 3D perception on a screen and challenging interaction with the volume. Unreal Engine provides out-of-the-box support for rendering to VR headsets, easing the technical challenges associated with serving these display devices. The goal of this project is to evaluate the suitability of Unreal Engine 4 with an HTC Vive VR headset for volume rendering and volume exploration. The first step towards such a system is to implement a method for loading medical volumes into a format suitable for Unreal Engine. In a next step, a recently published plugin for volume rendering [1] shall be extended to support classical cutting planes and general transfer functions. Depending on the outcome of the previous steps, another goal is to potentially propose new or adapted interaction methods leveraging the Vive controllers to support volume slicing, annotations, and/or interactive editing of visualization parameters. [1] https://forums.unrealengine.com/showthread.php?143716 | Bachelor Thesis | Thomas Gamper | |
Implementation and Evaluation of an innovative multitouch keyboard based on gestures This thesis is about implementing a multitouch keyboard based on gestures which can be used with 10 fingers at the same time. A well-known problem in this field is the lack of haptic feedback, which is the reason why the performance of a virtual keyboard is much lower than that of a real keyboard. The goal of this thesis is to find out whether users can be as performant with the new innovative multitouch keyboard as with a real keyboard. Therefore, different alternatives of the same concept should be implemented and evaluated. A concept is already defined, but students are welcome to bring their own ideas into the project. | Hiwi | Lorenzo Piritano | |
Gestyboard 3.0: Continuation and Evaluation of an Innovative Text Input Technique In this thesis, optimizations of an innovative multitouch keyboard are to be designed, implemented, and evaluated. The keyboard is intended to solve the problem of software keyboards (touchscreen keyboards) that arises from the lack of haptic feedback. Our innovative keyboard, whose basic concept is already in place, is meant to enable blind typing. It is essentially based on intelligent gestures and the simultaneous use of all ten fingers. For further information, e-mail coskun@in.tum.de. | DA/MA/BA | Thomas Faltermeier | |
Gestyboard 3.0: Long-term Evaluation In this thesis, an existing novel multitouch keyboard (Gestyboard 3.0) is to be evaluated in a long-term study with multiple sessions and compared with the conventional touchscreen keyboard as well as the classical hardware keyboard. | DA/MA/BA | Phillip Schmitt | |
Gestyboard 4.0: User-centered iterative development of an innovative 10-finger-system-based text input concept for touchscreens In this thesis, the innovative 10-finger-based touchscreen keyboard goes into its fourth iteration. To overcome the lack of tactile feedback, we use unique gesture-to-key mappings for each finger according to the ten-finger touch-typing method. As a key feature, the Gestyboard only accepts keystrokes when they are performed with the finger corresponding to the ten-finger touch-typing method. This way, missing a keystroke is not possible, and therefore blind typing is naturally supported by the concept. The goal of the thesis is to implement several new input concepts and to conduct a user-centered evaluation to obtain first tendencies for the new input concepts. | Master Thesis | ||
Kinect-based Multitouch Keyboard In this thesis, a virtual keyboard is to be developed that works with Microsoft's Kinect. The detection of the individual fingers and their touch points is to be realized with the Kinect. The student can contribute creatively and work out solutions independently. If interested, please e-mail coskun@in.tum.de. | Bachelor Thesis | Johannes Roith | |
Three-Dimensional Ultrasound Mosaicing The creation of 2D ultrasound mosaics is becoming a common clinical practice with a high clinical value. The next step, coming along with the increasing availability of 2D array transducers, is the creation of 3D mosaics. In the literature on ultrasound registration, the alignment of multiple images has not yet been addressed. Therefore, we propose registration strategies which are able to cope with the problems arising in multiple image alignment. We use pair-wise registration with a consecutive Lie normalization, and simultaneous registration, which requires the use of multivariate similarity measures. In this thesis, we propose alternative multivariate extensions based on a maximum likelihood framework. Due to the higher computational cost of simultaneous registration, we describe possibilities for speeding it up, among others the use of a stochastic pyramid. We also present methods to reduce speckle noise and to detect shadows in ultrasonographic volumes, to improve the overall registration performance. For the compounding of the final volume, a variety of approaches are listed, ranging from general to advanced ones. Experimental results on multiple ultrasound data sets show the good performance of the proposed registration strategies and similarity measures. | Diploma Thesis | Christian Wachinger | |
Recognition of dynamic gestures considering multimodal context information Particularly in the automotive environment, where standard input devices such as the mouse and keyboard are impractical, gesture recognition holds the promise of making man-machine interaction more natural, intuitive and safe [5]. But especially in a dynamic environment like the car, vision-based classification of gestures is a challenging problem. This thesis compares a probabilistic and a rule-based approach to classify 17 different hand gestures in an automotive environment and proposes new methods for integrating external sensor information into the recognition process. In the first part of the thesis, different techniques for extracting the hand region from the video stream are presented and compared with regard to robustness and performance. The second part of the thesis compares an HMM-based approach by Morguet [1] and a hierarchical approach by Mammen [2] for recognizing gestures, and the integration of external context knowledge into the classification process. The final system achieves person-independent recognition rates of 86 percent in the desktop and 76 percent in the automotive environment. | Diploma Thesis | Leonhard Walchshaeusl | |
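For orientation, a minimal sketch of the HMM-based classification idea compared in this thesis: one HMM is trained per gesture class, and an observed feature sequence is assigned to the class with the highest log-likelihood. The use of the hmmlearn library, the feature dimensionality and the number of states are assumptions for illustration, not the original implementation by Morguet.

```python
import numpy as np
from hmmlearn import hmm

def train_gesture_models(sequences_per_class, n_states=5):
    """sequences_per_class: dict mapping class name -> list of (T_i, D) feature arrays."""
    models = {}
    for name, seqs in sequences_per_class.items():
        X = np.concatenate(seqs)               # stack all frames of this class
        lengths = [len(s) for s in seqs]       # sequence boundaries for hmmlearn
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, sequence):
    """Return the class whose HMM assigns the highest log-likelihood to the sequence."""
    scores = {name: m.score(sequence) for name, m in models.items()}
    return max(scores, key=scores.get)

# Usage with random stand-in data (replace with real hand-trajectory features).
rng = np.random.default_rng(0)
data = {"swipe_left": [rng.normal(0, 1, (30, 6)) for _ in range(10)],
        "swipe_right": [rng.normal(1, 1, (30, 6)) for _ in range(10)]}
models = train_gesture_models(data)
print(classify(models, rng.normal(1, 1, (25, 6))))
```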
Weakly supervised semantic segmentation via multiple hypothesis prediction During the past few years, remarkable progress has been made in the field of semantic image segmentation, thanks to advances in deep learning and CNNs. However, it remains challenging to reach high accuracy using training data with image-level annotations only, i.e. in a weakly supervised fashion. In this project, we address the problem of semantic segmentation from image-level labels together with a small percentage of pixel-wise labeled images. Our approach will build on recent work in estimating class-specific saliency maps [1]. Our goal is to further enhance its training pipeline, introduce a multiple hypothesis prediction approach [2] for improving the quality of the semantic outputs, and additionally explore its usability for instance segmentation. [1] Shimoda, Wataru, and Keiji Yanai. "Distinct class-specific saliency maps for weakly supervised semantic segmentation." In European Conference on Computer Vision, pp. 218-234. Springer, Cham, 2016. [2] Rupprecht, Christian, Iro Laina, Robert DiPietro, Maximilian Baust, Federico Tombari, Nassir Navab, and Gregory D. Hager. "Learning in an uncertain world: Representing ambiguity through multiple hypotheses." In International Conference on Computer Vision (ICCV). 2017. | DA/MA/BA | Marat Seroglazov | |
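To make the multiple hypothesis prediction idea of [2] concrete, here is a small sketch of a relaxed winner-takes-all loss in PyTorch, written for a per-pixel classification setting; the tensor shapes and the value of epsilon are illustrative assumptions rather than the exact formulation to be used in the project.

```python
import torch
import torch.nn.functional as F

def relaxed_wta_loss(hypotheses, target, eps=0.05):
    """hypotheses: (B, M, C, H, W) logits for M hypotheses; target: (B, H, W) class labels."""
    b, m = hypotheses.shape[:2]
    # Per-hypothesis loss, kept separate per sample: shape (B, M).
    losses = torch.stack([
        F.cross_entropy(hypotheses[:, j], target, reduction="none").mean(dim=(1, 2))
        for j in range(m)
    ], dim=1)
    best = losses.argmin(dim=1)                            # winning hypothesis per sample
    weights = torch.full_like(losses, eps)                 # small weight for every hypothesis
    weights[torch.arange(b), best] = 1.0 - eps * (m - 1)   # most weight on the winner
    return (weights * losses).sum(dim=1).mean()

# Usage with random stand-in predictions (B=2 samples, M=3 hypotheses, 4 classes, 8x8 pixels).
logits = torch.randn(2, 3, 4, 8, 8, requires_grad=True)
labels = torch.randint(0, 4, (2, 8, 8))
loss = relaxed_wta_loss(logits, labels)
loss.backward()
```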
Diploma Thesis: Development of complex applications in a plugin-free web browser environment Every PC, all smartphones, and newer game consoles ship with a web browser including an AJAX-capable JavaScript interpreter. Dynamic web pages can therefore be displayed and asynchronous server requests issued on all of these devices. This diploma thesis demonstrates that complex applications and games can be developed in this restricted environment, and investigates whether all common metaphors from the desktop environment familiar to users can also be implemented in the plugin-free web browser environment. A framework developed as part of this work, which enables efficient GUI development in this environment, is discussed. The possible server-side techniques are briefly presented. | Diploma Thesis | Philipp Kemmeter | |
Implementation and Validation of Intensity Based 3D-2D Registration Algorithms for Radiation Therapy One of the main applications for computers in medicine is to digitally merge patient data that originates from different sources. For our 2D-3D registration problem, this means finding the right spatial alignment of a three-dimensional Computed Tomography (CT) data set and one or two X-ray images from the same patient. An automatic algorithm has to create a digitally reconstructed radiograph (DRR) for a specific orientation in the CT data set, compare the result with the X-ray image, alter the position and orientation of the DRR, and repeat the process until the optimal alignment is reached. A key issue is to assess the alignment of these two images in every step. We present a comprehensive examination of all intensity-based similarity measures known in the literature, and use them with a number of different optimization algorithms. We developed a prototypic application for both manual and automatic registration. The performance is evaluated for different data sets, and validation is done using marker-based ground truth information. The results show that we are able to do very precise and fully automatic registration within a couple of minutes. We propose to use this technique to find the correct patient position on the treatment couch for radiation therapy. This would replace the common, often very inconvenient immobilization methods. | Diploma Thesis | Wolfgang Wein | |
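A compact sketch of the intensity-based registration loop described above: a DRR renderer is wrapped into a cost function based on normalized cross-correlation, which a derivative-free optimizer minimizes over the six rigid pose parameters. Here render_drr is a hypothetical placeholder for the DRR generator, and NCC plus Powell's method stand in for the many similarity measures and optimizers examined in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two images (higher means more similar)."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register(ct_volume, xray, render_drr, pose0=np.zeros(6)):
    """pose = [tx, ty, tz, rx, ry, rz]; render_drr(ct_volume, pose) -> 2D image (placeholder)."""
    def cost(pose):
        drr = render_drr(ct_volume, pose)
        return -ncc(drr, xray)             # minimize the negative similarity
    result = minimize(cost, pose0, method="Powell",
                      options={"xtol": 1e-3, "maxiter": 200})
    return result.x, -result.fun           # estimated pose and final NCC value
```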
Wide-Angle-Based Tracking for Portable See-Through Displays | DA/MA/BA | ||
Development of a Widget for Controlling a Map on a Multitouch Table A widget for panning, zooming, and rotating a map is to be developed. A first version of the widget already exists (see image). Further alternatives are to be developed and evaluated. This widget represents an alternative to gesture interaction on a multitouch table. | DA/MA/BA | ||
Contactless control of a minimally invasive surgical robot system Minimally invasive surgery is a special surgical technique in which robot systems can be used for assistance. As an example, the robot arms can be controlled via special input devices that are situated at a remote console. In this case, the surgeon is seated at the console and commands the desired poses to the robot system. The robot arms then carry out the movements given by the surgeon. Such a robot system is being developed at the Institute of Robotics and Mechatronics of the German Aerospace Center. Two different user interfaces are intended for this system. One solution is to use haptic input devices with force feedback to control the robot arms and to give feedback about occurring forces. The other solution does not use haptic input devices and is therefore called contactless. In this case, occurring forces are visualized in the endoscopic video using augmented reality techniques. The goal of this diploma thesis is to implement the control for the contactless user interface. Standard surgical grippers are used as input devices. The grippers are equipped with passive marker spheres of an optical tracking system to make the detection of the instruments' movements possible. Pose and opening angle of the grippers are computed from the tracking data. After some processing, the data is transmitted to the robot system. | Diploma Thesis | Miriam Wiegand | |
Bachelor Thesis: Controlling a 3D Application with a Wii Controller A 3D visualization is to be controlled intuitively with the help of a Wii remote. The particle system of the Computer Graphics chair could be used as the 3D visualization software. | DA/MA/BA | ||
Hand Tracking with the Wiimote The past few years have seen many developments in computing move from the realm of research - even the realm of imagination and movies - into the hands of end users. However, two of the innovations that captured the imagination of both users and developers worldwide in the past few years were multi-touch surfaces and gesture-controlled devices. This paper addresses a project, carried out at the Fachgebiet Augmented Reality at the Technische Universität München (TUM), which aimed at combining these two technologies in one single software library. The tisch library, created by Florian Echtler at TUM, is aimed at keeping developers of multi-touch software focused on the software development by providing an Application Programming Interface (API) that handles all low-level processing and hardware communication. The aim of the project was to integrate the use of the Wii Remote into this library. Two methods were devised for the integration of the Wii Remote with multi-touch surfaces. The first uses in-air hand gestures and the Wii Remote's infrared (IR) camera capabilities, while the other integrated the Wii Remote's normal hand-held use into the library. It was concluded that, while the integration of the Wii Remote into multi-touch hardware was not feasible due to its hardware limitations, its addition to the software library extended its use from multi-touch surfaces to include vertical displays that can be used alongside the multi-touch hardware it was originally designed for. | Bachelor Thesis | Amir Beshay | |
Design, Implementation and Evaluation of a Workflow System for Augmented Reality Applications | DA/MA/BA | Stefan Misslinger | |
Workflow analysis for tumor resection procedures and evaluation of potential applications for radio-guided surgery Cancer is the second main cause of death in Europe after circulatory diseases. Malignant neoplasms are responsible for approximately a fourth of all deaths, accounting for 28.5% of male deaths and 22.0% of female deaths. The early detection of cancer and the optimization of treatment is the best way to improve the length and the quality of life of several million European patients every year. Surgery is nowadays still the best way to completely remove tumor tissue. New medical methods and instruments should thus be developed with the aim of helping the surgeon to complete the operation in a more effective and less invasive way. In the last decades, an alternative method to the conventional operation has been developed to detect the tumor and control the tumor ground and margin by including so-called nuclear probes. Their application in what is called radio-guided surgery commonly follows the injection of a radiotracer into the patient. This radiotracer accumulates in areas of particular interest and can thus be located in the operating room using said nuclear probes. These probes are a relatively new tool in cancer resection. Of particular interest are high-energy gamma probes and beta probes, as they are highly suited for use with new positron emission tomography tracers. This study is a theoretical examination of the feasibility of using such a hand-held probe in urological cancer surgery to intra-operatively differentiate normal from tumor-bearing tissue. Based on detailed analysis and observations of prostatectomy, TUR-P, TUR-B, nephrectomy and orchiectomy, and on the resulting statistics, the feasibility is discussed. Moreover, this work gives proposals for the next required steps toward the introduction of these novel concepts in urology. | Master Thesis | Xin Xu | |
Enabling Personalized Augmented Reality Display Technology in Cars This thesis focuses on developing technologies that enable the incorporation of AR techniques into cars for driver assistance systems. For this purpose the rendered virtual content is presented on a conformal head-up display (HUD) that uses the car's windshield. The construction of the HUD introduces two major sources of distortion in the image that have to be overcome: lens distortion caused by the Fresnel lens, which is used to stretch the distance between the driver's eyes and the focal plane of the HUD, and distortion due to the curved shape of the windshield. Every windshield is unique in its shape, so first a calibration procedure that can extract the distortion parameters of any given windshield should be developed; distortion correction can then be performed with the help of a piecewise projective warping function. A calibration procedure for Fresnel lenses should also be implemented, with whose help the image can be undistorted using a radial distortion warping function. Since the image distortion introduced by a lens depends on the distance and position of the viewer with respect to the center of the lens, the driver's head position also has to be taken into account when developing the correction function. For tracking the driver's head we intend to use a 6DOF tracker, for example Ubitrack. The tracked driver's pose is also needed for continuously adapting the projection and viewing matrices of the rendering process, so that virtual objects are always aligned with the physical world. | DA/MA/BA | Michail Yordanov | |
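As a sketch of the radial distortion warping function mentioned above, the following applies the standard polynomial radial model with coefficients k1 and k2 around a distortion center; whether it is used to pre-distort the rendered HUD content or to correct captured images depends on the calibration convention, and the coefficient values are illustrative assumptions.

```python
import numpy as np

def radial_warp(points, center, k1, k2):
    """Warp 2D points with a polynomial radial model.

    points: (N, 2) pixel coordinates; center: (2,) distortion center.
    """
    p = points - center
    r2 = np.sum(p ** 2, axis=1, keepdims=True)    # squared radius per point
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2         # radial scaling factor
    return center + p * factor

# Usage: warp the corners of a 100x100 HUD quad around its center (toy coefficients).
corners = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
warped = radial_warp(corners, center=np.array([50.0, 50.0]), k1=1e-5, k2=1e-10)
```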
Calibration of Virtual Cameras for Augmented Reality In this thesis different calibration techniques for Augmented Reality applications are evaluated. Hence, techniques for the spatial registration of visual output devices, the tracking system, and tracked objects are considered. This work focuses on the calibration of output devices. An Augmented Reality system should be designed as a dynamically configurable system, i.e. a system which is able to connect new objects such as output devices on demand. Thus new user-friendly methods need to be developed to achieve satisfactory calibration results instantly in an intuitive way. In this thesis, methods allowing any user without knowledge about calibration procedures to perform the calibration task without imposing a great burden on him or her have been evaluated and integrated. I developed a calibration component integrated into the DWARF framework and demonstrated it within ProjectArchie. The goal of ProjectArchie was to evaluate the possibilities of Augmented Reality for architectural planning. The resulting calibration method provides a recalibration possibility for the output devices of multiple users for any application based on DWARF. | Diploma Thesis | Bernhard Zaun | |
Automatic assignment of paramedics on a multitouch table Within the project SpeedUp (http://www.speedup-projekt.de/) we develop a map application for mass casualty incidents. This map application shows patients and paramedics. The application should automatically assign the best possible paramedic to the patient with the highest priority. TISCH, a multitouch table that includes an FTIR multitouch sensor and shadow tracking developed by Florian Echtler, should be used as hardware. Microsoft Blend / Visual Studio / WPF should be used as the development framework, with the implementation in C#. | DA/MA/BA | ||
Reinforcement Learning for Biomechanical Modeling In this project we are going to train an agent to control the motion of a human musculoskeletal model using deep reinforcement learning. | IDP | ||
BA/MA/IDP: Feature Visualization for Skin Lesions Deep neural networks have proven to give outstanding performance in classification tasks. However, understanding the learned features and the learning process is still challenging. In this project, we aim to investigate and visualize the relevant features that networks learn in the context of skin lesion classification. The project consists of developing software tools to assist dermatologists in the diagnosis of a skin lesion by providing both the classification result and a visualization of the features relevant for the diagnosis. | DA/MA/BA | ||
Evaluation of real-time dense reconstruction for robotic navigation In recent years, different methods for real-time dense reconstruction have been proposed. In this thesis we want to investigate the requirements on 3D representations for the task of navigation. Therefore we want to compare state-of-the-art methods for 3D reconstruction with different 3D representations and different post-processing steps such as interpolation and completion. | Master Thesis | ||
Self-supervised Monocular Depth Estimation with Geometric Regularities | Master Thesis | ||
Comparison of Direct and Indirect Interaction on a Tablet PC In this thesis, several interaction concepts are to be developed that can be operated once directly (by touching the corresponding element) and once indirectly from the edge of a tablet PC. Subsequently, an evaluation is to be designed and carried out in which the participants have to complete tasks on the tablet PC while also performing tasks in their real environment. | DA/MA/BA | Gel Han | |
Deep Learning ... | DA/MA/BA | ||
Exergotchi Social Park In this thesis the student's task is to write an Android application which is able to connect to a server. The time and the position have to be transferred to this server. This way, the server knows the positions of the Android devices. The purpose is to organize a sport event where people run together in a park. The time and the place of this event can be spread, for example, via Facebook. The server computes the direction to the closest other Android device and transfers it to the corresponding device. By following that direction the user finds other joggers who take part in this event. Once they are together, the server considers them a group. If everything works well, we will have one big group running through the park. At the end of the event, the users should be able to fill out a questionnaire which should be implemented inside the Android app. | DA/MA/BA | Md.Raihanul Islam | |
Exergotchi 3.0: Using the tamagotchi effect for sports In this thesis the tamagotchi effect will be used to increase people's motivation to do sports. The exergotchi's (EXERcise+tamaGOTCHI) physical and emotional state depends on the user's activity. The student in charge of this project gets a unique opportunity to cooperate closely with people from the company eGym. eGym generously provides access to their workout machines and also helps supervise the student in getting familiar with the machines. | DA/MA/BA | ||
Investigating 3D Capsule Networks | DA/MA/BA | Dilara Gökay | |
Fourier Neural Networks for 3D Scene Understanding | DA/MA/BA | Enis Simsar | |
Evaluation of the first Gestickboard Prototype This thesis is about the evaluation of the first Gestickboard prototype. The Gestickboard is based on the Gestyboard concept which is a novel text input approach designed for multi-touch devices. | DA/MA/BA | Natalia Zarawska | |
3D Sketching in a Virtual Environment Sketching is a frequently used tool in the workflow of architects and engineers, used to draft and exchange ideas with others. However, sketching is still performed mainly on a sheet of paper. Thus, a 3D drawing remains fixed in the 2D plane of the sheet and can be observed only from the viewpoint it was originally sketched from. This research project introduces a new approach to sketching, where the sketch is transformed into a 3D model in real time. The user sketches on a digital multi-touch input device, while the reconstructed 3D shape is simultaneously visualized in the background. Furthermore, the user can change the point of view during the very process of sketching. It is also possible to use other structures from the 3D environment to align the sketch better. The implemented user interface closely resembles the classical pen-and-paper metaphor, where the user interacts with the surface using a simple 2DOF pen and no additional controls. | IDP | Violin Yanev | |
Interactive heart learning using an augmented reality magic mirror A basic augmented reality magic mirror framework, mirracle, has been developed. We also have a simple game engine for organ rendering. Until now, our previous work has always addressed the whole organ structure at a basic level, but medical students should learn more details. The heart is one of the first and most important topics in anatomy learning. We want to develop a "serious game" about heart learning for new medical students, including an interactive learning environment and all the basic medical knowledge about the heart. | DA/MA/BA | ||
A New Reconstruction Algorithm for High-resolution Micro PET Positron emission tomography (PET) is a widely applied clinical and research modality enabling the visualization, characterization and quantification of biologic processes taking place at the cellular and subcellular levels within intact living subjects. Various tracers have been developed in cardiology, oncology and neurology, and the imaging has demonstrated significant value in precision medicine. Preclinical PET devices are powerful tools for investigating biology and pathology in murine models of disease and other small-animal models. However, PET reconstruction needs to compromise between resolution and noise. The proposed master thesis will develop a novel regularization approach that integrates additional prior information to improve the PET reconstruction. | Master Thesis | ||
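For context, a minimal MLEM (maximum-likelihood expectation maximization) sketch of the unregularized baseline that a regularized reconstruction would extend; the small random system matrix stands in for the real scanner geometry and is purely illustrative.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """A: (n_detectors, n_voxels) system matrix; y: (n_detectors,) measured counts."""
    x = np.ones(A.shape[1])                  # uniform initial activity estimate
    sensitivity = A.sum(axis=0) + 1e-12      # A^T 1, the per-voxel sensitivity
    for _ in range(n_iter):
        forward = A @ x + 1e-12              # expected counts under the current estimate
        ratio = y / forward
        x = x / sensitivity * (A.T @ ratio)  # multiplicative MLEM update
    return x

# Usage with a toy random system and a known activity distribution.
rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (200, 50))
x_true = rng.uniform(0, 5, 50)
y = rng.poisson(A @ x_true)
x_rec = mlem(A, y)
```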
3D Scene Graph in OR Dataset and Benchmarking | Hiwi | ||
Implement and test various features under the C++ Light Field Recon framework Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. | Hiwi | ||
Hiwi wanted for system administration duties The Chair for Computer Aided Medical Procedures is currently looking for at least one Hiwi (studentische Hilfskraft, 8-10 hours/week) for system administration duties. The tasks are varied, ranging from server maintenance and user management to inventory management, along with, optionally, web development using Typo3 and wikis. Prerequisites are knowledge about networks and Linux-based servers and the ability to quickly learn new things. For more information please contact Tobias Lasser. | Hiwi | ||
Hiwi wanted for Basic Math Tools course (winter 2018/19) The course Basic Math Tools in winter 2018/19 is looking for a Hiwi (studentische Hilfskraft, 8h/week) to create a series of interactive Python notebooks implementing relevant examples from medical imaging, to augment both the lectures and the exercises. For more details please contact Tobias Lasser. | Hiwi | ||
Robust bone segmentation in CT We are looking for a motivated student to develop a bone segmentation approach in CT images. The method will have to be robust enough to cope with different pathologies (arthrosis, calcifications, ...). The student should have good programming skills in python and/or C++. Medical imaging experience is not required. | Hiwi | ||
Hiwi Position for Image Processing in Navigated Bronchoscopy We are looking for people extending our possibilities in navigated bronchoscopy. In particular, we'd like to have implementations of several medical image processing algorithms (segmentation, skeletonization, quantitative analysis) as building blocks for researching further approaches. This project will involve developing in C++ using our medical augmented reality framework CAMPAR, probably using Insight Segmentation and Registration Toolkit (ITK) components and possibly prototyping algorithms in Matlab. You should have good programming skills in C++ and Matlab and preferably working knowledge of ITK. Based upon this work there will probably be opportunities for further projects in this area at CAMP, e.g. IDP or Bachelor/Master/Diploma thesis. If you are interested or if you have questions please contact Tobias Reichl. | Hiwi | Wangxin Liu | |
HiWi position for computer aided techniques in retinal microsurgery Retinal microsurgery is a delicate surgery which requires high handling precision under limited visual feedback. Computer vision techniques hold great potential for supporting the surgeon and improving the outcome of the surgery. Usually, the acquired image data is very challenging due to high magnification, occlusions and blur, thus creating an interesting field for methods such as image registration, tracking and detection. The goal of this project is to support our team in developing and integrating novel approaches into a bigger framework. Based upon this work there will probably be opportunities for further projects in this area at CAMP, e.g. IDP or Bachelor/Master thesis. If you are interested or if you have questions, please contact Nicola Rieke. | Hiwi | ||
HiWi position for endosonography imaging Endosonography is a widely used imaging modality for diagnosis and intervention. Navigation systems might support both novices and experts, but a fundamental problem is tracking the endoscope tip within the patient. This project is located at our interdisciplinary laboratory IFL at university hospital Klinikum rechts der Isar. A camera and ultrasound calibration method is to be developed, which integrates into the clinical workflow. Existing methods need to be evaluated in lab and OR settings, new methods may need to be implemented and evaluated. For the tracking of the endoscope electromagnetic tracking is going to be used. Implementations exist for both camera calibration and ultrasound calibration, and phantoms for both are available. Based upon this work there will probably be opportunities for further projects in this area at CAMP, e.g. IDP or Bachelor/Master/Diploma thesis. If you are interested or if you have questions, please contact Tobias Reichl. | Hiwi | ||
Building the Hardware of the First Gestickboard Prototype In this project, the hardware for the first prototype of the Gestickboard 1.0 is to be built and its driver implemented. The Gestickboard is a further alternative input technique based on the multitouch text input concept of the Gestyboard. | Hiwi | Natalia Zarawska | |
Software Developer for Multimodal Interventional Imaging Framework Based on prior development within our group your job would be to take over development tasks for our Multimodal Interventional Imaging Framework, implemented in CAMPVis. You will work in close collaboration with our group and clinical partners at Klinikum Rechts der Isar, attend interventions, identify the required improvements of our framework and implement the necessary features yourself. We require good knowledge of C++ and the ability to work independently, some experience with GUI development or image processing is an advantage. | Hiwi | ||
Hiwi Job: Augmented Reality Magic Mirror We have created a first prototype of an augmented reality magic mirror using the Microsoft Kinect. A video of this project was very popular and got more than 150,000 views on YouTube. Now we want to build a version of this mirror that can be shown e.g. at conferences. For this we are looking for a motivated student who wants to work on this project. | Hiwi | Prof. Nassir Navab | |
HiWi position for medical image registration Accurate alignment of intra-operative 2D and pre-operative 3D images may have a crucial impact on procedural success rates. Most existing methods, however, do not provide sufficient accuracy and robustness over a patient population. Despite the rising influence of machine learning approaches in various medical imaging applications, they have not yet found their way into 2D-3D image alignment. The main goal of this project is to organize medical image data and its consistent storage in the form of an image database. Besides efficient storage, this database is also supposed to perform automatic image preprocessing and image data mining based on available algorithms. Based upon this work there will probably be opportunities for further projects in this area at CAMP, e.g. IDP or Bachelor/Master thesis. If you are interested or if you have questions, please contact Stefanie Demirci. | Hiwi | ||
Mobile AR Navigation for Victims in Disasters In this Hiwi position, various AR-based navigation concepts are to be designed, developed, and compared with one another. The work takes place in the context of the CRUMBS project. | Hiwi | Sebastian Klingenbeck | |
Modelling and Comparison of Different 3D Metaphors for Mobile AR Navigation In this Hiwi position, various 3D metaphors for mobile AR navigation are to be modelled and animated with the open-source 3D modelling and animation tool Blender. The different alternatives will then be compared by an expert group and rated with respect to various aspects. | Hiwi | Teodora Velikova | |
Improving AR Farm | Hiwi | ||
UbiTrack installer | Hiwi | ||
3D Object Detection and Segmentation from Point Clouds Autonomous driving systems are right around the corner, and one key concern around the development and social acceptance of such systems is safe-guarding. In this project, we want to look at the task of pedestrian detection from LiDAR point clouds and their pose estimation from the RGB camera input. 3D object detection from sparse point cloud data and multiple-pedestrian 3D pose estimation are two challenging tasks and therefore active research fields in both academia and industry. In this project, we want to integrate state-of-the-art deep learning methods, train models on synthetic renderings and improve their performance based on the safe-guarding KPIs designed. | Hiwi | ||
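As an illustration of the kind of preprocessing many LiDAR detection networks rely on, here is a small sketch that scatters raw points into a fixed voxel grid (simply counting points per voxel); grid extents and resolution are illustrative assumptions, not project specifications.

```python
import numpy as np

def voxelize(points, grid_min, grid_max, voxel_size):
    """points: (N, 3) LiDAR points in meters; returns a 3D per-voxel point-count grid."""
    dims = np.ceil((np.array(grid_max) - np.array(grid_min)) / voxel_size).astype(int)
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    valid = np.all((idx >= 0) & (idx < dims), axis=1)   # keep only points inside the grid
    grid = np.zeros(dims, dtype=np.float32)
    np.add.at(grid, tuple(idx[valid].T), 1.0)           # count points per voxel
    return grid

# Usage: an 80 m x 80 m x 4 m region around the vehicle at 0.2 m resolution.
rng = np.random.default_rng(0)
pts = rng.uniform([-40, -40, -2], [40, 40, 2], (10000, 3))
grid = voxelize(pts, grid_min=(-40, -40, -2), grid_max=(40, 40, 2), voxel_size=0.2)
```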
HiWi position in DFG supported Surgical Data Science project Surgical Data Science is a relatively new field that aims at analyzing and automatically understanding the situation in the operating room at any time during a surgery. This can enable further solutions, such as smarter robotic assistance systems, automatic action triggering, and predictions on future actions. These predictions of course need datasets of recordings and medical annotations by experts, so that automatic systems can be trained and improved. Since SDS is still a relatively novel research area, there is a lot of potential for ground-breaking, pioneering activities. | Hiwi | ||
Ultrasound application development for Image-Guided Neurology Transcranial ultrasound is a relatively new method for the early diagnosis of Parkinson's disease within the domain of neurology. For this and other applications of ultrasound we need software for real-time data acquisition and processing. Apart from software development, patient studies are planned for which we require assistance. | Hiwi | ||
Integrating Ultrasound into an International Framework for Image-Guided Neurosurgery ROBOCAST is a multi-national project comprising several institutes which aims at outlining and implementing a prototype system for advanced, robot-assisted keyhole neurosurgery. In order to validate the pre-operative plan intra-operatively and offer additional navigation and orientation for the surgeon, we incorporate 3D Freehand Ultrasound into the ROBOCAST system. | Hiwi | ||
Virtual Reality Visualization of Complex Multi-Dimensional Relationships of Biological Phenomena The objective of this project is to visualize complex multi-dimensional relationships of biological phenomena using a Virtual Reality (VR) application and a commodity head-mounted display (HMD). The Institute for Computational Biology (ICB) at the Helmholtz Institute develops machine learning algorithms to cluster single-cell images from flow cytometry (link). The deeper one goes into the Convolutional Neural Network (CNN) layers, the better single cells of different types are grouped into distinct clusters. We plan to show this classification process in 3D and to enable interactive exploration of clusters, displaying information about single cells such as their image, cell cycle phase, label, and topology. Additionally, we plan to visualize other datasets, for example the results of a recent article in Nature (link). The VR system will be implemented using a game engine (Unity or Unreal Engine). The main focus of the VR application will be on the visualization and the interactive exploration of the dataset. Users will be immersed in a virtual scene that allows them to navigate the dataset and to control the visualization parameters interactively using tracked hand controllers. A focus will be on high-quality 3D rendering with minimal latency and user-friendly interaction. | Hiwi | ||
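One possible data preparation step implied by the project description is sketched below: high-dimensional CNN features of single cells are projected to three dimensions so they can be placed as points in the VR scene. The use of PCA (rather than, e.g., t-SNE or UMAP) and the feature dimensionality are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def features_to_vr_points(features, scale=1.0):
    """features: (n_cells, n_features) CNN activations; returns (n_cells, 3) scene positions."""
    coords = PCA(n_components=3).fit_transform(features)
    coords = (coords - coords.mean(axis=0)) / (coords.std(axis=0) + 1e-8)
    return coords * scale                      # normalized 3D positions for the VR scene

# Usage: 1000 cells with 256-dimensional features, scaled to a 2 m wide point cloud.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 256))
positions = features_to_vr_points(feats, scale=2.0)
```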
Hiwi for Surgical Signal Recording and Analysis We are looking for a Hiwi to assist us in the recording and synchronisation of surgical signals acquired in operating rooms. The project is done in collaboration with Klinikum Rechts der Isar and Klinikum Innenstadt. Please find detailed information in the PDF document and do not hesitate to contact Nicolas Padoy or Tobias Blum for any further details. | Hiwi | ||
Visual Marker Based User Interface for the Operating Room In the last decades, many advanced computer-based navigation solutions, e.g. using external camera tracking for navigation and augmented reality visualization, and intra-operative visualization systems, e.g. the Camera Augmented Mobile C-arm (CamC), have been introduced and deployed in the operating room (OR) for surgical practice. However, very few computer-based systems have succeeded in becoming clinically accepted, and only a small number of them were integrated into daily clinical routine. Interactions between surgeons and these advanced, complex computer aided intervention (CAI) systems play an important role in successfully deploying systems in the OR. Traditional computer-user interaction hardware, e.g. mouse and keyboard, is very difficult and impractical to use, since it can hardly be sterilized. A cheaper and practical solution based on visual marker detection is proposed for the user interface of CAI systems. In this project, a robust and user-friendly interface based on visual markers is designed and developed. The developed user interface is integrated into a CamC system and its functionality is evaluated with the CamC system. | IDP | Oleg Simin | |
Development of an Actibelt Ecosystem The objective of the project is to alter the current actibelt system and extend it. Therefore it is necessary to redevelop parts of the infrastructure and map them onto a cloud-computing-based system. The focus is set on a high level of security to ensure the pseudonymised patients' privacy. Thus it is crucial to strictly separate analysis and demographic data to prevent any inference from the cloud data back to the patient. | IDP/Klinisches Anwendungsprojekt | Monika Nill, Benedikt Engeser, Sebastian Pretscher | |
Construction of an Ultra-Bright Display Based on the work of Helge Seetzen (see the paper 'High Dynamic Range Display Systems'), this project is intended to construct and realize a display based on an LCD panel in combination with an LED array to display ultra-bright visualizations. | IDP | Florian Birnthaler | |
Augmented Reality Bone Puzzle Augmented Reality systems are useful for edutainment: 3D visualization tells us more than 2D visualization does, and people stay motivated with edutainment systems more easily than with pure education systems. The aim of this project is to develop a medical augmented reality system for learning human anatomy, especially the structure of bones. The project is managed following an agile development style. There exist several tools for learning human anatomy. One of the state-of-the-art learning materials is the 3D visualization textbook, whose key feature is visualizing data in 3D so that users can get more information than from a 2D one. Edutainment systems encourage their users to learn actively by providing video-game-like systems. The motivation of this project is to develop a more intuitive system for learning human anatomy by introducing the concept of augmented reality and using a Kinect. Augmented reality concepts provide 3D visualization and the possibility of developing edutainment systems, while the Kinect lets us interact with computers by gestures. Therefore, we can develop a more intuitive visualization and interaction edutainment system. | IDP | Alexander Schoch (TUM), Naoki Shimizu (Keio University, Japan), and Motoko Kanegae (Keio University, Japan) | |
CamC Artificial Fluoroscopy In trauma and orthopedic surgery, imaging through X-ray fluoroscopy with C-arms is ubiquitous. This leads to an increase in ionizing radiation applied to patient and clinical staff. Placing these devices in the desired position to visualize a region of interest is a challenging task, requiring both skill of the operator and numerous X-rays for guidance. We propose an extension to C-arms for which position data is available that provides the surgeon with so-called artificial fluoroscopy. This is achieved by computing digitally reconstructed radiographs (DRRs) from pre- or intraoperative CT data. The approach is based on C-arm motion estimation, for which we employ a Camera Augmented Mobile C-arm (CAMC) system, and a rigid registration of the patient to the CT data. Using this information we are able to generate DRRs and simulate fluoroscopic images. For positioning tasks, this system appears almost exactly like conventional fluoroscopy; however, it simulates the images from the CT data in real time as the C-arm is moved, without the application of ionizing radiation. Furthermore, preoperative planning can be done on the CT data and then visualized during positioning, e.g. defining drilling axes for pedicle approach techniques. Since our method does not require external tracking it is suitable for deployment in clinical environments and day-to-day routine. An experiment with six drillings into a lumbar spine phantom showed reproducible accuracy in positioning the C-arm, ranging from 1.1 mm to 4.1 mm deviation of marker points on the phantom compared in real and virtual images. | IDP | Philipp Dressel | |
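A deliberately simplified, parallel-projection sketch of the DRR principle mentioned above: the CT volume is rotated to the desired viewing angle and attenuation values are summed along the viewing axis. The actual system uses the full perspective C-arm geometry recovered by the CAMC setup; this snippet only illustrates the idea.

```python
import numpy as np
from scipy.ndimage import rotate

def simple_drr(ct_volume, angle_deg):
    """ct_volume: (Z, Y, X) attenuation values; returns a 2D projection image."""
    rotated = rotate(ct_volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    projection = rotated.sum(axis=1)          # line integrals along the Y axis
    return np.exp(-projection * 0.01)         # toy conversion to X-ray-like intensities

# Usage with a toy volume containing one dense sphere.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = ((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 15 ** 2).astype(float)
drr = simple_drr(volume, angle_deg=30.0)
```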
Automatic definition of cardiac axis in emission tomography images Emission tomography modalities (PET and SPECT) are functional imaging modalities showing the three-dimensional distribution of radiolabelled molecules in the body. A major application is cardiac imaging, mainly to evaluate myocardial perfusion. Cardiac images are reoriented before their clinical reading in order to obtain short- and long-axis views of the left ventricle. The reorientation is currently performed manually by defining the cardiac axis on two different views. The goal of the project is to develop and evaluate automatic methods to define the cardiac axis based on the image content and prior anatomical knowledge. | IDP | Steffen Strobel | |
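One conceivable automatic baseline, assuming a rough segmentation of the high-uptake myocardium is available, is to estimate the long axis as the dominant principal component of the intensity-weighted voxel coordinates; the sketch below illustrates this and is not meant as the method to be developed in the project.

```python
import numpy as np

def estimate_long_axis(volume, threshold):
    """volume: 3D activity image; threshold: uptake level roughly delineating the myocardium."""
    mask = volume > threshold
    coords = np.argwhere(mask).astype(float)            # (N, 3) voxel indices above threshold
    weights = volume[mask]                               # same ordering as np.argwhere
    center = np.average(coords, axis=0, weights=weights)
    centered = (coords - center) * weights[:, None] ** 0.5
    # The right singular vector with the largest singular value is the dominant axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    return center, axis / np.linalg.norm(axis)

# Usage with a toy activity blob elongated along the first (z) axis.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
vol = np.exp(-(((z - 32) / 15) ** 2 + ((y - 32) / 6) ** 2 + ((x - 32) / 6) ** 2))
center, long_axis = estimate_long_axis(vol, threshold=0.3)
```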
Cell Nucleus Classification & Segmentation The group “molecular cytogenetics” at the Institute of Human Genetics employs fluorescence in situ hybridization (FISH) for the analysis of human chromosomes. FISH offers the opportunity to specifically stain any region within a genome and to visualize the respective region on both metaphase spreads and in interphase nuclei. A specialty of the group is multicolor-FISH, i.e. the simultaneous hybridization of multiple probes, each labeled with a different color. The application of appropriate microscopy equipment and imaging software allows sophisticated 3D microscopy. Several images at different z-levels are collected and processed with deconvolution and 3D-reconstruction software. However, multicolor-FISH in intact interphase nuclei poses a particular challenge, as several image analysis problems have not yet been solved. These problems include the segmentation of hybridized DNA-probes, which may be labeled with a single color or with a combination of multiple different fluorochromes, and the correct classification of these probes. Thus, the aims of this project are the development of algorithms for the segmentation and color classification of fluorescent signals in a 3D space (the cell nucleus). The perspectives include a tool that will be of importance for both basic research and diagnostic applications. | IDP | Uli Klank | |
Coronary Arteries | IDP/Klinisches Anwendungsprojekt | Sai Gokul Hariharan | |
IDP: Visualization of Medical Cryosections for Augmented Reality | IDP | Stefan Hessel | |
MR Based Attenuation Maps for PET Measurements The goal of the project was to propose and evaluate alternative methods for generating an attenuation map for use in PET. Instead of using the standard approach of utilizing CT scans or transmission measurements we try to determine the attenuation map from MR images. | IDP | Darko Zikic | |
Detectability Indices in Directional X-ray Dark-field Tomography Medical imaging modalities such as X-ray Computed Tomography (X-ray CT) or Positron Emission Tomography (PET) have been the basis of accurate diagnosis in clinical practice for decades, one particular example being the detection of tumors. But medical imaging also plays a central role during therapy, for example when planning complex surgeries or when planning and monitoring radiation therapy treatments. Advanced X-ray imaging contrast modalities, such as phase-contrast or dark-field contrast, have recently demonstrated very promising fields of application both in diagnosis and therapy. The dark-field contrast in particular promises advanced imaging capabilities that are unprecedented and not available in other medical imaging modalities. Thanks to its directional dependence, it enables resolving microstructure orientations below the detector resolution, allowing insights into various anatomical and physiological processes, such as the connectivity of the brain. However, for practical clinical application several technical issues still have to be addressed in order to reach feasibility in terms of experimental setup, acquisition and processing times as well as dose considerations. While the directional dependence and anisotropy of the X-ray dark-field signal enables new applications, it also requires more acquisitions sampled all around the object, and thus longer acquisition times, higher dose and longer processing times compared to traditional tomographic imaging modalities. | IDP | Theodor Cheslerean Boghiu | |
Independent Component Analysis for Device Detection in X-ray Images Most catheterization procedures are performed under constant 2D X-ray imaging. Image-guided intervention (IGI) solutions aim at enhancing this 2D information by integrating the preoperatively acquired 3D patient scan into the intervention room. The automatic image-based tracking of interventional devices requires their prior detection within the intraoperative images. Instead of employing existing filtering approaches, the idea in this project is to analyze the applicability of Independent Component Analysis (ICA). | IDP/Klinisches Anwendungsprojekt | Daniele Volpi | |
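As a rough illustration of the idea, one could apply ICA to a short stack of fluoroscopy frames so that a moving device separates from the quasi-static background. The frame stack, shapes, and the interpretation of the resulting components are assumptions for this sketch, not the project's method.

```python
# Hedged sketch: FastICA over a stack of fluoroscopy frames, treating each
# frame as one observed mixture and pixels as samples.
import numpy as np
from sklearn.decomposition import FastICA

def separate_components(frames, n_components=3):
    """frames: array of shape (n_frames, height, width), float intensities."""
    n_frames, h, w = frames.shape
    X = frames.reshape(n_frames, h * w)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
    sources = ica.fit_transform(X.T)                 # (h*w, n_components)
    component_images = sources.T.reshape(n_components, h, w)
    return component_images  # inspect for the device-like (e.g. wire-shaped) component
```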
Dynamic Medical User Interfaces Graphical user interfaces of medical devices have high requirements regarding their clarity and usability, the latter playing an ever increasing role as such devices offer more and more functionality. In parallel, with the widespread availability of mobile touch-based devices, the need for cross-platform interoperability has advanced software platforms for graphical user interfaces. This project aims at taking advantage of these advancements in order to improve the usability of multifunctional medical devices. | IDP/Klinisches Anwendungsprojekt | David Nguyen, Papastergiou Theofanis | |
Evaluation of freehand SPECT image quality depending on different reconstruction parameters | IDP | Alexander Zhdanov | |
Modeling and implementing acoustic feedback of a Bluetooth gamma probe This project is performed in the innovative area of intra-operative 3D nuclear imaging, which offers a novel approach for robust and precise localization of functional information to facilitate less invasive, image-guided surgery. The basis of this project is the use of nuclear probes in the operating room for localization of target structures and control of the surgery outcome. These radiation detectors provide a 1D signal that allows the surgeons to obtain information about the distribution of a radioactively labeled structure. The project shall be integrated within the declipseSPECT, a system used for generating 3D images of radioactivity distributions inside the human body. In order to accomplish this task, the system acquires readings of a conventional gamma probe together with the position and orientation of the probe at the time the readings were acquired. The outcome of the given project is the software implementation of the acoustic feedback of a gamma probe. Stand-alone gamma probes are manufactured by various companies (RMD Instruments, NeoProbe, Care Wise Medical), which can be used as starting points for modeling such a tool. A software solution is necessary for seamless integration within the declipseSPECT system. The first step is defining the acoustic model for the translation of the numeric values read from the probe into sounds. The project is strongly focused on research of possible implementation models and existing libraries to achieve the goal. Starting with a simple implementation model, the process should be incremental until a satisfactory sound navigation is achieved. The process includes experimenting with different models, different sound libraries and software sound synthesizers. | IDP | Andrei Mituca | |
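One simple starting point for such an acoustic model is to map the probe's count rate logarithmically to the pitch of a short beep. The ranges, the mapping, and the audio backend below are assumptions; the real declipseSPECT integration is not shown.

```python
# Hedged sketch of a count-rate-to-pitch acoustic model. The returned PCM
# buffer can be handed to any audio API (e.g. sounddevice.play).
import numpy as np

def counts_to_tone(count_rate, fs=44100, duration=0.15,
                   rate_range=(10.0, 5000.0), freq_range=(220.0, 1760.0)):
    """Return mono PCM samples whose pitch rises with the measured count rate."""
    lo, hi = rate_range
    clipped = np.clip(count_rate, lo, hi)
    # Logarithmic mapping feels more natural over several orders of magnitude.
    alpha = (np.log(clipped) - np.log(lo)) / (np.log(hi) - np.log(lo))
    freq = freq_range[0] * (freq_range[1] / freq_range[0]) ** alpha
    t = np.arange(int(fs * duration)) / fs
    envelope = np.hanning(t.size)              # avoid clicks at beep edges
    return (0.5 * envelope * np.sin(2 * np.pi * freq * t)).astype(np.float32)
```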
Lightweight Framework for Medical Workflow Video Analysis Understanding the clinical workflow is the first step towards new technological developments. As such, it is important to have a framework that allows analyzing procedures, both surgical and non-surgical. The aim of this project is the development of lightweight and extendable software for the analysis of synchronized videos in order to gain a better understanding of medical workflow. Its performance will be tested in various scenarios. | IDP/Klinisches Anwendungsprojekt | Johannes Klein | |
Intuitive, high-level GUI for Segmentation of Aortic Aneurysm Thrombus In many medical treatment and diagnosis processes, the extraction of certain organs is an important preprocessing step. As automatic algorithms are only applicable for the segmentation of very few organs and structures, most extraction is done manually by the physicians or their assistants. Especially the segmentation of the thrombus of abdominal aortic aneurysms is highly cumbersome when done fully manually. With an intuitive GUI and semi-automatic algorithms, we would like to facilitate the doctor’s work. | SEP | Michael Emmersberger | |
Workflow and statistical analysis and workflow optimized design of graphical user interfaces for thyroid examinations | IDP/Klinisches Anwendungsprojekt | David Weisgerber | |
Gamma Probe Modelling and Calibration for SPECT Reconstruction When dealing with cancer, resection of the malignant tissue is a common choice of therapy. During the last steps of diagnosis the tumor is marked with a proper tracer and localised in a PET/CT scan, on which the surgical procedure will be planned. As the anatomy changes until and during the intervention, the use of the preoperative PET/CT scan is suboptimal for localizing the tumor during surgery. Our project aims toward a system that allows examining the tissue ’on the fly’, using the information of the PET/CT scan combined with data from a gamma probe, to update changes in malignant anatomy. With this system, finding the tumor will be easier, faster and more accurate, so the surgery will be less invasive. Moreover, it will not only take less time, but the surgeon will also know how the malignant anatomy has deformed since the acquisition of the preoperative data, allowing him to find and resect tumors that have moved, that were not visible in the preoperative images, or that are so small that they would normally not be resected because the surgeon cannot find them with the preoperative images alone. Further, the surgeon can identify tumors that are no longer viable and thus do not need to be resected, making the procedure less invasive. As the gathered information of the PET/CT is 4D (activity + position) and that of the gamma probe and tracking system is 7D (activity + position + orientation), models for gamma activity acquisition are needed to define a transformation that allows combining the measurements of both devices. The output of this work should be a family of models, with different levels of complexity and accuracy, that transform the measurements of the gamma probe into a frame where their comparison with the PET/CT data set is possible. These models will thus allow an intraoperative reconstruction of the radioactivity distribution in space and, as a consequence, monitoring changes in the anatomy of malignant tissue during surgery. | IDP | Alexander Hartl | |
Design and Implementation of a Graphical User Interface for Acquisition and Evaluation of Freehand SPECT for Lymphatic Mapping in Breast and Skin Cancer | IDP/Klinisches Anwendungsprojekt | David Weisgerber | |
Deformable Reconstruction of Histology Sections using Tensor Voting | IDP | Markus Müller | |
Analysis of freehand SPECT reconstructions for phantom scans in different configurations using I-125 seeds and a low energy gamma probe Freehand SPECT is a novel imaging modality that enables 3D nuclear imaging in the operating room. In freehand SPECT, hand-held 1D gamma detectors are tracked with spatial positioning systems in order to reconstruct localized 3D SPECT-like images, for example in the breast, pelvis or neck region. Until now, freehand SPECT has mainly been tested and used for sentinel lymph node biopsy (SLNB) procedures using technetium-99m as radioactive tracer and a conventional gamma probe. However, after obtaining satisfactory results in this first application, further clinical cases come to mind that could also be very relevant for the freehand SPECT modality. Using different radioisotopes or different types of probes requires different reconstruction parameters and therefore results in different image qualities. One possible clinical application could be using I-125 seeds for tumor localization and a low energy gamma probe. To see if this is feasible, phantom tests need to be conducted and properly analyzed. The goal of this project is to validate the freehand SPECT reconstruction algorithm for this possible clinical scenario, using I-125 seeds and low energy gamma probes. Before conducting actual clinical studies, the usage needs to be analyzed and validated by phantom scans. | IDP/Klinisches Anwendungsprojekt | Kanishka Sharma | |
Instrument Detection from External Cameras in the OR Knowledge about the instruments in use at every point in time during a surgery allows a very detailed analysis and recognition of the surgical workflow, and therefore predictions and numerous other applications. Several approaches exist to detect instrument usage in the OR, often involving additional sensors attached to the instruments. This imposes hard constraints due to the surgical sterility requirements and is also prone to noise due to its makeshift nature. One approach currently under research by several groups is the detection of instruments directly from the laparoscopic view, but given the extreme optical challenges this method poses, more work is still required before reliable results can be expected. The goal of this work is to detect the instruments on the Mayo stand through an external, ceiling-mounted camera. No additional sensors or markers are to be attached to the instruments or the surgical staff. The developed method should be able to detect multiple instruments in the same image, possibly several individual instruments of the same type. The approach should be robust against partial occlusions (e.g. by the hands of the scrub nurse) or overlapping instruments. Real-time capability up to 1 Hz is beneficial, but not a requirement of this project. | IDP | Richard Voigt | |
Interface and Workflow Integration of a C++ Statistical Reconstruction Toolbox for X-ray Computed Tomography in Radiology Reconstruction of X-ray computed tomography (CT) data enables insight into the human body without a surgical procedure. The basic concept comes down to sending X-rays through the human body and measuring the attenuated X-rays on the other side of the patient. Such methods are called projective imaging methods. For a long time the sole quantity that was measured was X-ray absorption, i.e. how much energy the electromagnetic wave lost while traversing the body. Recent advances have enabled the measurement of complementary phase-contrast and dark-field signals. Several algorithms exist to reconstruct a 3D volume of the human body, providing a map of the physical properties that led to the corresponding projective measurements. Additionally, prior assumptions or prior knowledge can be incorporated into this reconstruction process, for example as regularization. Parameter tuning, comparison and evaluation of these different approaches is key for clinical and scientific purposes. Our research group develops a C++ software framework called CAMPrecon which enables a flexible and abstract structure to model and compute numerical tomographic reconstructions. | IDP | Christian Grimm; Dominik Vinan | |
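To give a flavor of what such iterative reconstruction algorithms do, here is a minimal sketch of one simple member of the family (SIRT) written against a generic system matrix. CAMPrecon's actual interfaces are not reproduced here; the matrix A, the measured projections y, and the iteration count are placeholders, and regularization terms would be added on top of this basic update.

```python
# Hedged sketch: Simultaneous Iterative Reconstruction Technique for y ≈ A x,
# where A is a (n_measurements, n_voxels) system matrix (dense or scipy sparse).
import numpy as np

def sirt(A, y, n_iter=50, eps=1e-12):
    row_sums = np.asarray(A @ np.ones(A.shape[1]))   # forward-projected ones
    col_sums = np.asarray(A.T @ np.ones(A.shape[0])) # back-projected ones
    R = 1.0 / np.maximum(row_sums, eps)
    C = 1.0 / np.maximum(col_sums, eps)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = y - A @ x                 # mismatch in projection domain
        x += C * (A.T @ (R * residual))      # weighted back-projection update
    return x
```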
Efficient Interpolation Methods for Physical Models in Medical X-ray Computed Tomography X-ray Computed Tomography (CT) has been one of the cornerstones of medical imaging for many decades now. The tomographic reconstruction of CT is quite well understood theoretically and practically, but many open research issues remain. A central point for any reconstruction method is the projector and back-projector pair, which models the interaction process of X-rays with matter, the detection process in the detector and the acquisition geometry. Several standard methods for this are described in the literature, each with specific advantages and disadvantages. Common to all these methods are high computational requirements, necessitating the use of parallel computing. | IDP/Klinisches Anwendungsprojekt | Christoph Hahn | |
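As an illustration of what a projector with an interpolation model looks like, the following is a simplified 2D ray-driven forward projector that samples the image with bilinear interpolation along equidistant points on each ray (a simplified cousin of Joseph's method). The parallel-beam geometry, square image, and step size are assumptions; such a projector could be paired with an iterative scheme like the SIRT sketch above.

```python
# Hedged sketch: parallel-beam sinogram of a square 2D image via bilinear
# interpolation along each ray.
import numpy as np
from scipy.ndimage import map_coordinates

def forward_project_2d(image, angles, n_det=None, step=0.5):
    n = image.shape[0]
    n_det = n_det or n
    center = (n - 1) / 2.0
    t = np.linspace(-center, center, n_det)         # detector coordinates
    s = np.arange(-center, center + step, step)     # samples along the ray
    sino = np.zeros((len(angles), n_det))
    for i, a in enumerate(angles):
        # Detector axis (cos a, sin a); rays run perpendicular to it.
        xs = center + t[:, None] * np.cos(a) - s[None, :] * np.sin(a)
        ys = center + t[:, None] * np.sin(a) + s[None, :] * np.cos(a)
        vals = map_coordinates(image, [ys, xs], order=1, mode="constant")
        sino[i] = vals.sum(axis=1) * step            # approximate line integrals
    return sino
```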
IDP: Human-Robot Collision Prevention in the OR The "operating room of the future" will in many situations be equipped with various robotic assistance systems, ranging from imaging and support systems all the way to fully robotic surgery suites. In all of these situations, though, human actors will still remain active around the patient, either as assistant and nurse for surgeries with only minimal or temporary robotic support, or as technical assistants in fully robotic interventions. As robotic systems gain more context awareness, intelligence, and autonomy with increasing processing and algorithmic power, the interaction between robotic and human actors becomes an increasingly more important research area, both for the safety of involved humans, as well as for the acceptance of the robotic systems. As a first step towards this goal, the robotic systems must know the pose of all humans in its vicinity. The aim of this project is to detect and track people and the robot in the complex environment of the operating theatre through depth sensors. | IDP | ||
Development and Evaluation of a Remote User Interface for Intra-operative Imaging Devices Due to the rapid technical development in recent years, studying the usability of a system has become increasingly popular. Usability is seen as a crucial factor for the success of many systems and products. In a complex domain like the operating room (OR), creating systems with high usability is even more important, as deficiencies in the design can have catastrophic effects. Due to the complexity of medical devices and the associated environment, it is very challenging to build systems with high usability. In this work we try to build a generic and highly usable framework for remote controls of medical devices. The framework will be developed for the Apple iPod Touch, so surgeons can remotely control a medical device. The device analyzed is the declipseSPECT, which is manufactured by SurgicEye GmbH. With the help of the declipseSPECT device, 3D SPECT images can be created intra-operatively by scanning the patient with a tracked radiation detector. This information is displayed on a touch screen monitor, which is also used for user interaction. The goal of the IDP is to analyze the current clinical workflow with declipseSPECT and to conduct a user study to identify a subset of the most important user interaction elements. Depending on the results, a mobile application will be developed which allows surgeons to remotely control the medical device themselves by putting the mobile device in a sterile foil and placing it on the instrument table, in contrast to the usual routine, where a technical assistant operates the touchscreen of the medical device following the surgeon's instructions. Since surgeons have limited time during surgery, it is very important for them to have a user interface that is simple enough to include only the most needed components and easy to understand. For this reason, the focus of this work is to develop a highly usable mobile application, which needs to be evaluated by conducting proper usability experiments. | IDP | Max-Emanuel Hoffmann and Tobias Konsek | |
Regularization Terms for Deformable Registration | IDP | Christian Konrad | |
Development of an Automatic Segmentation, Measuring, and Planning Tool for the Aorta and Aneurysms More and more aneurysms of the thoracic and abdominal aorta are treated minimally invasively by implanting a stent graft inside the aorta. Without opening the patient, the clinicians rely on imaging techniques, e.g. computed tomography, magnetic resonance imaging and X-ray, in order to visualize the region of interest and for intraoperative navigation. Unfortunately, not all imaging modalities are available during the operation; some are only available preoperatively. In 2003 the STENT project explored the prospects of using computer aided imaging techniques for preoperative planning and intraoperative navigation. Within the project a first prototype application was developed, but due to time constraints it never made it to the operating room. This project is the continuation of the 2003 STENT project and its goal is to provide the clinicians with a preoperative planning tool and an intra-operative navigation and visualization tool. Preoperatively acquired computed tomography images can be used for segmentation, visualization and metric measurement of the aorta and aneurysm. Intraoperatively acquired X-ray images are aligned with the CT data, thus aiding navigation by supplying a three-dimensional visualization of an anatomically detailed model, metrics, and current as well as planned locations of the stent graft. | IDP | Oliver Kutter | |
Building and Programming an FPGA-Based LED Sensor/Display Numerous researchers have explored the possibility of using LEDs as light sensors as well. One extensive paper with lots of background information can be read here. However, in order for this technique to be of practical use in large displays, a fast controller with a large number of IO lines is necessary. As this goes beyond the capabilities of common microcontrollers, an FPGA-based solution is our goal here. | IDP | Andreas Dippon | |
Investigation of Local Phase for medical imaging | IDP | ||
Development of a Matlab analysis toolkit for 3D lookup tables Gamma cameras are frequently used nowadays in nuclear functional imaging procedures in cancer diagnostics and interventions. In order to obtain 3D reconstructed images of a radioactive distribution using mini gamma cameras, tracking and modeling of these devices is needed. Tracking is currently performed optically, but for robotic SPECT acquisitions mechanical tracking will be used due to its higher accuracy. To model the gamma camera's acquisition characteristics, a large set of measurements of a radioactive point source in front of the gamma camera at different positions has been acquired, and this large dataset, called a lookup table, now has to be analyzed to improve future acquisitions and the quality of the reconstructed 3D images. We are interested in evaluating this dataset to extract important parameters of the camera and to aid in characterizing different cameras or different collimators without having to repeat all these measurements, which is very time-consuming (it takes a couple of weeks) and error-prone. It also helps us to come up with an analytical mathematical model, which we could use to further improve the 3D reconstructions in speed and quality. More specifically, a Matlab-based graphical user interface should be developed to load, process, analyze and visualize lookup table datasets. Parameters like the homogeneity, the ellipsoidal shape of single pixels at different distances, and sparseness should be obtained, and conclusions regarding lookup table acquisition and its usage should be derived. | IDP | Chen Hsuan Shih | |
Human-Machine Interaction in Clinical Environments based on Wearable Inertial Sensors | IDP | Andreas Schaumeier | |
Meta-learning for Medical Image Segmentation | Project | ||
Online Workflow Recovery Workflow recovery deals with the problem of identifying related phases of two recorded processes, given one of them has been annotated as desired. Once the relation has been established, the information can be used for, e. g., documentation purposes or process optimization. On the other hand, training feedback can be given by synchronizing a trainee’s 3D hand movements to those of an expert surgeon [7]. In the latter case, dynamic time warping (DTW) has been used, which has also been applied successfully in the context of statistics, speech recognition or error detection in industrial processes. Our motivation stems from the analysis of surgical operations. The offline recovery of the workflow allows for, e. g., automated documentation, but also for general OR workflow optimization. The importance of this issue in the OR of the future has been underlined in the OR2020 workshop. Further work on this subject is based on hidden Markov models to analyze the skills of the surgeon. However, their focus is on extraction and analysis of single tasks, whereas here the analysis of the whole process, i. e., the complete surgery, shall be emphasized. | IDP | Michael Tautschnig | |
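Since the description relies on dynamic time warping for aligning a recorded process to an annotated reference, here is a minimal textbook DTW sketch. The 1D feature sequences and the distance function are illustrative assumptions; in workflow recovery the sequences would typically be multi-dimensional signal or feature vectors per time step.

```python
# Hedged sketch: classic dynamic time warping between two sequences, returning
# the accumulated cost and the optimal warping path.
import numpy as np

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1, j],      # insertion
                                                     D[i, j - 1],      # deletion
                                                     D[i - 1, j - 1])  # match
    # Backtrack the optimal alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```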
Fusion of in-vivo and ex-vivo microscopy volumes of mouse brain samples Confocal and sectioned microscopy are two complementary techniques which can, if properly fused add valuable information to each other. The goal of this project is to develop techniques for image fusion and enhancement after aligning confocal microscopy volumes with volumes reconstructed from a stack of sectioned microscopy. | IDP/Klinisches Anwendungsprojekt | Eleni Siampli | |
Mosaicing Multi-spectral Images of Heritage Paintings In this work we propose using multispectral imaging as an aid to the restoration process of heritage paintings and also to help peel back centuries of discoloration and layered impurities. To ensure that small enough details can be recovered, mosaicing will be employed to reconstruct a very high resolution image of the artworks in question. | IDP/Klinisches Anwendungsprojekt | Anne-Claire Morvan | |
MR-Based Attenuation Correction for PET | IDP | Loren Schwarz | |
Adaptive Multiple Camera Calibration The goal of this work is the implementation of a robust approach to multiple camera calibration. This type of calibration has gained significant attention in past years due to its application to 3D model recovery. | IDP/Klinisches Anwendungsprojekt | Amin Abouee | |
Camera Calibration for a Multi-Camera Environment Camera calibration is an important topic for any vision-based system and has applications in a wide range of engineering fields. It deals with finding the intrinsic parameters of a camera, such as its focal length; its extrinsic parameters, i.e. its location in a world coordinate system; and the coefficients for correcting the radial lens distortion. When multiple cameras are involved, the calibration task becomes more challenging because in general it is not possible for all cameras to see the target at the same time. This means that there are no correspondences across all the cameras. In order to obtain a calibration for all cameras, methods have to be applied which can deal with missing data during the calibration process. These methods include algorithms which try to fill in missing data as well as algorithms which perform partial calibrations using a subset of the cameras and then merge these results either sequentially or hierarchically. In this work several of these methods will be implemented and compared in terms of ease of use and accuracy. | IDP | Martin Dummer | |
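The per-camera building block of such a pipeline is standard checkerboard calibration; pairwise extrinsics between cameras sharing views would then be chained or merged as the text describes. The file paths and board geometry below are placeholder assumptions.

```python
# Hedged sketch: single-camera calibration with OpenCV's checkerboard pipeline.
import glob
import cv2
import numpy as np

board = (9, 6)                     # inner-corner counts of the checkerboard (assumed)
square = 0.025                     # square size in meters (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("cam0/*.png"):                   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms, "\nIntrinsics K:\n", K)
```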
Development of a GUI for multimodal interventional imaging The development of multimodal, interventional imaging requires intuitive user interfaces that combine and display data from different imaging modalities (Ultrasound, SPECT, PET) and other sources (system control messages). Our group has a focus on prostate diagnosis and participates in various projects for the development of new detectors and the definition of new workflows. | IDP | Jakob Weiss | |
ABCDE Rule for Multispectral Images of Nevi | IDP/Klinisches Anwendungsprojekt | Christoph Baur | |
Texture mapping with Multi-spectral Images for dermatology The student will implement a modified texture mapping approach for multi-spectral images that should also work with regular RGB images. In this project the student will also work with real images acquired at the dermatology department as well as synthetic images. | IDP/Klinisches Anwendungsprojekt | Sabahattin Giritli | |
Visualization of multi-spectral image data for dermatology | IDP/Klinisches Anwendungsprojekt | ||
User interface development and evaluation of a navigation system for bronchoscopy In bronchoscopy a physician examines the bronchial tree of a patient with a camera and medical instruments. To access regions outside the view of the camera, the only feedback on the position of the instrument is X-ray imaging, and in these X-ray images the final lesion is not visible. The preoperative CT scan plus an electromagnetic tracking device are therefore used to provide feedback on the current instrument location in real time. The work will be conducted in the pneumology department of the Klinikum rechts der Isar, Max-Weber-Platz. A first fully functional and tested prototype was developed by Julian Much based on the software framework CAMPAR. This software is based on C++, OpenGL and QT. The work consists of refining the user interface for navigation purposes and conducting clinical experiments together with Dr. Hautmann. | IDP | Arne Wirtz | |
3D Object Detection and Flow Estimation in Dynamic Surgical Enviroments | IDP | Dilara Gokay, Enis Simsar | |
Computer-assisted Spectral Quantification of Disease Progression for Cutaneous T-cell Lymphoma | IDP | Alexandru Duliu | |
Image Registration and Merging for a new Optical Tomography | IDP | Moritz Blume | |
Development and Implementation of a User-Friendly Tool for Analysis of the Patella Cartilage Using Magnetic Resonance Tomography The detection and monitoring of arthrosis (degenerative arthropathy) is a major issue for physicians in clinical routine due to its very frequent occurrence, especially in older people. At Klinikum Grosshadern, a newly developed MRT (Magnetic Resonance Tomography) sequence provides information about the bio-chemical composition and architecture of the patella cartilage (joining the kneecap and the cartilage of the adjacent bones). In order to prove the clinical significance of this novel MRT sequence, its reproducibility shall be shown for patient data acquired in short time intervals. This involves segmentation of the patella cartilage and registration of different image sequences. The segmentation will be done model-based since contrast differences between patella and bone cartilages are rather small. Registration will first be performed rigidly, i.e. only rotation and translation parameters are to be recovered. Also, it has to be evaluated whether an elastic approach would give better results once the patient data is acquired over longer time intervals. The IDP covers the implementation and evaluation of the above mentioned segmentation and registration techniques. Talking about requirements and results with the doctors is crucial in this IDP due to the novelty of the approach. It emphasizes the interdisciplinarity of the project and gives interesting insight into radiological routine. | IDP | Lorenz König | |
Patient Data Privacy in ORUse Framework In clinical studies there is an abundance of medical data, which needs to be treated cautiously due to patient data privacy issues. Although the patients sign documents to confirm that the medical/personal data related to their disease may be distributed to third parties such as different departments of the hospital, it still needs to be ensured that the critical parts are at least pseudo-anonymized, meaning that at least the names are kept secret and only the physicians responsible for the study can map the names to the study IDs of the patients for quality assurance. At Klinikum rechts der Isar, TU München, a clinical study called "SLN3D" has recently been started, in which the nuclear medicine and gynecology departments on the medical side and the CAMP chair on the technical side are involved. In this study, 150 patients are planned to be recruited, and the Freehand SPECT (fhSPECT) system developed by the CAMP chair and the company SurgicEye GmbH is to be used intraoperatively in SLNB procedures of breast cancer patients. For this study, medical data such as preoperative nuclear imaging like SPECT/CT, scintigraphy and fhSPECT are to be saved in addition to the intraoperative fhSPECT. All of these different modalities have different formats and characteristics, which should be analyzed and investigated for proper anonymization and storage. In the case of preoperative fhSPECT, the screenshots generated by the system even include the faces of the patients, which needs to be considered as well. For the evaluation of intra-operative imaging devices, an extensible and flexible data gathering and evaluation framework called ORUse has been developed at the CAMP chair. The first device modeled in this framework is the Freehand SPECT system mentioned above. Until now, only the automatic output of the Freehand SPECT system is included in the model, without patient characteristics or additional available information. However, involving the preoperative data inside the framework would provide a great platform for further investigations, such as analyzing the surgeries based on different parameters much faster and more intuitively. In order to do this, the data should be pseudo-anonymized beforehand. The goal of this interdisciplinary project is to develop an application which will later be used by the study investigators for fast (pseudo-)anonymization of multiple acquisitions involving different modalities, data types or requirements such as scintigraphy, SPECT/CT and Freehand SPECT. The programming environment will be C# and Visual Studio. This tool will then be used to upload and integrate the preoperative medical imaging data inside the ORUse framework. | IDP/Klinisches Anwendungsprojekt | Kristina Bayer and Peter Maximilian Hirschbeck | |
Attenuation map estimation in PET/MR reconstruction Attenuation correction of positron emission (PET) data is mandatory for the reconstruction of diagnostic images. In the context of future simultaneous whole body PET/MR devices, it would be desirable to derive attenuation information from magnetic resonance (MR) images. However, the field of view of the MR is limited, leading to incomplete images. Recently, several approaches have been proposed to obtain the missing attenuation information from the raw PET data. The main goal of this project will be the investigation and implementation of a pre-selected algorithm from the state-of-the-art literature. | IDP/Klinisches Anwendungsprojekt | ||
Cardiac PET-SPECT Registration Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) are two imaging modalities which show complementary aspects of the myocardial perfusion and metabolism. Registration of PET and SPECT cardiac images of the same patient acquired in different days would allow a clinical advantage by effectively combining the information from both modalities. The purpose of this IDP is to establish a platform for accurate and robust PET-SPECT registration. This project is a collaboration between CAMP-AR and the Nuclear Medicine Department, Klinikum Rechts der Isar der TU München. | IDP | Brian Jensen | |
Development of a flexible PET system model class The main goal of this project will be the design and implementation of a C++ class representing a generic system model. This class should encode the position and properties of all the detectors of the PET system, as well as provide the basic geometrical primitives for the computation of the system matrix. A simple interface to define new system geometries is required. | IDP/Klinisches Anwendungsprojekt | ||
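Purely to illustrate what "detector positions plus geometric primitives for the system matrix" might look like, here is a small sketch written in Python rather than the project's C++, with a simple factory-style interface for defining a new (cylindrical) geometry. All field names, units, and the line-of-response primitive are assumptions.

```python
# Hedged, language-agnostic sketch of a generic PET system model.
from dataclasses import dataclass
import numpy as np

@dataclass
class Detector:
    position: np.ndarray   # crystal center, mm
    normal: np.ndarray     # crystal face normal (pointing toward the axis)
    size: tuple            # (width, height, depth), mm

class PETSystemModel:
    def __init__(self, detectors):
        self.detectors = list(detectors)

    def line_of_response(self, i, j):
        """Endpoints of the LOR joining detectors i and j (system-matrix primitive)."""
        return self.detectors[i].position, self.detectors[j].position

    @classmethod
    def cylindrical(cls, radius_mm, n_per_ring, ring_z_mm):
        """Simple interface for defining a new cylindrical geometry."""
        dets = []
        for z in ring_z_mm:
            for k in range(n_per_ring):
                phi = 2 * np.pi * k / n_per_ring
                pos = np.array([radius_mm * np.cos(phi), radius_mm * np.sin(phi), z])
                normal = np.array([-np.cos(phi), -np.sin(phi), 0.0])
                dets.append(Detector(pos, normal, (4.0, 4.0, 20.0)))
        return cls(dets)
```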
Clinical evaluation of freehand SPECT for sentinel lymph node biopsy in breast carcinoma This project covers the clinical evaluation of freehand SPECT (fhSPECT) for sentinel lymph node biopsy in breast cancer. It comprises preparatory tasks for the application of fhSPECT in the operating room, an application for image processing between different modalities, and the evaluation of measured results. The first part is to observe the workflow of a standard sentinel lymph node biopsy and record the duration of the different phases, so that this can serve as a reference for developing fhSPECT in terms of both hardware and software and provide suggestions for the practical application in the operating room. The second part is to design the experiment for applying fhSPECT with patients. It shows that acquisition using fhSPECT takes only 8.77 minutes on average and does not cause damage or discomfort for the patients. The reconstructed image from the measurement data shows a good match in terms of detected sentinel lymph nodes compared to planar scintigraphy. The last part describes the detection of the 3D positions of sphere centers appearing in the CT slice images using a two-phase Hough transform; these serve as reference points for image processing between different modalities. A simple application is developed for the detection and some practical issues are discussed. | IDP | Mei-Chuan Chen | |
Multimodal Prostate Segmentation The objective of this IDP is to develop and implement advanced segmentation algorithms in order to allow automatic segmentation of the prostate gland in MR and US volumetric images. Your task would be to familiarize yourself with the challenges posed by MR and especially trans-rectal ultrasound (TRUS) imaging, as well as state-of-the-art algorithms suited to tackle a detailed segmentation. You will then implement the algorithms in CAMPVis, the visualization framework developed at our chair, and iteratively evaluate the algorithms' performance with an extensive available dataset. | IDP | ||
Prostate MRI and Ultrasound Registration The objective of this IDP is to develop and implement advanced registration algorithms for Prostate MRI and US volumetric images. Your task would be to familiarize yourself with the challenges posed by MR and TRUS imaging, as well as state-of-the-art registration algorithms. You will then implement the algorithms in CAMPVis, the visualization framework developed at our chair, and iteratively evaluate the algorithms' performance with an extensive available dataset. | IDP | ||
Segmentation of Psoriasis with multispectral Images Erythema is redness of the skin or mucous membranes, caused by hyperemia of superficial capillaries. It is the primary symptom of diseases such as psoriasis, with the extent of the lesion indicative of both the stage of its development and its response to treatment. In this work we investigate multiple state-of-the-art methods to reliably segment such lesions from multispectral images under realistic lighting conditions. | IDP | Saahil Ognawala | |
Radiation exposure and protection in radionuclide-guided surgery Radionuclide-guided surgery is becoming more and more popular due to the emergence of novel intra-operative imaging modalities like, but not limited to, FreehandSPECT. Intra-operative modalities like this provide real-time guidance to the OR staff, especially the surgeon, in making correct decisions, and therefore have a growing impact on the workflow. One question to be answered is: what is the dose and what is the effect of the radiation exposure on the OR staff? | IDP/Klinisches Anwendungsprojekt | Radu Diaconescu | |
Clinical evaluation of freehand SPECT for 3D thyroid scintigraphy | IDP | Xinxing Feng | |
Intraoperative Registration and Visualisation for NOTES Natural Orifice Transluminal Endoscopic Surgery (NOTES) is an increasingly used surgical technique for minimally invasive procedures. Due to the limited field of view and restricted motion, orientation of the endoscope can be difficult, and navigation systems promise valuable support to endoscopists. In this project, a method is to be developed to register an electromagnetic tracking system with the actual patient position, together with an appropriate visualisation. The work will be performed at the university hospital Klinikum rechts der Isar, in close collaboration between the groups "Minimally invasive Interdisciplinary Therapeutical Interventions" (MITI, Prof. Feußner) and "Computer Aided Medical Procedures" (CAMP, Prof. Navab). If you are interested or if you have questions, please contact Tobias Reichl. | IDP | Ayah Haidar | |
Regularization of spherical functions in medical imaging of X-ray anisotropic dark-field signals Medical imaging modalities such as X-ray Computed Tomography (X-ray CT) or Positron Emission Tomography (PET) have been the basis of accurate diagnosis in clinical practice for decades, one particular example being the detection of tumors. But medical imaging also plays a central role during therapy, for example when planning complex surgeries or when planning and monitoring radiation therapy treatments. New X-ray contrast modalities, such as phase-contrast and dark-field contrast, have been developed in the last few years, based on a breakthrough in grating interferometry, with many promising clinical applications ranging from breast cancer detection to the diagnosis of osteoporosis. | IDP/Klinisches Anwendungsprojekt | Stefan Haninger | |
Respiratory Motion Estimation Respiratory motion has a big impact on medical imaging of the thorax area and needs to be studied and corrected. Different sensors are clinically available to noninvasively measure the respiratory state of the patient, including an infrared camera system, a pressure detector, a spirometer and a temperature probe. Each sensor uses different physical properties and measures respiratory signals which need to be correlated to the internal organ motion. The goal is to evaluate and compare the different sensors. Experimental measures will be done on volunteers and signal & image processing techniques will be used to analyze the data. The results will then be applied and evaluated for clinical studies. If this goal is achieved on time, the feasibility of a more general patient motion correction using the infrared camera system will be studied. | IDP | Michael Riedel | |
Bringing a robotic SPECT/CT prototype into the OR The objective of this project is the refinement and preparation for its first clinical use of an interventional robotic SPECT/CT prototype. This entails the development of a user-friendly GUI and the optimization of the core data processing module. | IDP/Klinisches Anwendungsprojekt | ||
Investigation of the logfiles of freehand SPECT acquisitions for usage characteristics and surgical phase determination At Klinikum rechts der Isar, TU München, a clinical study called "SLN3D" has recently been started, in which the nuclear medicine and gynecology departments on the medical side and the CAMP chair on the technical side are involved. In this study, 150 patients are planned to be recruited, and the Freehand SPECT (fhSPECT) system developed by the CAMP chair and the company SurgicEye GmbH is to be used intra-operatively in sentinel lymph node biopsy (SLNB) procedures of breast cancer patients. Until now, more than 50 patients have been recruited for the study and have already undergone an SLNB with fhSPECT. For the evaluation of intra-operative imaging devices, an extensible and flexible data gathering and evaluation framework called ORUse has been developed at the CAMP chair. The first device modeled in this framework is the Freehand SPECT system mentioned above. For each acquisition the fhSPECT system automatically saves all available information into text files, such as tracking, user interaction or activity logs. Even though these files include almost all the information about how the users used the system, they are hard to interpret by the end users due to their size, the redundancy in the data and the synchronisation via timestamps. Until now, only the automatic output of the Freehand SPECT system is included in ORUse, with no filtering or analysis of the huge amount of log data. Proper analysis of these log files can be useful to approximate the phases of the surgical workflow and to estimate their durations. Also, due to the freehand nature of the device, different scan patterns and/or speeds can be identified and investigated for their influence on the resulting images. The goal of this interdisciplinary project is to develop an application, as sketched below, which will later be used by the study investigators to analyze the medical data for usage characteristics and surgical workflow durations. The programming environment will be either C++ or C# and Visual Studio. This tool will then be used to upload and integrate the results inside the ORUse framework. | IDP/Klinisches Anwendungsprojekt | Richard Voigt | |
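The core of such an analysis is turning timestamped log events into phase durations. The "ISO timestamp followed by event name" line format and the event names in this sketch are purely hypothetical, since the real fhSPECT log layout is not documented here.

```python
# Hedged sketch: derive phase durations from hypothetical timestamped log lines.
from datetime import datetime

def phase_durations(lines, phase_events=("SCAN_START", "SCAN_STOP")):
    """Return seconds spent between successive start/stop events."""
    pending = {}
    durations = []
    for line in lines:
        stamp, _, event = line.strip().partition(" ")
        if event == phase_events[0]:
            pending["start"] = datetime.fromisoformat(stamp)
        elif event == phase_events[1] and "start" in pending:
            durations.append((datetime.fromisoformat(stamp)
                              - pending.pop("start")).total_seconds())
    return durations

# Example with a hypothetical log excerpt:
# phase_durations(["2013-05-02T10:15:00 SCAN_START",
#                  "2013-05-02T10:23:46 SCAN_STOP"])   # -> [526.0]
```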
Skin Segmentation The student is tasked to prepare a documented software package which segments skin from either color or gray-scale images. | IDP/Klinisches Anwendungsprojekt | Alexander Schoch | |
Segmentation of CT images of shot soap blocks Projectiles from firearms are among the penetrating foreign bodies encountered in routine forensic medicine. Commonly used techniques for the assessment of the extent of the injury and for localization diagnostics are conventional radiography and CT. The localization of a projectile is possible by means of tomographic techniques in three-dimensional space. During the reconstruction of the shooting channel, the fundamentals of ballistics cannot be ignored. This project is about so-called "soap blocks" that have been shot with different calibers from various distances and were then examined with CT. The work will be done in close collaboration between the Radiology Department of LMU Klinikum Innenstadt and "Computer Aided Medical Procedures". | IDP/Klinisches Anwendungsprojekt | Anastasia Tseneklidou | |
Development of an automatic multimodal in-situ visualisation tool For the treatment of spine fractures that fulfil certain instability criteria, the minimally invasive approach has become widely accepted over the last decade. The reduction of access morbidity, without having to accept diminished surgical effectiveness, has turned endoscopic operations from exceptional interventions into standard procedures in spinal surgery. In minimally invasive surgical approaches, the pre-operative planning phase is of utmost importance, as the positions of the portals in relation to one another and to the operating site significantly affect the entire course of the operation. The surgeon has to find - using the X-ray image amplifier in combination with the preoperatively acquired CT or MRI data - a proper projection of the target lesion onto the surgical area. This requires the surgeon to look at a screen, posing the problem of mentally mapping the medical images onto the patient's body. Particularly the lack of the third dimension makes this procedure very difficult. The goal of this project is the development of a visualisation tool which provides the surgeon with preoperatively acquired imaging data registered in 3D and augmented directly onto the surgical object, using a video-see-through HMD. This would supply the surgeon with a more intuitive view and a better spatial feeling, hence yielding better results for the placement of portals. | IDP | Latifa Omary, Philipp Stefan | |
Detection and 3D Recovery of Stent Grafts in 2D Xray Sequences In the current clinical workflow of endovascular abdominal aortic repairs (EVAR) a stent graft is inserted via an introducer system through one femoral artery into the aneurysmatic aorta under 2D angiographic imaging. Due to the missing depth information in the X-ray visualization, it is highly difficult in particular for junior physicians to place the stent graft in the preoperatively defined position within the aorta. Therefore, methods for accurate stent graft recognition or segmentation in fluoroscopy images are highly required. | IDP/Klinisches Anwendungsprojekt | Radhika Tibrewal | |
3D Surface Reconstruction for freehand SPECT The freehand SPECT system, developed originally at the Chair for Computer-Aided Medical Procedures and the Nuclear Medicine Department at Klinikum rechts der Isar, is an intraoperative imaging and navigation device for applications in the field of surgical oncology. It enables the creation and visualization of images of radiolabeled tissue of the patient, for example tumors and lymph nodes. The purpose of this IDP project is to extend this device with a 3D reconstruction of the surface of the patient. The availability of the surface information may allow improvements in image quality as well as a joint display of anatomy (the surface of the patient) and functional information (the reconstructed radioactivity distribution inside the patient). For example, if the bounding surface of the patient is known, only activity spots inside the patient need to be reconstructed, which improves both the speed of the reconstruction and the quality of the reconstructed images. Furthermore, if the video feed is not available, the reconstructed SPECT image has no relationship to the patient anatomy; with a 3D model of the patient, one could still see where the activity spots are located in the patient. Another possibility to improve the device would be to project the image directly onto the patient, for which a 3D surface model of the patient is absolutely necessary. These are only some of many possible advantages that can be achieved through this work. The result of the project is an application which can reconstruct the surface of a virtual 3D object from rendered images of two different views. The program is capable of showing the corresponding images (segmented or not) as well as the reconstructed surface as a mesh in 3D. In this sense, this project prepares the basis for the integration of surface information in freehand SPECT. | IDP/Klinisches Anwendungsprojekt | Gel Han, Christopher Resch & Christian Wiesner | |
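The geometric core of reconstructing surface points from two views is triangulation from calibrated cameras. The projection matrices and the matched 2D points (e.g. from silhouette segmentation or feature matching) are assumed to be given in this sketch; meshing the resulting point cloud is a separate step.

```python
# Hedged sketch: two-view triangulation of matched image points with OpenCV.
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """P1, P2: 3x4 float projection matrices; pts1, pts2: Nx2 matched image points."""
    homog = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float64),
                                  pts2.T.astype(np.float64))   # 4xN homogeneous
    xyz = (homog[:3] / homog[3]).T                             # back from homogeneous
    return xyz   # Nx3 points, which can then be meshed (e.g. Delaunay/Poisson)
```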
Survey and Analysis of Mathematical Methods in Texture Classification with Application to Ultrasound | IDP | Florian Schulze | |
Semiautomatic segmentation of tumors in Positron Emission Tomography The goal of this project is the semi-automatic segmentation of tumors in Positron Emission Tomography (PET) data. The basic part of the project consists of implementing pre-selected segmentation algorithms, identifying further algorithms suited to this specific problem (literature review), and extending the analysis to dynamic (4D) PET data. | IDP/Klinisches Anwendungsprojekt | Patrick Wucherer | |
Decoupled resolution aware operators for 3D reconstruction of Light Field Microscopy data Light field microscopy is a scanless technique for high speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. | IDP | |
Extraction of Vessels from Angiograms Many applications in computer-aided surgery require an automatic and exact localization of vessels in contrasted radiographs. However, the segmentation of 2D images is more difficult than segmenting 3D images because less information is contained in the data. In addition, radiographs are projections, leading to occlusions (of vessels with other anatomical structures like bones as well as with other vessels). The objective of this project is the implementation and testing of different methods for vessel segmentation based on eigenvalue analysis. | IDP | Titus Rosu | |
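A common instance of such eigenvalue-based segmentation is a Frangi-style vesselness measure computed from the eigenvalues of the image Hessian. The single-scale formulation and the parameter values below are assumptions; a practical filter would combine several scales.

```python
# Hedged sketch: single-scale 2D vesselness from Hessian eigenvalues.
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
    Ixx = gaussian_filter(image, sigma, order=(0, 2))
    Iyy = gaussian_filter(image, sigma, order=(2, 0))
    Ixy = gaussian_filter(image, sigma, order=(1, 1))
    # Closed-form eigenvalues of the 2x2 Hessian at every pixel.
    tmp = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    mean = (Ixx + Iyy) / 2.0
    l1, l2 = mean + tmp, mean - tmp
    swap = np.abs(l1) > np.abs(l2)              # sort so that |l1| <= |l2|
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    Rb = np.abs(l1) / np.maximum(np.abs(l2), 1e-12)   # blob-vs-line measure
    S = np.sqrt(l1 ** 2 + l2 ** 2)                    # second-order structure
    v = np.exp(-Rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
    v[l2 < 0] = 0.0   # keep dark tubular structures; flip the test for bright vessels
    return v
```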
Visual Servoing for Camera Augmented Mobile C-arm | IDP | Thomas Weich | |
Augmentation of Segmented Liver Data onto the Endoscope View One way to treat liver tumors is laparoscopic surgery. To help the physician find these tumors in the laparoscopic view, a preoperatively generated model of the liver is fused with the video from the laparoscope. This application is developed to work on a phantom currently being developed at MITI. | IDP | Stefan Wiesner | |
Workflow analysis of beta-probe guided surgery In minimally invasive tumor resection, the desirable goal is to perform a minimal but complete removal of cancerous cells. In the last decades, interventional nuclear medicine probes have supported the detection of remaining tumor cells. However, scanning the patient with an intraoperative probe and applying the treatment are not done simultaneously. In the past we extended the one-dimensional signal of a nuclear probe to a four-dimensional signal including the spatial information of the distal end of the probe (current status). This signal can then be used to guide the surgeon in the resection of residual tissue and thus increase its spatial accuracy while allowing minimal impact on the patient. The next step is to prepare clinical experiments and integrate the solution into the clinical workflow. The student in charge of this project will contribute to that step by analysing the clinical workflow of beta-probe-guided surgery from the perspective of our group (CAMP research on workflow). | IDP | |
(Collaborative) Design Platform Creative design work on the computer is still cumbersome and inefficient. This affects both single-user and multi-user scenarios. One of the decisive problems is the ill-suited human-computer interaction of current computer systems. Within an interdisciplinary project between the Augmented Reality group (Prof. Gudrun Klinker, Ph.D.) and the Chair of Architectural Informatics (Prof. Dr.-Ing. Frank Petzold), a (collaborative) design platform is to be developed and realized at a 1:1 scale. Starting from the requirements of architects in the early phases of the design process and the challenges of linking the digital and the analog world, new interaction possibilities are to be investigated and explored. Building on this, a prototype scenario will be developed and implemented at a 1:1 scale in order to make creative design on the computer more efficient and intuitive. The topic can be taken as a Diplom or Master thesis or as an IDP project in the winter term 2010/2011 and will be worked on in teams of computer scientists and architects. | IDP | Evi Andergassen-Sölva, Violin Yanev | |
Design and development of an information presentation (prioritization and transformation) manager in a driving simulator To support the driver in the driving task and to increase driving comfort, advanced driver assistance systems (ADAS) have been developed (active cruise control, lane departure warning, navigation, collision warning). With the increasing number of separately developed ADAS, the amount of information the driver needs to handle also increases. In addition, in the case of uncoordinated ADAS, the danger of simultaneous feedback exists, which can be fatal, especially in crisis situations. To prevent simultaneous feedback, information management (prioritization and transformation) is necessary. As part of your work you will do literature research on what kind of information is available. Situations should be classified on the basis of their complexity and the driver's state. An algorithm for information presentation should be developed and implemented in the fixed-base driving simulator of the Chair of Ergonomics (Faculty of Mechanical Engineering). | DA/MA/BA | |
In-situ Visualization of Surgical Instruments in Medical Augmented Reality Medical augmented reality enables in-situ visualization of medical data. Tissue and bones, e.g. the spinal column, can be presented at their proper position on the patient using a stereoscopic head mounted display (HMD) or a monitor-based setup. Instruments, tracked by an external optical tracking system, can be augmented and guided inside the body of the patient, where the vision of the observer, in this case the surgeon, is restricted by skin, blood and tissue. Besides navigated surgery, such a system can be very helpful for teaching anatomy and for diagnosis. This project aims at improving the perception of the relative and absolute position of tracked surgical instruments like a drill, resection tools, an endoscope, etc. inside the patient. After different experiments in which in-situ visualization overlaid on phantoms was shown to the surgeons of our clinical partners, we identified the presentation of surgical instruments, besides the presentation of medical data, as one of the most important tasks to make this kind of data presentation acceptable. Here the surgeon is able to interact with the medical data and knows about the position of the tip of the instrument when parts of the visualized medical data occlude the augmented instrument or the other way round. However, this is only one approach to provide visual perceptive cues through interaction. Your task will be the integration of known interaction paradigms from computer graphics and the creation of new ones aligned to the AR scenario. Medical data taken from CT or MRI scans can only be superimposed on real objects recorded by the cameras attached to the HMD. This means that virtual tissue and bones always occlude the real skin and seem to be located outside the human body. Your project will also address this problem. However, we have solutions for this task, like a virtual window overlaid onto the skin, which can be integrated into your application. Finding the right visualization of surgical tools is very important to make augmented reality acceptable for different surgical tasks, e.g. the implantation of pedicle screws in spine surgery. | Diploma Thesis | |
Lymph Node Detection and Segmentation in MR Images Lymph nodes are critical anatomical structures that reflect the progress of many diseases. One important application, for example, is to estimate cancerous metastasis status by observing their sizes in CT or contrast-enhanced images. To achieve accurate estimation, high-quality segmentation of the lymph nodes is necessary. Currently, the prevalent way of obtaining it is manual detection and delineation by clinical experts. Unfortunately, this is extremely time-consuming and heavily dependent on the experts' experience. Automatic lymph node detection and segmentation methods that provide consistent and accurate results are therefore highly desired. | Master Thesis | |
Real-time Recognition of Current States in the OR Using Suitable Sensors In this master thesis, methods are to be researched and prototypically implemented, in cooperation with Klinikum Rechts der Isar and industry partners, that make it possible to automatically recognize different states in the operating room. In a first step, suitable sensors are to be identified. For this, related scientific work as well as work from industry will be consulted. According to the current state of the art, depth sensors as well as simple RGB cameras appear suitable; this assessment may still change based on the student's literature review. Depending on the results of the literature review, a realistic concept that can be implemented within the time frame of this thesis will then be created together with the supervisors and staff of Klinikum Rechts der Isar. Subsequently, the implemented system is evaluated based on the requirements defined at the beginning. The identified advantages and disadvantages, as well as strengths and weaknesses, then form the basis for future work in this context. | DA/MA/BA | |
Depth-based hand tracking for RGBD Augmented C-arms | DA/MA/BA | ||
Optimization of the Candascent Finger Tracking Library In this master thesis, the existing open-source library Candascent is to be extended and optimized. The library uses the k-means algorithm to determine clusters based on depth data from the Kinect. Subsequently, the hand contour is extracted and the individual fingers are identified by suitable geometric reasoning. The library has already been adapted to the Kinect 2 prototype. The task of the student is to become familiar with the algorithm, to understand and document its parameters, and then to adapt these parameters dynamically to the distance between the hand and the Kinect. Further ideas for optimizing the library are welcome. At the end of the master thesis, the old, the new and an alternative system should be compared in an evaluation. | DA/MA/BA | |
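The sketch below illustrates the kind of processing described for the topic above: k-means clustering of Kinect depth pixels followed by contour extraction with OpenCV. It is not the Candascent API; the function name, depth thresholds, and number of clusters are illustrative assumptions, and the thesis would additionally make such parameters depend on the measured hand distance.

<verbatim>
# Sketch only (not the Candascent API): cluster foreground depth pixels with k-means,
# then extract one contour per cluster. Thresholds and k are placeholder assumptions.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def find_hand_contours(depth_mm, near=400, far=900, k=2):
    """depth_mm: 2D uint16 depth image in millimetres (e.g. from a Kinect)."""
    ys, xs = np.where((depth_mm > near) & (depth_mm < far))   # foreground pixels
    if len(xs) == 0:
        return []
    pts = np.column_stack([xs, ys, depth_mm[ys, xs]]).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=5).fit_predict(pts)  # one cluster per hand (assumed)
    contours = []
    for c in range(k):
        mask = np.zeros(depth_mm.shape, np.uint8)
        cluster = pts[labels == c]
        mask[cluster[:, 1].astype(int), cluster[:, 0].astype(int)] = 255
        cs, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if cs:
            contours.append(max(cs, key=cv2.contourArea))      # keep the largest blob per cluster
    return contours
</verbatim>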
Virtual 3D Menu Grabber This work is about implementing an AR application which allows a smartphone or a tablet to be pointed towards virtual 3D menus and grab them onto the phone. We expect that it is much more comfortable to interact with the menu on the smartphone instead of trying to hit the virtual menu in 3D space. If you are interested in this thesis, please contact benzina@in.tum.de. | DA/MA/BA | |
Improving the Accuracy of Indoor Localization In this thesis, existing indoor localization techniques are to be investigated and evaluated. Such a system is currently being developed at TRUMPF Medizin Systeme GmbH + Co. KG within the scope of a doctoral thesis. This system is to be improved and evaluated as part of this master thesis. The student will be supervised by the doctoral candidate Gel Han and can benefit from his expertise. Furthermore, the current state of the art in this context is to be researched and assessed. | DA/MA/BA | |
Master Thesis: Evaluating Usability through Standardized Guided Interviews Various quantitative questionnaires are commonly used to assess the usability of a user interface. In a previous project, such questionnaires were analyzed and classified, and numerous open questions for qualitative guided interviews with users were derived from them. These questions were organized into a category system. The resulting concept foresees that absolute statements about usability and concrete usability weaknesses are derived from the qualitative interview data. In this thesis, this initial concept was examined for its practical suitability: the qualitative usability concept was analyzed and refined both conceptually and in several usability evaluations of user interfaces, and its suitability was demonstrated. | Master Thesis | Carmen Rudolph | |
Transformer-Based Regression Model for Metabolite Quantification in MR Spectroscopic Imaging MRS data is composed of 1D spectra that can quantitatively characterize the metabolic composition of in-vivo tissue. This is especially useful for characterizing brain tumors. The primary drawback of this data type is the extensive and costly pre-processing and analysis necessary to prepare and annotate the data. Attempts to accelerate this work using deep learning are a budding, active research field. Transformers were initially developed for NLP tasks. However, recent research has shown them to be highly effective for image classification in computer vision as well. Due to the spatial nature of MRS data, CNN models from computer vision have been effective for this quantification task. Therefore, we would like to explore the use of transformers to assess their potential for this computer vision-based regression task. | Master Thesis | |
Integration and Optimization of Workflow Information into a Cognitive System In this thesis, existing workflow diagrams from previous work are to be examined, extended/optimized, and implemented using a workflow management tool. This is done in cooperation with Klinikum Rechts der Isar and partners from industry. The student gets the opportunity to gain valuable experience in scientific work in close cooperation with industry and will be supervised by employees of TRUMPF Medizin Systeme GmbH + Co. KG as well as the clinic. Furthermore, the student gains contextual knowledge in the medical domain; prior medical knowledge is not required. In addition, the student obtains valuable contacts in industry, medicine and research. | DA/MA/BA | |
Validation of Navigated Beta Probe Application in Cancer Surgery In minimally invasive tumor resection, the desirable goal is to perform a minimal but complete removal of cancerous cells. In the last decades, interventional nuclear medicine probes have supported the detection of remaining tumor cells. However, scanning the patient with an intraoperative probe and applying the treatment are not done simultaneously. In the past we extended the one-dimensional signal of a nuclear probe to a four-dimensional signal including the spatial information of the distal end of the probe (current status). This signal can then be used to guide the surgeon in the resection of residual tissue and thus increase its spatial accuracy while allowing minimal impact on the patient. The next step is to prepare clinical experiments and integrate the solution into the clinical workflow. The student in charge of this project will contribute to that step by designing ex-vivo and in-vivo experiment protocols, preparing the experimental setup and evaluating the experiments. | Project | |
Shape Analysis on Meshes or Point Clouds from Medical Data In this project we explore the feasibility of employing geometric deep learning or point cloud-based deep learning for extracting useful features from shapes in medical images. | DA/MA/BA | ||
Magnetic Resonance Imaging of Cardiac Tissue | Master Thesis | ||
Action Recognition and Generation With RNNs Human action understanding and generation is a high-level concept and a very active topic in computer vision, where deep learning methods have made significant progress. In this project, we aim to understand and generate realistic human actions from a given video input. We will use an adversarial training algorithm [2, 3] to train Recurrent Neural Networks [1]. More specifically, Generative Adversarial Networks (GANs) will be used to reduce the discrepancy between training the network and sampling from it over multiple time steps. We will compare the accuracy and efficiency of this approach with other recent methods. Literature: [1] Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural Computation 9.8 (1997): 1735-1780. [2] Goodfellow, Ian, et al. "Generative adversarial nets." Advances in Neural Information Processing Systems. 2014. [3] Lamb, Alex M., et al. "Professor forcing: A new algorithm for training recurrent networks." Advances in Neural Information Processing Systems. 2016. | DA/MA/BA | |
Adversarial Multiple Hypothesis Prediction In a recent work we studied the possibilities of predicting multiple hypotheses with a single CNN. This helps in cases where the outcome is not certain and multiple outcomes are possible. (see our arxiv paper for details). One of the applications that we were investigating was video prediction, that is predicting a future frame of a video sequence. Recently, adversarial learning has been shown to be able to predict high quality images by learning the loss instead of explicitly defining it. We believe that a combination of adversarial ideas with our MHP model could result in better images and multiple different predictions for the future. | DA/MA/BA | Ignacio Sarasúa | |
Follow The Magic Lens Interaction in an immersive virtual environment (IVE), such as a CAVE or in our case the FRAVE, is an important issue to investigate, in order to be able to select and manipulate the virtual objects displayed in the VR world. The user can do this through magic lenses, where the FRAVE provides the contextual view and the tracked handheld device the focus view. The view frustum is calculated from the handheld device's position and orientation. | Master Thesis | Alba Huelves | |
Predicting Alzheimer's Disease | Master Thesis | ||
Rohde & Schwarz: Deep feature representation with auxiliary embedding While convolutional neural networks (CNNs) show outstanding results in various computer vision tasks, the deep feature representations within these models often lack transparency and discriminative power. By introducing additional regularization to the networks, one hopes to find more robust and discriminative embeddings of the feature representations such that for instance different classes cluster nicely. In fact, previous work successfully employed regularization techniques under supervised and semi-supervised settings and reported improved robustness and better generalization on well known computer vision datasets. The work in this master thesis investigates how different regularization techniques for auxiliary manifold embedding affect the performance of the models - as opposed to previous work - for challenging medical data. Further, the work compares the results using both supervised and semi-supervised training. | Master Thesis | Christoph Baur | |
Master/Bachelor Thesis in Autonomous Driving in collaboration with Apex AI | DA/MA/BA | ||
Implementation of efficient computational modules for forward- and backward-projection in X-ray computed tomography X-Ray Computed Tomography (CT) has been one of the cornerstones of medical imaging for many decades now. The tomographic reconstruction of CT is quite well understood theoretically and practically, but many open research issues remain. A central point for any reconstruction method is the projector and back-projector pair, which models the interaction process of X-rays with matter, the detection process in the detector and the acquisition geometry. Several standard methods for this are described in the literature, each with specific advantages and disadvantages. Common to all these methods are high computational requirements, necessitating the use of massively parallel computing devices, such as GPUs. | Project | David Frank | |
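To make the projector/back-projector idea concrete, here is a deliberately naive sketch of a parallel-beam forward projector and a matched back-projector built from simple image rotations. It is illustrative only and assumes a square image; the methods named in the topic above (Siddon, Joseph, distance-driven, GPU kernels) model geometry and physics far more carefully.

<verbatim>
# Naive parallel-beam projector / back-projector pair (illustrative sketch only).
# Assumes a square image so the detector has as many bins as the image has columns.
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Line integrals along columns of the rotated image -> sinogram (n_angles x n_det)."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def back_project(sinogram, angles_deg, size):
    """Adjoint-like operation: smear each projection back across the image."""
    recon = np.zeros((size, size))
    for proj, a in zip(sinogram, angles_deg):
        smear = np.tile(proj, (size, 1))                  # constant along each ray
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)
</verbatim>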
Artifact Reduction in Digital Breast Tomosynthesis Reconstruction | Master Thesis | Shiras Abdurahman | |
Development of algorithms for reduction of artifacts caused by dental implants in cone-beam computed tomography X-ray Computed Tomography (CT) is one of the leading modalities for diagnostic imaging. In recent years, the development of cone-beam computed tomography systems, such as for dental or intraoperative applications, has indicated that the spectrum of applications of X-ray CT is extending into other medical fields like surgery, otorhinolaryngology, and many more. One major obstacle in such applications is the presence of implants in the field of view, which cause so-called metal artifacts. In this master thesis, we are planning to investigate possibilities to reduce metal artifacts, for example in situations where implants or fillings are in the region of interest. This project focuses on developing algorithms that correctly model the physical behavior of the imaging system in the presence of implants. The successful candidate should possess adequate system-programming skills (for example Matlab/Simulink, C/C++) and additional knowledge in parallel computing, such as the programming of GPUs. This project will be carried out in collaboration between the Department of Radiology and ImFusion GmbH. | Master Thesis | |
Weakly-Supervised Liver Lesion Localisation and Classification with Spectral CT Data The liver can develop a number of different lesions, which can be either benign or malignant. The correct diagnosis is crucial to plan further treatment for the patient. Medical experts can benefit from using spectral computed tomography (CT) when making their diagnosis. This technology provides additional information about the tissue and contrast agent compared to a conventional CT by measuring material-specific absorption properties. Automated liver lesion localization is currently an active field of research; however, it usually requires precise segmentation of the lesions for the training set. Since there are no publicly available spectral CT datasets at this point, weakly-supervised learning will be used in this thesis. The goal of this thesis is to first localize anomalies in the liver with a weakly-supervised convolutional neural network (CNN) and in a second step classify the lesions that were found by the first network. The impact of spectral CT on the network's localization and classification performance in comparison to conventional CT will be investigated. | Master Thesis | Julia Fokhul | |
A comparative study on unsupervised deep learning methods Deep learning has been growing in popularity in recent years due to its outstanding performance on high-dimensional data, being capable of establishing complex mappings between input and output. New techniques and network architectures are evolving in both the supervised and unsupervised deep learning domains. In this work, an overview of some of the recent successful techniques in unsupervised deep learning will be given. The focus of this work primarily lies on how such methods capture the underlying structure of complex input data. In particular, recent developments such as Autoencoders, Variational and Adversarial Autoencoders or Generative Adversarial Networks will be investigated. Through this survey study, a practical explanation of how and when to use these techniques will be derived. Different techniques are compared against each other, finding their strengths and weaknesses and giving concrete examples by applying these techniques to publicly available datasets, i.e. MNIST, CIFAR and some medical datasets. | DA/MA/BA | Nour Eddin Al-Orjany | |
Automatic Robotic Ultrasound Scan | Master Thesis | ||
Automatic Robotic Ultrasound Scan--Stage II | Master Thesis | ||
Thesis or Guided Research in Autonomous Driving related topics | DA/MA/BA | ||
Color Normalization for Histology Image Processing Automated image processing and quantification are increasingly gaining attention in the field of digital pathology. However, a common problem that encumbers computerized analysis is the color variation in histology, due to the use of different microscopes and scanners or inconsistencies in tissue preparation. This project is to address the issue of color inconsistency in histology and develop an effective normalization technique. | DA/MA/BA | |
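One classical baseline for the color-normalization topic above is Reinhard-style statistics matching, here sketched in OpenCV's LAB color space (more specialized stain-separation methods such as Macenko's would be part of the study; the function name and the choice of LAB are assumptions, not the project's prescribed method).

<verbatim>
# Reinhard-style color normalization baseline: match per-channel LAB mean/std of a
# source histology image to a chosen target image. Illustrative sketch only.
import numpy as np
import cv2

def reinhard_normalize(src_bgr, tgt_bgr):
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(tgt_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mean) / s_std * t_std + t_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
</verbatim>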
Detection of interventional tools in moving images Fluoroscopic images are used in intravascular guided interventions to help physicians steer the tools towards a desired location. The goal of this project is to use state of the art image processing and modelling techniques in order to separate the background from the tools, thereby improving their visualisation for easier guidance. The challenges we want to address are both the low signal to noise ratio as well as the deformations caused by motion, e.g. respiratory and/or cardiac. | DA/MA/BA | ||
Computer assisted optical biopsy for colorectal polyps. | DA/MA/BA | ||
Balloon catheter real-time simulation (BaCaRTS) http://campar.in.tum.de/view/Students/ProjectForm | DA/MA/BA | ||
Point- and Line-Feature Based Bundle Adjustment for a Real-Time Tracking System Bundle adjustment is a technique for improving the accuracy of 3D reconstructions. It is used for various problems, among them tracking systems. Bundle adjustment based on point features has already achieved good results. By additionally taking into account lines in the image, which carry extra information about a scene, the quality of bundle adjustment in a tracking system is to be improved, opening up new possibilities and application areas. The task of this thesis is to detect point and line features in the individual images, to establish correspondences between the images, and to integrate this information into the 3D reconstruction of points, lines and cameras as well as their refinement through bundle adjustment. | DA/MA/BA | Georg Barbieri | |
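For orientation, the point-feature part of bundle adjustment amounts to jointly refining camera poses and 3D points by minimizing reprojection error; a minimal sketch is shown below (pinhole camera, angle-axis rotations, no line features yet; the parameter layout and the commented starting point `x0` are assumptions for illustration).

<verbatim>
# Minimal point-only bundle adjustment residual (sketch). Line features would add further
# residual terms, e.g. distances of projected line endpoints to detected image lines.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed_xy, K):
    poses = params[:n_cams * 6].reshape(n_cams, 6)        # [rotation vector | translation] per camera
    points = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for ci, pi, uv in zip(cam_idx, pt_idx, observed_xy):
        R = Rotation.from_rotvec(poses[ci, :3]).as_matrix()
        p_cam = R @ points[pi] + poses[ci, 3:]
        proj = K @ p_cam
        res.append(proj[:2] / proj[2] - uv)               # pixel error for this observation
    return np.concatenate(res)

# result = least_squares(reprojection_residuals, x0, method="trf",
#                        args=(n_cams, n_pts, cam_idx, pt_idx, observed_xy, K))
</verbatim>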
Bayesian Vessel Tracking | Master Thesis | ||
Feasibility study of beta probes for intraoperative detection of malignancy Cancer is becoming one of the biggest problems in our aging society. The most common and still the most efficient therapy for cancer is tumor resection. One of the major issues in cancer resection is to guarantee that the complete tumor is included within the extracted tissue. Currently the most common method for checking the resection border is the frozen section procedure: the surgeon resects tissue suspected to be tumorous and the resected sample is then examined by a pathologist. If any cancer tissue is left on the borders of the resected tissue, the resection must be repeated. This is a rather time-consuming process. Intraoperative beta probes have been used in operating rooms since the early 90s, but physicians are not content with this technology, as current beta probe systems require long experience from the surgeon; finding and marking active tumor tissue is not a trivial issue. As a complement to the frozen section procedure, beta probe resection control was introduced. Using navigated beta probes and nuclear tracers, the surgeon can examine resection borders before and during the time needed for the frozen section procedure. Such a system can increase the accuracy and decrease the time of a cancer resection operation. Even though beta probes were introduced long ago, few experiments have been done on human patients, and besides technical evaluation little to nothing has been done towards evaluating the biological feasibility of beta probes. In this thesis we will focus on biological feasibility studies of using the beta probe intraoperatively. Two types of cells are examined, human bladder cancer and human foreskin fibroblasts. The thesis concentrates on proving the possibility of generating images of tumors and distinguishing tumor from non-tumor cells. The beta probe images are compared to images of human bladder cancer labeled with the luciferase enzyme. To generate stable and repeatable images, a scanning system using a mechanical positioning device is introduced. We will present a test of the system as well as data acquisitions to validate the system and to generate future model definitions. | Master Thesis | Jakub Bieniarz | |
An Iterative Reconstruction Framework for Surface PET: Positron Activity Surface Imaging using Tracked Beta Probes for Intraoperative Control of Resection Borders in Cancer Surgery | Master Thesis | Coskun Özgür | |
Vision-based Robotic Pick and Place (with KUKA Roboter GmbH) This project aims at integrating robust algorithms for object pose estimation and tracking as vision guidance for industrial robotic manipulators. Such vision-based control will be applied to industrial tasks such as bin picking and object pick-and-place from shelves. The main goal of the project is to achieve robust vision-based robot control using inexpensive sensors, in contrast to the expensive sensors that are standard in industry. The project includes both testing of the perception part and integration with the path planning and grasping algorithms. The project is carried out in collaboration with KUKA Roboter GmbH, one of the leading companies in the field of industrial robots. The student will often be working directly with KUKA Corporate Research, located in Augsburg (around 35 minutes by train from Munich HBF). The student will receive financial support from KUKA (in the form of a monthly stipend) for the duration of the Master Thesis. | Master Thesis | |
Augmented Reality in Transoral Robotic Surgery for Tongue Base Cancer Augmented reality, by superimposing a virtual model of a tumor onto the intraoperative endoscopic view, would allow the precise identification of the tumor boundaries and could improve the efficacy and the tolerance of carcinological surgery of the tongue base. To this end, some issues still need to be resolved in the field of augmented reality for soft tissues, notably intraoperative tissue deformation. The work in this project aims to offer a solution by creating, with the simulation software SOFA, a virtual biomechanical model of the tongue base area based on MRI data acquired preoperatively. The dynamic simulation created by this software will be superimposed in real time onto the intraoperative endoscopic view, which will allow the surgeon to visualize, during the surgery, the tumor boundaries and the tissue deformation caused by the surgery. The main objective of this internship is to explore this option and initiate the implementation of an augmented reality system for transoral robot-assisted surgery of tongue base cancer. | Project | |
RGB-D Object Detection with Deep Learning Detecting multiple 3D objects in a scene and estimating their 6DoF pose is a challenging task, especially in presence of clutter and heavy occlusions. Furthermore, scaling to many objects without increasing the runtime poses another challenging problem. With this thesis, we plan to advance the state of the art by developing a new 3D object detection approach based on the use of Convolutional Neural Networks (CNNs). | Master Thesis | ||
Lazy Camera Calibration Toolbox Development | Project | ||
Depth Estimation for Catheters from Single-View Interventional X-ray Imaging The recent success of convolutional neural networks in many computer vision tasks implies that their application could also be beneficial for vision tasks in cardiac electrophysiology procedures, which are commonly carried out under guidance of C-arm fluoroscopy. Many efforts for catheter detection and reconstruction have been made, but robust real-time detection of catheters in X-ray images in particular is still not entirely solved. In this project, we aim to build a CNN that is able to detect catheter tips and estimate their depth. | Project | Christoph Baur | |
Automatic Cell Tracking in Time Series of Holographic Microscopy Images | Master Thesis | ||
Cement diffusion real-time simulation | DA/MA/BA | tba | |
Handling the Imbalanced Data Problem in Chest X-ray Multi-label Classification Chest radiography is the most common imaging examination for screening and diagnosis of chest disease, and predicting the presence of chest radiographic observations is important for such screening. It is challenging to train deep neural networks on chest X-ray images due to the class imbalance problem. In this guided research project, we explore effective training methods to deal with class imbalance in multi-label classification when training deep neural networks on chest X-ray images. | Project | |
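One simple and widely used baseline for the imbalance problem described above is to weight the positive term of the binary cross-entropy by the inverse class frequency; the PyTorch sketch below shows this. The re-weighting rule itself is only one of the options such a project would compare, not a prescribed solution.

<verbatim>
# Per-class positive weighting for multi-label BCE (one standard baseline for class imbalance).
import torch
import torch.nn as nn

def make_weighted_bce(labels):
    """labels: (N, C) binary label matrix of the training set; returns a loss with
    pos_weight = (#negatives / #positives) per class."""
    labels = labels.float()
    pos = labels.sum(dim=0).clamp(min=1.0)
    neg = labels.shape[0] - pos
    return nn.BCEWithLogitsLoss(pos_weight=neg / pos)

# loss_fn = make_weighted_bce(train_labels)
# loss = loss_fn(model(images), targets)   # expects logits, not probabilities
</verbatim>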
Count rate saturation effects in handheld gamma detectors: Evaluation and influence on 3D intra-operative imaging Gamma detectors are widely used in nuclear medicine, especially for localization of malign structures (e.g. cancer) that are not palpable or not visible using other imaging modalities such as ultrasound or computed tomography. In radio-guided surgery, hand-held gamma detectors are used to guide the surgeon to structures marked by radiotracers such as tumors and metastatic lymph nodes. Combining such detectors with optical tracking systems enables 3D imaging inside the operating room to improve guidance and to verify that all potentially malign structures have been removed during surgery. However, image quality is degraded by several factors, one of which is the count rate saturation in such detectors. Count rate saturation can have different origins, e.g. dead time in the detection material (scintillation crystal or semiconductor) or pulse pile-up. The purpose of this project is to study this behavior with different detectors and come up with a model to adjust for it in order to improve image quality. The student working on this project will have the possibility of working in a clinical environment with patient contact as well as gaining experience in project management with an industrial partner. | DA/MA/BA | /twiki/pub/Main/AmalBenzinaStudentProjects/CountRateSaturation_Graph_Sample.png | |
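For reference to the count-rate-saturation topic above, two textbook dead-time models are the usual starting point for such a correction; whether either of them fits the hand-held probes in question is exactly what the project would have to evaluate, and pulse pile-up generally requires a more detailed model. With true event rate \(n\), measured rate \(m\) and dead time \(\tau\):

\[
m = \frac{n}{1 + n\tau} \quad \text{(non-paralyzable)}, \qquad
m = n\, e^{-n\tau} \quad \text{(paralyzable)}.
\]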
Deformable Registration of Cine-MRI Images for Colonic Motility Analysis Functional gastrointestinal disorders such as chronic diarrhea are chronic conditions presenting a significant socioeconomic burden and are usually associated with colonic motility dysfunction. Therefore, it is necessary to study colonic motility in order to understand its effects on such conditions, ultimately leading to an improved and more adequate therapy. Existing techniques such as manometry or scintigraphy are either invasive and have long examination times or expose patients to ionizing radiation, creating demand for a fast and non-invasive monitoring technique for the quantification of colonic motility. Functional cine magnetic resonance imaging (MRI) allows for non-invasive, fast dynamic imaging with a high tissue contrast. It provides the possibility to observe both anatomical and functional properties of the colon. Current analysis techniques are based on diameter measurements of the colonic lumen on cine-MR images. In this thesis, several deformable registration techniques for semi-automatic diameter measurements will be investigated. | IDP/Klinisches Anwendungsprojekt | Raphael Arias and Arianne Tran | |
Sparse Data Reconstruction and its Application in Spherical Computational Sonography Ultrasound imaging is widely used in clinics because of several advantages compared to other imaging modalities: it provides images in real time, is portable, and is substantially lower in cost, without requiring harmful radiation. However, to date, it is still highly dependent on a skilled operator, and the directional information of ultrasound is neglected. The clinical use of three-dimensional ultrasound technology is another area of intense research activity. State-of-the-art systems mostly perform compounding of the image data prior to further processing and visualization, resulting in 3D volumes of scalar intensities and loss of all directional information. Computational Sonography (CS) preserves this directional information of the acquired data and allows for its exploitation by computational algorithms. In model-independent Computational Sonography, different models of CS are compared to classical scalar compounding for freehand acquisitions, providing both an improved preservation of US directionality and improved image quality in 3D. Two models were proposed to store the direction-dependent information, a tensor-CS model and a spherical-CS model; however, in both cases the data is reconstructed sparsely, since only a limited number of segments on the sphere contain information. This project aims to overcome this problem and provide information for each segment on the sphere. To achieve this, a test bed will be provided to compare different reconstruction schemes on scattered data distributed on a sphere surface and to evaluate their application in spherical CS. | DA/MA/BA | Sara Hajmohammadalitorkabadi | |
The Influence of Confidence-Maps on MR/US Registration Recent advances in the development of ultrasound have led to significant improvements in the imaging quality and usability. However, shadowing artifacts, reflections, and attenuation are still significant problems. Confidence maps emphasize the uncertainty in the attenuated and shadow regions, which may contribute to the improvement of multi-modal image registration, such as MR/US. Recent developments in MR/US registration (such as MIND and LC2) show promising results, but still lack the ability to properly take the uncertainty of information in the US image into account. | DA/MA/BA | ||
Self-supervised learning via Contrastive Generative Models | Master Thesis | Anatasia Makarevich | |
Deep Learning-based 2D-3D Correspondence Matching for Object 6DoF Pose Estimation 6DoF object pose estimation is of great importance for augmented reality and robotic applications. More and more works first find 2D-3D correspondences between the object model and the image, and then solve the 6DoF pose using a Perspective-n-Point (PnP) algorithm. We designed a new way to estimate the 2D-3D correspondences. The goal of the thesis is to implement the 2D-3D correspondence matching with an encoder-decoder network and compare the estimated object pose with state-of-the-art approaches. Knowledge from structured light will also be used here to provide robust matching. | Master Thesis | |
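Once 2D-3D correspondences are predicted, the pose-recovery step itself is standard; below is a minimal sketch using OpenCV's RANSAC-based PnP solver. The correspondence-prediction network is the actual contribution of the thesis and is not shown; the RANSAC threshold and solver flag are placeholder assumptions.

<verbatim>
# Solve the 6DoF pose from predicted 2D-3D correspondences with RANSAC PnP (standard step).
import numpy as np
import cv2

def pose_from_correspondences(pts3d, pts2d, K, dist=None):
    """pts3d: (N,3) model points, pts2d: (N,2) image points, K: 3x3 camera intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K,
        distCoeffs=dist, reprojectionError=3.0, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers
</verbatim>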
Crowdsourcing Games for Medical Applications Crowdsourcing has been widely used for annotation, i.e. collecting ground truth, in the medical community. Many studies have shown that non-experts can perform as well as experts in crowdsourcing tasks if they are trained well; however, users are often not motivated to complete the task to the very end. Recently, we showed that games can play a crucial role in motivating the crowd in the annotation task. In this work, we want to extend this to many different use cases. | Bachelor Thesis | |
Object and System Model Generation for X-ray Computed Tomography | Master Thesis | Wangxin Liu | |
Efficient GPU Projectors for X-ray Computed Tomography X-Ray Computed Tomography (CT) has been one of the cornerstones of medical imaging for many decades now. The tomographic reconstruction of CT is quite well understood theoretically and practically, but many open research issues remain. A central point for any reconstruction method is the projector and back-projector pair, which models the interaction process of X-rays with matter, the detection process in the detector and the acquisition geometry. Several standard methods for this are described in the literature, each with specific advantages and disadvantages. Common to all these methods are high computational requirements, necessitating the use of massively parallel computing devices, such as GPUs. | Master Thesis | Michal Szymczak | |
CT Reconstruction with Object Specific Non-Standard Trajectories | Master Thesis | Andreas Fischer | |
Curvelet sparse regularization for differential phase-contrast X-ray imaging Advances in imaging hardware have enabled differential phase-contrast imaging (DPCI) with conventional X-ray tube sources. So far, iterative series expansion methods have been applied in a weighted maximum likelihood framework to reconstruct absorption and phase-contrast data using pixel basis functions. This work aims at using the curvelet frame, which provides an optimal sparse representation of C²-functions with singularities along C²-curves. We will integrate and evaluate different reconstruction techniques based on curvelet sparse regularization, including multiple discretization methods. We will show that curvelets further provide a suitable data representation for DPCI tomography while supporting an analytical formula for the forward model for both X-ray absorption and DPCI. In contrast to pixel basis functions, this enables a discretization-free formulation of the forward model. Within the scope of this thesis, the mathematical theory for curvelet-based phase-contrast X-ray reconstruction will be derived and methods for sparse regularization will be applied. Finally, we will apply our method to X-ray absorption and DPCI data from both phantom and real measurements. | Master Thesis | Matthias Wieczorek | |
Feature Visualization for Deep Neural Networks In this project we would like to explore possibilities to gain insight into the inner workings of deep (convolutional) neural networks. For many of our trained models we lack the ability to understand what the learned features look like, or which objects, textures, or parts of the image are important for the task. Many works have been proposed in this direction. The project will include a study of the state of the art in the field and begin with implementing one or more existing methods in our framework. From there, depending on the scope and the progress, new techniques could be developed. | Bachelor Thesis | Felix Grün | |
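One of the simplest existing techniques such a project could start from is activation maximization: synthesizing an input that maximally activates a chosen unit by gradient ascent. Below is a hedged PyTorch sketch; the model, layer, channel index, and hyperparameters are placeholders, and real implementations usually add regularization (jitter, blurring, frequency priors) for interpretable images.

<verbatim>
# Activation maximization sketch: gradient ascent on the input image to maximize the mean
# activation of one channel of a chosen layer. Unregularized, illustrative only.
import torch

def visualize_channel(model, layer, channel, steps=200, lr=0.05, size=224):
    img = torch.randn(1, 3, size, size, requires_grad=True)
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    opt = torch.optim.Adam([img], lr=lr)
    model.eval()
    for _ in range(steps):
        opt.zero_grad()
        model(img)                                  # forward pass fills acts["out"] via the hook
        loss = -acts["out"][0, channel].mean()      # ascend on the channel's mean activation
        loss.backward()
        opt.step()
    handle.remove()
    return img.detach()
</verbatim>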
Semiautomatic Registration from optical Ad-Hoc Tracking Modalities using CAD Models | DA/MA/BA | ||
Active Contours for Cardiac Segmentation In this thesis we propose a method for interactive boundary extraction which combines a deep, patch-based representation with an active contour framework. We train a class-specific convolutional neural network which predicts a vector pointing from the respective point on the evolving contour towards the closest point on the boundary of the object of interest. These predictions form a vector field which is then used for evolving the contour by the Sobolev active contour framework proposed by Sundaramoorthi et al. The resulting interactive segmentation method is very efficient in terms of required computational resources and can even be trained on comparatively small graphics cards. We evaluate the potential of the proposed method on both medical and non-medical challenge data sets, such as the STACOM data set and the PASCAL VOC 2012 data set. | Master Thesis | Elizabeth Huaroq | |
Uncertainty-Driven Active Learning in Deep Hybrid Models The availability of large amounts of quality labeled data is a fundamental challenge of modern supervised learning. Semi-supervised learning techniques try to leverage a small amount of labeled data together with a large amount of unlabeled data. However, choosing which data to label is not usually addressed. In this work, we explore the use of only uncertain data samples in training. We cluster our unlabeled data using a deep Gaussian mixture model. Uncertainty is then modeled using Monte-Carlo dropout, and uncertain data points are used to train our classifier. We validate our work on two toy datasets as well as a real-world medical application. | Master Thesis | Ahmed ElGazzar | |
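A minimal sketch of the Monte-Carlo dropout step mentioned above: keep dropout active at inference, run several stochastic forward passes, and rank samples by predictive entropy. The number of passes and the acquisition score are assumptions; this is not the thesis's exact pipeline.

<verbatim>
# MC dropout uncertainty sketch: T stochastic forward passes with dropout kept active,
# samples scored by predictive entropy. Illustrative only.
import torch

def mc_dropout_entropy(model, x, T=20):
    model.train()   # keeps dropout stochastic; in practice set only dropout layers to train mode
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(T)])
    mean_p = probs.mean(dim=0)                                  # (N, C) predictive distribution
    return -(mean_p * torch.log(mean_p + 1e-12)).sum(dim=1)    # entropy per sample

# scores = mc_dropout_entropy(classifier, unlabeled_batch)
# query_idx = scores.topk(k=32).indices   # most uncertain samples to use for training/labeling
</verbatim>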
Multimodal Deep Learning Diagnostic errors can harm patients and undermine public trust, yet many of them are preventable. Radiology is one of the specialities most liable to claims of medical negligence. Failure to detect all abnormalities in an imaging examination and to diagnose them accurately results in misinterpretation of radiologic images and oversight of abnormalities. Deep convolutional neural networks have led to a series of breakthroughs in image classification. CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. In this master thesis, we address the issue of misinterpretation of radiologic images by applying deep convolutional neural networks to computer-aided detection problems. The resulting predictive models will leverage medical imaging scans as well as available medical records and temporal information. | Master Thesis | Anna Zeldin | |
Deep Learning on visual data in the context of autonomous driving We are offering a thesis in collaboration with BMW focusing on the application of deep learning on visual data in the context of autonomous driving. Therefore we are looking for motivated students that ideally already acquired some experience in computer vision and/or machine learning. If you are interested to work on a thesis in this field please contact Jakob Mayr or Federico Tombari. | Master Thesis | ||
Deep Learning for Multi-View Stereo | DA/MA/BA | ||
3D Scene Understanding Leveraging Scene Graphs For a collaboration with researchers from Google, we are currently looking for a very motivated student interested in a master’s thesis project or guided research. The project involves topics including deep learning, 3D computer vision, transformer networks, and scene graphs. Scene graphs are a compact data representation that describes an image or 3D model of a scene. Each node of this graph represents an object, while the edges represent relationships/interactions between these objects, e.g. "chair - standing on - floor" or "guitar - leaning against - cupboard". The aim of the project is to explore novel architectures for scene graph manipulation based on current research trends towards improving the representation capabilities of scene graphs and address known issues in the representation of scenes. | Master Thesis | ||
Feature Visualization for Skin Lesions Deep neural networks have proven to give outstanding performance in classification tasks. However, understanding the learned features and the learning process is still challenging. In this project, we aim to investigate and visualize the relevant features that networks learn in the context of skin lesion classification. The project consists of developing software tools to assist dermatologists in the diagnosis of a skin lesion by providing both the classification result and a visualization of the features relevant for the diagnosis. | Project | |
Deep Stereo In this project we aim at developing a method for learning how to predict stereo disparities from a pair of images. We will start from the analysis of a recently published method (Zbontar and LeCun, 2015) and investigate different aspects aimed at improving this approach in terms of accuracy and efficiency. Another important goal of the project is to experimentally evaluate how well learned disparities can deal with noisy images under realistic working conditions. | Bachelor Thesis | Christoph Kick | |
3D Freehand Ultrasound Calibration with Convolutional Networks Ultrasound calibration, a means of determining the spatial relationship between the (2D) image coordinate system of the B-scan and a position sensor attached to the transducer, is a vital step for several applications. These range from US-guided interventions to 3D volume reconstruction. However, most calibration techniques either require expertise in ultrasound image acquisition, are limited in accuracy by physical phenomena, or are tedious to perform. We propose a technique to learn the calibration from an easily reproducible, high-accuracy phantom by means of deep learning techniques. The easy availability of Lego bricks and their minimal deviations from standardized sizes make them an attractive component for reproducibility. The aim of this work is to achieve an easily applicable calibration technique, which will be compared against other common calibration techniques in terms of accuracy. | Master Thesis | Arianne Tran | |
Evaluation of real-time dense reconstruction for robotic navigation | Master Thesis | ||
Self-supervised Monocular Depth Estimation with Structure Regularities | Project | ||
Depth Recovery from Single-View Interventional X-ray Imaging | Master Thesis | Ulrich Konrad | |
Detectability Indices for Anisotropic X-ray Dark-field Tomography Anisotropic X-ray Dark-field Tomography (AXDT) enables the visualization of microstructure orientations without having to explicitly resolve them. Based on the directionally dependent X-ray dark-field signal as measured by an X-ray grating interferometer and our spherical harmonics forward model, AXDT reconstructs spherical scattering functions for every volume position, which in turn allows the extraction of the microstructure orientations. Potential applications range from materials testing to medical diagnostics. | Master Thesis | Theodor Cheslerean Boghiu | |
A Framework for Detection and Depth Prediction of Medical Instruments in Interventional Imaging The recent success of convolutional neural networks in many computer vision tasks implies that their application could also be beneficial for vision tasks in cardiac electrophysiology procedures, which are commonly carried out under the guidance of C-arm fluoroscopy. In previous work, we managed to train a CNN that is able to (1) detect catheter tips in real time and (2) estimate the depth of detected electrodes. However, a few things still need to be improved: a higher-resolution detection map, and a graphical model that also works on noisy images. | Master Thesis | |
Computation of Detectability Indices for Medical Imaging Medical imaging is a well established tool for diagnosis. Most imaging modalities however have drawbacks, such as radiation exposure and long imaging times. To alleviate these drawbacks, optimized imaging protocols are an interesting topic of research. For cases where the imaging target is already known, it is often possible to compute feature descriptors enabling optimization of the acquisition protocol for this particular feature. A particular group of such feature descriptors are the detectability indices, which form the basis for this Master's thesis. | DA/MA/BA | ||
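For context on the feature descriptors named in the topic above, one commonly cited (non-prewhitening) form of a detectability index combines the system MTF, the noise power spectrum (NPS) and a task function \(W_{\mathrm{task}}\); whether this particular form would be used in the thesis is open:

\[
{d'}_{\mathrm{NPW}}^{2} \;=\;
\frac{\left[\int \lvert \mathrm{MTF}(f)\rvert^{2}\,\lvert W_{\mathrm{task}}(f)\rvert^{2}\, df\right]^{2}}
     {\int \lvert \mathrm{MTF}(f)\rvert^{2}\,\lvert W_{\mathrm{task}}(f)\rvert^{2}\,\mathrm{NPS}(f)\, df}.
\]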
Distributed SLAM - Jointly mapping 3D Geometry Exploring an unknown scene and self-positioning within it, is a common and well-studied problem in Computer Vision which is known as SLAM (Simultaneous Localization and Mapping). Core fields of application are autonomous cooperative robotics and vehicles as well as tracking and detection systems in the medical domain. Traditional methods target a single system, equipped with image sensors, exploring the scene and building up a map for localization (e.g. a single robot or drone moving within an unknown environment). New approaches also incorporate information from other sensors such as IMUs, gyro or GPS. Another objective for the determination of the position is outside-in-tracking of an object via marker tracking with external sensors, thus providing the relative position of an object with respect to the tracking system. To overcome the line-of-sight problem of outside-in-tracking, and the singularity constraint of traditional SLAM methods, the project aims to develop a distributed SLAM approach. Multiple systems (referred to as sensor nodes hereafter), equipped with an image sensor, contribute to a common map of the scene for localization, while being also tracked by outside-in-tracking for accuracy. Thus, accuracy and applicability can be elevated with a distributed SLAM approach, combining the information of multiple sensor nodes and an external tracking system. Furthermore, the necessity of complicated and error prone calibration processes for individual systems within one application scenario can be avoided. The objective is to develop a generative distributed SLAM approach for challenging scenes and applications. Features like loop detection and closing, pose graph optimization, re-localization and mapping should be extended to a distributed approach, also enabling scalability. | DA/MA/BA | Joe Bedard | |
Interaction Design for Upper-Limb Prosthetics Description of work Despite decades of active research in material science, machine learning, signal processing and biomedical robotics myocontrol, that is natural control of an upper-limb prosthesis via bodily signals, is still unreliable, almost primitive. The main reason for this impasse is, we think, that myocontrol needs constant update and feedback from the user, in order to become stable and safe in all possible situations of the everyday life of an amputated person. It is not a standard machine-learning problem, in which you gather data, pass it through a neural network, and then use the obtained model to control the prosthesis forever. No, here something more interactive is needed. Actually, in order to close this gap, a few years ago in our lab we introduced the concept of incremental myocontrol, meaning a myocontrol system which can readily incorporate new information on-demand without forgetting the past one (or forgetting it selectively), thus increasing its reliability along time. The user is called upon to provide novel data as the system deems that it is the case; as well, the system is ready to grow its dataset whenever the user independently deems that the performance in a specific situation is no longer acceptable. We have the technology: wearable and effective sensors, a realistic prosthetic setup, virtual reality as a start, knowledge on how to tune our favourite machine-learning approach. But we are missing the interaction protocol! What should the prosthesis look like, what functions and affordances should it provide? What should the timing be? What metrics to determine whether the interaction is effective? Starting out from Jean Piaget and Ernst Von Glasersfeld's tuition on Constructivist Psychology, and going through Don Norman's lessons on the Design of Everyday Things, you are required to conceive and build an interaction interface and protocol for upper-limb prosthesis users, which will improve the reliability of the device. You will need to prove that your idea works by carrying out a comparative user study. https://www.youtube.com/watch?v=y6uI5FLB9Q4 | Master Thesis | ||
Development of Pain Therapy Simulator (Facet joint infiltration) The facet joint infiltration is a procedure in minimally invasive surgery. It aims at pain reduction at the small vertebral joints in the area of the lumbar and thoracic spine. The procedure consists of the placement of a very fine needle in the vicinity of the small vertebral joints and the medical therapy is subsequently conducted at the location where the pain occurs. The objective of this research project is (a) the development of a procedural simulator for facet joint infiltration in close colaboration with medical partners of the Klinikum rechts der Isar (Neuro-Kopf Zentrum) and (b) to conduct a study investigating the suitability of the simulator for training. | DA/MA/BA | Tba | |
A hybrid method to compute the 3D dose in tissue samples during micro-CT image acquisition The aim of the project is to develop a software platform to compute the 3D distribution of the dose deposited in tissue samples during micro-CT image acquisition performed with monochromatic synchrotron radiation. The dose can be divided into two components: the primary dose deposited directly by the incident X-ray beam and the secondary dose deposited by scattered and fluorescence X-rays. Previous work carried out in the context of synchrotron stereotactic radiation therapy has shown that a deterministic algorithm can be used to compute the primary dose map [1], whereas a hybrid approach (a combination of Monte Carlo and deterministic calculations) is well suited to the calculation of the secondary dose [1,2]. The aim of the master internship is to adapt this method to assess the dose deposition that is inevitable (but optimizable) in CT image acquisition. Special care will be taken to relate optimal hybrid parameters to the geometrical/physical CT setup. The software development will be carried out in the framework of the Geant4/GATE Monte Carlo code. The final goal is to obtain a tool that permits the calculation of doses in biological samples and that can also be used to simulate the experiment and optimize the experimental procedure. | DA/MA/BA | |
Sensor System Development for Measuring and/or Characterizing Surgical Drain Fluid Output A large portion of today's surgical procedures relies on surgical drains to support and monitor surgical wound healing. Measuring the fluid output of surgical drains is currently done manually. An automated monitoring system would be beneficial to patient safety, since abrupt changes in fluid flow might indicate a clogged drain or excessive wound bleeding, with the latter requiring rapid intervention. A characterization of the drain fluid regarding color or reflectivity might yield early signs of a developing bacterial infection in the wound cavity, thus enabling early and effective countermeasures. The aim of this master thesis is to develop a sensor prototype suitable for the purpose stated above and to evaluate its performance according to criteria derived from the clinical need. The project should therefore begin with developing an improved understanding of the actual clinical need as a basis for deriving the sensor system requirements. This will be achieved through observations and interviews with physicians and nursing staff in the clinical environment. Based on the requirements, a literature review will be conducted to evaluate suitable physical measurement principles with the respective sensor technologies. A prototype will be developed by implementing the most suitable and economically affordable sensor technology. The performance of the prototype in the light of the aforementioned requirements will subsequently be evaluated in an artificial setting representing the clinical environment as closely as possible. | DA/MA/BA | Mario Roser | |
Dynamic objects in dense reconstruction | DA/MA/BA | ||
Dynamic User Interface Generation for Workflow-Sensitive Medical Displays In a modern operating room a plethora of medical devices are available, each coming with their own display and user interface. Due to sterility requirements though, the surgeon cannot directly interact with either device and has to rely on the circulator nurse to carry out his spoken commands on the devices. This level of indirection leads to delays and often frustration. Additionally to this, all devices have to be placed around the patient with a certain safety margin, thus not in an optimal viewing position for the surgeon. Therefore the surgeon usually has to interrupt his actions and turn towards the screen in order to use the displayed information. In this thesis a method should be developed to automatically generate a suitable UI based on a purely semantic and workflow-dependent description. The generated UI should mainly show a single, selected image source (e.g. pre-operative datasets or an intra-operative image stream) together with a limited number of important control elements. For this purpose specific medical interventions have to be analyzed in order to identify their surgical workflow and the image sources and control elements most important for them. Then a prototypical solution should be implemented and tested with medical partners. If time permits, devices with different form factors should be compared for optimal surgical usability. | Master Thesis | Mirije Shefiti | |
A General Reconstruction Framework for Constrained Optimisation Problems in Hyperpolarised 13C Metabolic MR Imaging. Magnetic resonance spectroscopic imaging with hyperpolarised 13C agents is a novel technique that examines cellular metabolic reactions in a minimally invasive fashion. Being acquired under the challenging conditions of rapidly decaying signal and a limited injected dose of the hyperpolarised substance, reconstructed images exhibit low signal-to-noise ratio (SNR) and artefacts. The goal of this work was to enhance the quality of the images by means of iterative reconstruction with a priori knowledge included in the form of additional constraints, and to propose a valid extension to the physical model of signal acquisition. | Master Thesis | Elena Nasonova | |
Detection Models for Emission Tomography Emission Tomography is a popular clinical tool in the diagnosis of diseases. Based on radioactive tracers marking suspicious targets (such as cancerous cells), emission tomography modalities like Single Photon Emission Computed Tomography (SPECT) or Positron Emission Tomography (PET) allow the non-invasive visualization of these tracers within the human body. Mathematically and algorithmically these modalities pose interesting research problems, in particular the inverse problem of tomographic reconstruction. Central to the reconstruction is a suitable physical model of the detection process. | DA/MA/BA | |
Estimation of radiation exposure The objective of the project is to estimate the radiation dose the hands of the surgeon receive. Using machine learning algorithms and the GEANT4 simulation framework, the radiation dose is predicted. The hardware setup consists of a mobile C-arm/surgical simulator with multimodal sensors. The sensors are capable of recording depth and RGB values. Based on these values, the surgeon's hands are detected. Knowing where the surgeon's hands are, a prediction of the applied radiation is made. | Master Thesis | Nicola Leucht | |
Evaluating Human Skills using Deep Neural Networks Recently, deep learning has had great success in various applications such as image recognition, object detection, and medical applications. On platforms such as YouTube and Vimeo, how-to videos are widely used to transfer the skills of experts: a reference video is captured for a specific task, and users can learn a new skill by following it. But how can the newly learned skills be evaluated? Usually, they have to be assessed by experts, which is costly. To address this problem, evaluating human skills from video is required [1-5]. Human skill evaluation is a research area in which researchers develop solutions to automatically assess human skills from video. This technology could be extended to surgical skill assessment in medical applications, where its accuracy becomes much more important [6-7]. In this project, we will develop a solution to automatically assess human skills from video. | Master Thesis | |
Surgeon monitoring during C-arm based pre-clinical study We are looking for a master's student to work on a preclinical study involving a new setup that overlays the X-ray image with video [1,2]. The preclinical study will see participants (expert surgeons) perform facet joint injection [3] with our new system as well as with X-ray only, for comparison. The goal of this study is to show that the new system can drastically reduce the radiation exposure of the surgeon during this daily performed procedure. The work will be in close collaboration with clinical partners at the LMU hospital in Klinikum Innenstadt (Sendlinger Tor). The preclinical study will be conducted in direct collaboration with a medical student (working on their Doktorarbeit) who will take care of the medical aspects of the study, while the master's student will take care of the technical aspects. | Master Thesis | |
A study of existing self-supervised depth estimations Self-supervised depth estimation shows promising results in outdoor environments. However, few works target indoor or more arbitrary scenarios. The goal of this project is to investigate existing self-supervised methods on real-world scenarios and possibly propose a way to improve their performance. | DA/MA/BA | Yinglong Feng | |
Explained Predictions for Neural Networks | Master Thesis | Sharru Möller | |
A flexible framework for realistic convolution-based real-time ultrasound simulation Medical ultrasound imaging is gaining importance due to its versatile usability both in pre-operative diagnostics and monitoring, as well as intra-operative navigation and real-time update of the surgical plan. The simulation of medical ultrasound imaging is an important task and has several application areas, such as pre-operative planning and simulation of the surgical procedure, intra-operative registration of a multi-modal pre-operative plan to the intra-operative scene (e.g. involving CT imaging), multi-modal registration (US-CT, US-MRI) as well as visual servoing of robotic ultrasound acquisition. Ultrasound simulation approaches are prone to trade-offs between speed and realism. Very fast approaches based on ray-casting of US images from CT deliver rough approximations of US images, which are useful e.g. for registration in real-time and intra-operatively. In contrast to that, physically realistic simulations based on e.g. models of wave propagation in human tissue yield images which match the real image closely, but these methods are typically very slow in execution. Convolution-based approaches occupy a middle ground between the two, allowing for very fast (i.e. real-time) simulation while offering realistic features such as speckle patterns. This thesis aims at fully understanding the physical process of ultrasound imaging and at developing a flexible framework for realistic and fast convolution-based ultrasound simulation. | DA/MA/BA | |
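As a rough illustration of the convolution-based idea mentioned above (not the framework to be developed in this thesis), the sketch below convolves a hypothetical scatterer map with a simple separable point spread function to obtain a speckle-like B-mode texture. The PSF parameters, the random scatterer map, and the envelope step are simplified placeholders.

```python
# Minimal sketch of convolution-based ultrasound simulation (illustrative only):
# a tissue scatterer map is convolved with an axial/lateral point spread function
# (Gaussian-modulated cosine along the axial direction) to produce speckle texture.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
scatterers = rng.normal(size=(256, 128))            # hypothetical scatterer amplitude map

ax = np.arange(-15, 16)
axial = np.exp(-(ax / 6.0) ** 2) * np.cos(2 * np.pi * ax / 8.0)   # axial PSF (pulse)
lateral = np.exp(-(np.arange(-7, 8) / 3.0) ** 2)                   # lateral PSF (beam width)
psf = np.outer(axial, lateral)

rf = convolve2d(scatterers, psf, mode="same")        # simulated RF-like image
bmode = 20 * np.log10(np.abs(rf) + 1e-6)             # crude envelope proxy + log compression
print(bmode.shape, bmode.min(), bmode.max())
```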
Feature Modulation in Multi-task Learning | Master Thesis | ||
Federated Learning @deepc | Master Thesis | ||
Privacy-Preserving Federated Learning in Medical Imaging | Project | Wasiq Kasam Rumaney | |
Meta-Learning in Medical Imaging Medical image classification has become competitive with human experts in some domains. However, its success seems contingent on the availability of large bodies of annotated data, and it is often difficult and expensive to acquire such datasets. High data requirements are a general issue in modern Deep Learning, and increasing sample efficiency is one of the fundamental research problems today. Meta-learning is one of the directions taken to alleviate the need for huge datasets via learning to learn. The goal of this project is to build on work presented in (Snell et al. 2017; Finn et al. 2017) to find approaches to image classification which are sample efficient and adaptable to Semi-Supervised Learning. | IDP | Ahmed Ayad | |
Few Shot Segmentation in Medical Imaging Semantic segmentation performs a pixel-wise classification, assigning a class or background label to each pixel of an image. This problem requires a very large dataset of pixel-level annotations, which is often unavailable or very costly to create. The aim of this project is to build a state-of-the-art few-shot deep learning technique for medical images, which can derive a semantic segmentation of a new, previously unseen class from only a few densely or sparsely annotated medical images. | Master Thesis | Abhijeet Parida | |
Spatial Context Management for Augmented Reality Applications The thesis describes the development of the Spatial Context Ontology Reasoning Environment (SCORE), which supports the management of explicit and implicit spatial context for Augmented Reality applications. Augmented Reality (AR) forms the link between reality and Virtual Reality by augmenting the user's environment with virtual objects and information that can be interacted with. In general, head-mounted displays are used to present the augmented view to the user. This research field is investigated at the chair for Computer Aided Medical Procedures and Augmented Reality at the Technische Universität München. Among many other things it covers new user interaction paradigms, sensor analysis and the access to structured information in intelligent environments. A common requirement for Augmented Reality systems is not to restrict the users' scope of motion. For supporting mobility the DWARF framework is frequently employed. It represents a distributed AR framework providing ad hoc interoperability of services that self-assemble into AR systems. AR applications must efficiently handle spatial information of real and virtual objects in order to provide a “natural” feeling of the augmented world to the user. In addition to mere location-awareness, they increasingly involve situational aspects related to the user. As a matter of fact, the management of spatial context increasingly gains importance for applications in AR. This thesis presents a service-oriented framework that is aimed at facilitating the integration of these aspects. It follows a novel approach in spatial context management that incorporates traditional coordinate-based context models and contextual ontologies that are well-known from research efforts with respect to the Semantic Web. Thus, efficiency is combined with a declarative knowledge representation enabling knowledge sharing and reuse. Besides the explanation and discussion of the general research topics the work is based on, the thesis presents a first application using the presented framework. It is a proactive safety system for vehicles and is part of an interdisciplinary research project for user-centered driver assistance that is supported by techniques of Augmented Reality. | DA/MA/BA | Jan-Gregor Fischer | |
Fluid Simulation - Realistic simulation of cement flow The objective of this research work is the formulation of a computational model for realistic real-time simulation of the brittle, porous material of the bone and the interaction of bone cement and instruments with it. This is achieved using a GPU-accelerated mesh-free SPH approach based on the CT imaging data. The research goal contributes to the complete simulation of a minimally invasive surgery (MIS) called “kyphoplasty”. The procedure is conducted on patients with fractured vertebrae in order to set the spine into an upright position and stabilize the vertebrae. First, a cavity is created in the vertebra with a balloon catheter. Afterwards, the cavity is filled with cement. | DA/MA/BA | |
Development of a Process for Calibrating Data Gloves in Virtual Environments Give the Vulcan salute - we have two wireless data gloves, each equipped with 14 sensors, waiting for a motivated student to integrate them into the project VirtualArabia. | Master Thesis | Michael Förster | |
3D Scene Semantic Segmentation Using GANs Voxel data for indoor scenes contains different kinds of objects. Here we develop novel methods for 3D scene segmentation and refinement. Based on previous work using generative adversarial networks, we optimize the model using deep learning. Because 3D convolutional operations consume a great amount of memory, we aim to design methods to reduce the amount of GPU computing resources required. | Master Thesis | |
3D GAN for conditional medical image synthesis and cross-modality translation GANs in 3D, especially in medical imaging applications, are challenging in many respects, mainly due to the 'curse of dimensionality' and the limited available data. The goal of this project is to develop an optimal strategy for scaling GANs to 3D that generalizes well for conditional medical image synthesis. The student will be provided with all-round support, including a good research environment, sufficient computational resources and active guidance to make the thesis successful. | Master Thesis | |
GUI Design and Development for Angiography-Based Catheter Tracking tba | IDP | ||
User Interface Design and Evaluation for Multimodal Imaging Today's medicine relies heavily on the combination of different imaging and tracking modalities to form new systems for Image-Guided Therapy. Recent research shows promising results in the field of image registration and fusion. However, when adopting these novel techniques in the clinical environment, the systems are often inappropriate in terms of usability, design and simplicity. This thesis will investigate user interfaces for medical applications; the student will participate in the development of an interface for the EU project EndoTOFPET and evaluate its usability in close cooperation with clinical personnel. | DA/MA/BA | |
Gamification in Sensor Registration Gamification - The use of design elements characteristic for games in non-game contexts (Deterding et al.) Over the course of recent years, Gamification has become one of the most rapidly growing research fields, a development that has by now led to its widespread adoption in industry and multiple fields of computer science. We aim to apply elements of Gamification to ease users into essential registration/calibration tasks. Given an example process: In sensor-to-sensor registration, two devices track a target to calculate their respective positional/rotational offset. To do this, the user is required to hold a tracking target visibly in both views and move it freely around. The user's task specifically comes down to: - Keep the target in the view of both sensors at all times. - Move it slowly. - Avoid any sudden changes in movement. - Angle the marker in all directions while keeping it visible to both sensors. - Cover the biggest possible area with the target. After a given time, the precise offset is calculated. The focus of this project is to gamify this calibration process and evaluate the results. Applying students will be allowed to implement their own Gamification ideas for this process and evaluate them against the non-gamified approach. The tracking functionality will be provided in a given package. | DA/MA/BA | |
Interactive Interface for Active Learning | Project | ||
Gamification Catalog Gamification has become a driving factor in many modern applications and fields. Language learning, public info terminals, even social services contain game elements for their motivational effect on user behaviour. But which elements affect us most strongly? Can we categorize these ideas and rank them by strength? For this project, we want to create an easily distributable mobile application containing some very basic functionality. Different gamification ideas can be added and enabled to measure their effective impact on users. Tasks | DA/MA/BA | |
Gestyboard 2.0 This thesis is about continuing the implementation of a multitouch keyboard (the Gestyboard) based on gestures, which can be used with 10 fingers at the same time. A well-known problem in this field is the lack of haptic feedback. This is the reason why the performance of a virtual keyboard is much lower than the performance of a real keyboard. The goal of this thesis is to find out whether users can be as performant with the new, innovative multitouch keyboard as with a real keyboard. Therefore, different alternatives of the same concept should be implemented and evaluated. There is already a concept defined, but students are welcome to bring their own ideas into the project. | Master Thesis | Christian Wiesner | |
Graph synthesis for medical applications | Master Thesis | ||
Guided Attention Segmentation Networks | IDP | Rene | |
High Dimensional Regression Using Deep Neural Networks In recent years, Convolutional Neural Networks (CNNs) have been proven to outperform state-of-the-art approaches in several Computer Vision applications. In this master's thesis, we focus on the problem of high dimensional regression. One example is depth estimation from a single image. Starting from a state-of-the-art multi-scale CNN architecture, depth predictions are performed on the NYU Depth dataset in two stages: an initial coarse prediction, using fully-connected layers to include global information, and a second stage which enhances the prediction with details at a finer (local) scale. Challenges include scale ambiguity, high dimensionality and the continuous nature of the output. The final aim of this work is the implementation of a general-purpose high-dimensional regression network, applicable to a set of tasks, such as (the inverse problem) predicting an RGB image from depth, predicting color from grayscale, next frame prediction in a video sequence and potentially medical applications. | Master Thesis | Iro Laina | |
High frequency ultrasound and micro computed tomography cochlea image fusion for intra cochlear navigation | DA/MA/BA | ||
Towards Human-Like Predictor with Rejection Option Recently, measuring statistical uncertainties in deep neural networks has been an important issue in various safety-critical applications such as autonomous driving and computer-aided diagnosis. However, training a predictor that has a rejection option without performance degradation is still an unsolved problem. In this project, we will investigate a novel method (i.e. a human-like predictor) in which the neural network can reject uncertain samples. The main goal of the human-like predictor is to learn deep neural networks that know what they can and cannot do. It would be important to calibrate the uncertainty of the prediction while maintaining accuracy. | Master Thesis | |
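A minimal sketch of one common baseline for a rejection option (not the novel method this project targets): defer samples whose maximum softmax probability falls below a threshold. The threshold value and the example logits are arbitrary placeholders.

```python
# Illustrative baseline: classification with a simple confidence-based rejection
# option -- samples below the softmax-confidence threshold are marked as rejected.
import numpy as np

def predict_with_rejection(logits, threshold=0.8):
    """logits: (N, C) array of network outputs. Returns (predictions, confidences);
    a prediction of -1 marks a rejected (uncertain) sample."""
    z = logits - logits.max(axis=1, keepdims=True)       # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    preds[conf < threshold] = -1                          # defer low-confidence samples
    return preds, conf

logits = np.array([[2.0, 0.1, -1.0], [0.3, 0.2, 0.25]])
print(predict_with_rejection(logits))
```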
Camera based Head-Up-Display Brightness Control Modern cars use head-up displays (HUD) to make information available to the driver without the necessity to take the eyes off the road. Current systems use a simple photo-diode illumination sensor to measure the overall intensity of the environmental light and adjust the overall brightness of the HUD accordingly. Alternatively, a camera can be used to measure the luminance of the scene in front of the car. Due to the higher resolution of the sensor, it's possible to adjust the brightness of the HUD much more precisely and robustly. For example, when driving at night a single photodiode is not able to differentiate between the headlights of an oncoming car and an actual change in the brightness of the environment. Also, reflective surfaces can deteriorate the contrast between the HUD and the background. The goal of this project is to develop a system to adjust the brightness of a HUD given a camera image of the scene in front of the car. The challenges are: • Develop a robust method to measure illuminance and reflectiveness of a scene from an image. • Account for the different points of view of the camera and the driver, leveraging 3D information provided by a stereo camera system or monocular scene reconstruction. The project is divided into two sub-problems. The first one tackles the challenge of measuring the illumination of a scene with a camera. Given a camera image and exposure time, how can we measure the illuminance of every pixel? The second one focuses on the different viewpoints of the scene between camera and driver. Due to the fact that the position of the driver's head depends on the driver and also changes continuously, another camera is used to track the head pose in real-time. Given the transformation between camera and head and the depth of the scene, one can compensate for the different viewpoints. To estimate the depth of the scene different approaches are possible and shall be evaluated. Part I: Measuring the illumination of the scene: • Literature review of methods to measure illuminance and reflectiveness. • Implement and validate approaches on example images and in the lab. • Investigate the limits of accuracy and resolution of the approaches. Part II: View-point compensation: • Investigate the quality of the view point transformation between camera and head pose given different types of depth information o Stereo. o Structure-from-Motion. o Piecewise planar assumptions. o Object detections. | DA/MA/BA | Torben Teepe | |
Hand Pose Estimation from Depth Data A key challenge in modern robotics and biomedical engineering is to design artificial hands able to reproduce human abilities [2]. The difficulty of handling human-like manipulation problems is mainly due to the high number of Degrees of Freedom (DOFs) concentrated in a small volume. As a consequence, the control of robotic grasp and manipulation is an interesting challenge for engineers and scientists in the fields of robotics and machine learning. A possible solution consists in learning manipulation tasks from human observation. The first step in applying learning methods to control anthropomorphic hands consists in tracking the human palm and the fingertips (contact points) from a camera sensor. The objective of this Practical Work is to develop an algorithm able to robustly estimate the poses of palm and fingertips using only depth data. To this end, a deep learning technique [1] will be used. | DA/MA/BA | |
Segmentation of blood stem cells in bright-field images During the analysis of single-cell time-lapse experiments of proliferating and differentiating blood stem cells, we often have to deal with two types of clumped cells. First, cells that are dividing are inherently clumped for certain time points. Second, due to the rising number of cells over time, one can observe more and more groups of clumped cells, especially at later time points. An additional characteristic of our data is that the experiments show cells in all differentiation states with different morphologies. In this project, we would like to develop a method that is able to split clumped cells, but also preserves the shape of all cell types. | DA/MA/BA | |
Segmentation of embryonic stem cells in fluorescent images | DA/MA/BA | ||
Stem cell tracking Develop/improve existing auto-tracking algorithms for improving and automating manual tracking, taking the biologists' needs into account | DA/MA/BA | |
3D Human Pose Estimation from RGB Images | DA/MA/BA | ||
Humanlike Robot Movement | DA/MA/BA | ||
Perception for humanoid robotics This thesis studies the unsupervised monocular depth prediction problem. Most existing unsupervised depth prediction algorithms are developed for outdoor scenarios, while depth prediction in indoor environments has long been ignored. Therefore, this work focuses on filling the gap by first evaluating existing architectures in indoor environments and then improving the current architecture design by solving the issues observed in the first step. After an extensive study of and experiments with current methods, we found that existing architectures cannot learn depth freely because of the poor performance of the pose estimation network, a side network usually trained together with the depth prediction network. Unlike typical outdoor training sequences, such as the Kitti dataset, indoor environments consist of more arbitrary camera movement and short-baseline consecutive images, which leads to poor training of the pose network. To address this issue, we propose two methods: First, we design a reconstruction loss function to provide an extra constraint on the estimated pose and sharpen the predicted disparity map. Second, a novel neural network architecture is proposed to predict accurate 6-DOF poses. Our pose network combines the advantages of FlowNet2 and PoseNet, which makes it able to learn to predict correct poses from training images with relatively short baselines and arbitrary rotations. Apart from the above two methods, we use ensemble and flipping training techniques along with a median filter on the output disparity map, resulting in performance that surpasses the current state-of-the-art unsupervised learning approaches. | Master Thesis | |
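For illustration, a simplified version of the kind of photometric reconstruction loss referred to above, assuming the view synthesis (warping a source frame via predicted depth and pose) has already been performed elsewhere. The SSIM term is computed globally here instead of over local windows, and the weighting is a placeholder.

```python
# Hedged sketch of a photometric reconstruction loss used in self-supervised depth:
# compare the target frame with a view synthesized from a source frame.
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified global SSIM (real implementations use local windows)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))

def photometric_loss(target, synthesized, alpha=0.85):
    l1 = np.abs(target - synthesized).mean()
    return alpha * (1 - ssim_global(target, synthesized)) / 2 + (1 - alpha) * l1

target = np.random.rand(64, 64)
synthesized = target + 0.05 * np.random.randn(64, 64)   # stand-in for a warped source frame
print(photometric_loss(target, synthesized))
```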
Hybrid Tracking for Intra-Operative Imaging: Fusion of optical, magnetic, robotic and radioactive tracking techniques | Master Thesis | Felix Achilles | |
Federated Learning with Non-iid data In recent years, interest in the field of federated learning has been increasing. This interest peaks in medical machine learning because of the nature of the data and the privacy requirements in this field. Despite the high effort invested in commercial federated learning, such as mobile phone keyboard prediction, there is still much to do in the medical field. Medical data has a non-iid nature, which means the data at different clients (nodes) come from different distributions. Most current federated learning methods focus on the iid setting. Our goal in this thesis is to improve a neural network's generalization in a non-iid setting. Our solution for this problem is to use meta-information and statistical information to guide the global model in the network. We take this investigation further by employing meta-learning methods for personalization of the clients' models in our federated setting. | Master Thesis | Yousef Yeganeh | |
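For context, a minimal FedAvg-style aggregation step, which most iid-focused federated methods build on; the parameter names and client sizes are hypothetical, and handling non-iid clients (the actual goal of this thesis) would modify or replace this simple weighted average.

```python
# Minimal FedAvg-style server aggregation sketch (illustrative, not the thesis method):
# average client model parameters weighted by the size of each client's local dataset.
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: list of dicts {param_name: np.ndarray}; client_sizes: samples per client."""
    total = float(sum(client_sizes))
    averaged = {}
    for name in client_weights[0]:
        averaged[name] = sum(w[name] * (n / total) for w, n in zip(client_weights, client_sizes))
    return averaged

clients = [{"w": np.ones((2, 2)) * i, "b": np.array([i, i])} for i in (1.0, 2.0, 3.0)]
print(federated_average(clients, client_sizes=[100, 50, 50]))
```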
Parallax-free X-ray stitching using depth-based pose estimation | Bachelor Thesis | Christian Grimm | |
Disentangled Representation Learning of Medical Brain Images using Flow-based Models Generative models like GANs and VAEs don't learn the data distribution directly, as the distribution tends to be intractable. Instead, these models approximate a lower bound on the log-likelihood of the data (VAEs) or use an adversarial network to train the generator (GANs). Invertible flow-based models instead directly optimize the log-likelihood of the data using normalizing flows. In this project, we study the use of flow-based models in learning meaningful, disentangled representations of medical brain images in both supervised and unsupervised settings. Flow-based models also learn meaningful latent representations, which can be used for downstream tasks like meaningful image manipulation. We expect that disentangled representations would allow for control over the generative factors of the images, which could be used to generate highly controlled synthetic images for training other models that require large amounts of labeled or unlabeled data. | Master Thesis | Aadhithya Sankar | |
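For reference, the exact log-likelihood that flow-based models optimize follows from the change-of-variables formula; with an invertible mapping f composed of K layers f_1, ..., f_K and intermediate activations h_k:

```latex
\log p_X(x) \;=\; \log p_Z\bigl(f(x)\bigr) \;+\; \sum_{k=1}^{K} \log \left|\det \frac{\partial f_k}{\partial h_{k-1}}\right|,
\qquad h_0 = x,\quad h_K = f(x) = z .
```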
Independent Component Analysis of Positron Emission Tomography Data Positron emission tomography (PET) is a medical technique that generates functional images of the human body, including the brain. Typically, covariance in brain PET data is investigated using a so-called seed-based correlation approach, i.e., values from one brain region are correlated with values from other brain regions or throughout the brain. Thus, this method is limited by the need for a hypothesis about the regions to be correlated. Recently, we successfully applied a hypothesis-free analytical technique known as independent component analysis (ICA) to identify covariance patterns in PET data (Yakushev et al., submitted). ICA is an advanced computational method for separating a multivariate signal into additive components (Hyvärinen and Oja, 2000). In the above study, we applied software called Group ICA Toolbox (GIFT, http://mialab.mrn.org/software/gift/index.html), which was actually developed for another imaging technique. However, PET provides a different kind of signal and has specific sources of variance. Thus, the ultimate aim of this work is to evaluate the performance of ICA on PET data. The following research questions should be addressed in a series of simulation experiments: the optimal number of components and the reproducibility of the algorithm. Ideally, the algorithm should be implemented as a user-friendly image analysis tool. | Master Thesis | |
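A hedged sketch of how spatial ICA could be run on a subjects-by-voxels PET data matrix with scikit-learn's FastICA; the toy data, the number of components, and the interpretation of the outputs are illustrative assumptions and do not reproduce the GIFT pipeline used in the cited study.

```python
# Illustrative spatial ICA on a toy (subjects x voxels) PET-like data matrix.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
data = rng.normal(size=(40, 5000))        # 40 subjects, 5000 brain voxels (synthetic placeholder)

ica = FastICA(n_components=10, random_state=0, max_iter=500)
loadings = ica.fit_transform(data)        # (subjects x components): subject-wise expression of each pattern
spatial_maps = ica.mixing_.T              # (components x voxels): voxel-wise covariance patterns
print(loadings.shape, spatial_maps.shape)
```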
Cancer Metastasis Detection in Lymph Nodes | IDP | Amil George & Bharti Munjal | |
Crowdsourcing in the Medical Context | IDP | Christoph Baur | |
Vectorial Diffraction Aware Image Formation Models in Lightfield (Plenoptic) Microscopy Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. | DA/MA/BA | |
MR Image Synthesis | Master Thesis | ||
Implementation and evaluation of splitting methods for reconstruction of X-ray Computed Tomography Reconstruction of X-ray computed tomography (CT) data enables insight into the human body without a surgical procedure. The basic concept comes down to sending X-rays through the human body and measuring the attenuated X-rays on the other side of the patient. Such methods are called projective imaging methods. In order to reconstruct a 3D volume of the human body, providing a map of the physical properties which led to the corresponding projective measurements, there exist several algorithms, for example direct methods such as filtered backprojection, or iterative methods. Iterative methods can be classified into statistical approaches using the maximum likelihood and series expansion methods using linear equation systems. Consequently, there exist multiple algorithms for computing tomographic reconstructions. | Bachelor Thesis | Johann Frei | |
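As a toy example of the splitting idea, the sketch below runs a forward-backward (proximal gradient, ISTA-type) iteration for a sparsity-regularized least-squares problem; the dense matrix A merely stands in for the CT projection operator, which in practice would be applied matrix-free via forward and back projection, and the regularizer and step size are placeholders.

```python
# Toy forward-backward splitting (ISTA) for min_x 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(120, 64))                  # hypothetical projection matrix
x_true = np.zeros(64); x_true[::8] = 1.0        # sparse toy "volume"
b = A @ x_true + 0.01 * rng.normal(size=120)    # noisy measurements

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from the Lipschitz constant of the gradient
x = np.zeros(64)
for _ in range(300):
    grad = A.T @ (A @ x - b)                    # gradient step on the data-fidelity term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # proximal (soft-threshold) step
print(np.linalg.norm(x - x_true))
```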
In vivo micro-CT imaging for the non-invasive assessment of tumor mouse models in preclinical anticancer drug research In this master’s thesis our focus is set on the implementation of advanced techniques in micro-CT imaging of lung, liver, and kidney cancer development and therapy. Micro-CT is a useful imaging modality for the assessment of anatomical and/or functional features of tumor growth in the whole animal and in a non-invasive manner. In specific, methods shall be implemented for the reduction of motion artifacts arising during imaging of a breathing living animal (respiratory-gated micro-CT imaging), and for the derivation of functional perfusion parameters during anti-angiogenic treatment of tumors (dynamic contrast-enhanced (DCE) micro-CT imaging). | Master Thesis | Kristina Erhard | |
Incremental Learning For Robotic Grasping | Master Thesis | Pengyuan Wang | |
indoorSLAM: Robust RGB-D SLAM based on Plane-Line-Point features for indoor scenes | Master Thesis | ||
3D inpainting with Semantic Scene Completion Existing semantic scene completion methods aim to complete missing geometry and predict semantic meaning in 3D reconstructions, but do not predict the texture of the completed regions. This project aims to fill this gap by introducing a method that also predicts the texture of the completed regions. | Master Thesis | Kuanhsun Wu | |
Rethinking Deep Learning based Monocular Depth Prediction We are looking for a motivated student who wants to work on the topic of monocular depth prediction using deep learning. Predicting depth from a single color image is a challenging and under-constrained task, and as such, active research is happening that incorporates CNNs. Recent works typically do not enforce an understanding of the objects in the scene. In contrast, our goal is to rethink depth prediction and create an object-aware model, which might lead to more accurate depth. | Master Thesis | Evin Pinar Ornek | |
Interactive Segmentation for Improving Infection Quantification in CT scans Longitudinal changes of pathologies in CT images are an important indicator for analyzing patients with COVID-19. To accurately analyze changes of pathologies, consistent segmentation across multiple time-points is required. Only a few studies have been reported for COVID infection segmentation, and there is no study on longitudinal data [1-3]. Moreover, it is very challenging to segment multiple pathologies due to the inter-class similarity and intra-class variability. To address these issues, in this project, we will explore a novel method to fully exploit user guidance. We consider two types of user-guided interactive segmentation. 1) The user provides a pathology mask (segmentation mask, line, circle, scribbles, etc.) on the reference scan. In this scenario, the longitudinal segmentation model will be designed to use the reference scan's mask as additional input for the segmentation, as in [4-5]. By utilizing the information on the target pathology from the user's input, the network focuses on the target class, which makes the problem easier. 2) The user indicates erroneous areas. Then, the network will utilize this information to refine the segmentation [6-8]. We will explore this interactive segmentation on the longitudinal COVID-19 segmentation problem. | Master Thesis | |
Deep Intrinsic Image Decomposition In this project we aim at decomposing a simple photograph into layers of material properties like reflectance, albedo, materials etc. This a very challenging but important topic in bot Computer Vision and Computer Graphics as it improves tasks like scene understanding, augmented reality and object recognition. We want to tackle this problem using the human annotated OpenSurfaces? dataset and our recent advances in deep learning especially fully connected residual networks. | Master Thesis | Udo Dehm | |
Invariant Landmark Detection for highly accurate positioning An essential task to enable highly autonomous driving is the self-localization and ego-motion estimation of the car. Together, they enable accurate absolute positioning and reasoning about the road ahead for e.g. path planning. If a single camera is used as sensor to measure the current vehicle location, positioning is based on visual landmarks and is related to the problem of visual odometry. Current navigation systems rely solely on GPS and vehicle odometry. Newer systems use object detections in the image, like traffic signs and road markings to triangulate the vehicle position within a map. To do so, visual landmarks are detected in the camera image and their position relative to the vehicle is computed. Given the landmark positions, the most likely position of the vehicle with respect to the landmarks in the map can be deduced. Current approaches use detected objects as landmarks (e.g. traffic signs/lights, poles, reflectors, lane markings), but often there are not enough of these objects to localize accurately. In the scope of this project, a method to detect more generic landmarks should be developed. This method, that will be focusing on deep learning, should extract features that are more invariant to different invariances (e.g. illumination) and also provide a good matchability. Challenges: • Create a dataset from different sources: already existing datasets, public webcam streams and synthetic datasets. • Design and implement a method/network to extract robust generic landmarks/features in different environments (highway, city, country roads) and match them to previously extracted landmarks. • Leverage deep learning in order to achieve a high invariance to different environment conditions. Tasks: • Literature review of methods to extract robust landmarks with focus on: o Robustness to changes in appearance, viewpoint. o Uniqueness to match them corretly to already extracted landmarks. • Implementation and evaluation of a deep neural network that is capable of extracting invariant features, that offer the possibility for robust matching. • Application of the feature in a state-of-the-art SLAM algorithm. Literature : [1] LIFT: https://arxiv.org/abs/1603.09114 [2] TILDE: https://infoscience.epfl.ch/record/206786/files/top.pdf [3] Playing for Data: https://download.visinf.tu-darmstadt.de/data/from_games/data/eccv-2016-richter-playing_for_data.pdf [4] ORB_SLAM: http://webdiis.unizar.es/~raulmur/orbslam/ | DA/MA/BA | Josefine Gaßner | |
Inverse Problems in PDE-driven Processes Using Deep Learning We are looking for extremely motivated student to work on the topic "Inverse Problems in PDE-driven Processes Using Deep Learning". The scope of this project is the intersection of numerical methods and machine learning. The objective is to develop theoretical framework and efficient algorithms that can be applied to broad class of PDE-driven systems. However, we can tailor the focus and scope of the project to your preferences. | IDP | ||
Evaluation of iterative solving methods for the statistical reconstruction of Light Field Microscopy data Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. Once the forward light transfer is determined based on the optical system response, the reconstruction process is an inverse problem. In fluorescence microscopy, besides the read-out noise, Poisson noise is present due to the low photon count. Hence, a Poisson-Gaussian mixture model would be an appropriate approach for likelihood-based statistical reconstruction. Various iteration schemes may result from different likelihoods coupled with regularization. | DA/MA/BA | |
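One classical iteration scheme of the kind that could be compared in such a project is the pure-Poisson maximum-likelihood (MLEM / Richardson-Lucy) update; with system matrix A = (a_ij), measurements b, and volume estimate x it reads (without regularization):

```latex
x_j^{(k+1)} \;=\; \frac{x_j^{(k)}}{\sum_i a_{ij}} \;\sum_i a_{ij}\,\frac{b_i}{\bigl(A x^{(k)}\bigr)_i}
```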
Fast image fusion on foveated images The goal of this thesis is to combine images from an intensified CCD (EMCCD) camera and a long-wave infrared (LWIR) camera. The fused image should contain all salient features of the individual modalities. As the fused image is intended as a replacement for legacy analog night vision, the fusion algorithm should provide at least 25 frames per second. To achieve high performance, the images are to be foveated. A foveated image is an image which has been compressed by taking advantage of the perceptual properties of the human visual system, namely the decreasing resolution of the retina with increasing eccentricity. The thesis should elaborate how image fusion and foveation methods can be combined efficiently while providing a good fused image. | Master Thesis | Janosch Peters | |
Automatic Early Detection of Keratoconus Keratoconus (KCN) is a bilateral, non-inflammatory, degenerative disorder of the cornea with an incidence of approximately 1 per 2,000 in the general population [1,2]. It is characterized by progressive thinning and a cone-shaped bulge of the cornea (fig. 1), leading to substantial distortion of vision [2]. The early diagnosis of keratoconus is of great importance for patients seeking eye surgery (i.e. LASIK), as it can prevent the progression of the pathology after surgery [3-4]. Rabinowitz [5] shows that his preliminary research using wavefront analysis together with corneal topography demonstrates good classification between early KCN subtypes and normals. Further, Jhanji et al. [6] concluded that swept-source OCT may provide a reliable alternative for the parameters of corneal topography (fig. 2). On the other hand, Pérez et al. [3] show that all of these devices, including videokeratography, Orbscan, and Pentacam, together with the indices can lead to early KCN detection, however with an increase in false positive detections. Therefore, developing a highly specific diagnostic tool for KCN detection with few false positives is highly desirable. In this IDP/MA project, the objective is to analyze data from approximately 200 patients treated at the ophthalmology department of Klinikum Rechts der Isar / TUM. Using retrospective corneal topographic data (fig. 2) collected during follow-ups, the aim is to build an early predictive model for KCN detection. | DA/MA/BA | |
GPU Ultrasound Simulation and Volume Reconstruction Medical ultrasound imaging has been in clinical use for decades; however, the acquisition and interpretation of ultrasound images still require experience. For this reason, ultrasound simulation for training purposes is gaining importance. Additionally, simulated images can be used for the multimodal registration of Ultrasound and Computed Tomography (CT) images. The simulation process is a computationally demanding task. Thus, in this thesis a simulation method, accelerated by modern graphics hardware (GPU), is introduced. The accelerated simulation utilizes a ray-based simulation model in order to provide real-time high-throughput image simulation for training and registration purposes. Wave-based simulation methods are computationally even more demanding and have been considered unsuitable for real-time applications. In the scope of this thesis, wave-based models have been investigated, including the Digital Waveguide Mesh and the Finite-Difference Time-Domain method for solving Westervelt's equation. Initial results demonstrate the feasibility of performing near real-time wave-based ultrasound simulation using graphics hardware. Furthermore, a new algorithm is introduced for volumetric reconstruction of freehand (3D) ultrasound. The proposed algorithm intelligently divides the work between CPU and GPU for optimal performance. The results demonstrate superior performance and equivalent reconstruction quality compared to existing state of the art methods. GPU accelerated ultrasound simulation and freehand volume reconstruction are key components for fast 3D-3D (dense deformable) multimodal registration of Ultrasound and CT images, which is the subject of current ongoing work. | Master Thesis | Athanasios Karamalis | |
Keypoint Learning | Master Thesis | ||
Kyphoplasty balloon simulation Kyphoplasty, a percutaneous, image-guided minimally invasive surgery, is a recently introduced treatment of painful vertebral fractures which is being performed extensively worldwide. The objective of kyphoplasty is to inject polymethylmethacrylate (PMMA) bone cement under radiological image guidance into the collapsed vertebral body to stabilize it. Before injecting the cement, an inflatable balloon is placed in the vertebral body and subsequently inflated in order to restore the vertebral height and correct the kyphotic deformity caused by the compression fracture. After the balloon is deflated and removed from the vertebral body, the created cavity is filled with PMMA bone cement. The goal of the project is the implementation and validation of a kyphoplasty balloon simulation. | DA/MA/BA | ||
Left Atrium Segmentation in 3D Ultrasound Using Volumetric Convolutional Neural Networks Segmentation of the left atrium and deriving its size can help to predict and detect various cardiovascular conditions. Automation of this process in three-dimensional Ultrasound image data is desirable, since manual delineations are time-consuming, challenging and operator-dependent. Convolutional neural networks have made improvements in computer vision and in medical image processing. Fully convolutional networks have successfully been applied to segmentation tasks and were extended to work on volumetric data. This work examines the performance of a combined neural network architecture of existing models on left atrial segmentation. The loss function merges the objectives of volumetric segmentation, incorporation of a shape prior and the unsupervised adaptation to different Ultrasound imaging devices. | Master Thesis | Markus Degel | |
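For illustration, a minimal soft-Dice loss of the kind typically used as the volumetric segmentation term in such a combined objective; the shape-prior and device-adaptation terms of the actual work are not shown, and the example arrays are synthetic.

```python
# Minimal soft-Dice loss sketch for volumetric segmentation (illustrative only).
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: predicted foreground probabilities, target: binary mask; both shaped (D, H, W)."""
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.random.rand(16, 32, 32)
target = (np.random.rand(16, 32, 32) > 0.7).astype(float)
print(soft_dice_loss(pred, target))
```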
Deep Generative Model for Longitudinal Analysis Longitudinal analysis of a disease is an important issue for understanding its progression as well as for designing prognosis and early diagnostic tools. From longitudinal sample series, where data is collected at multiple time points, both the spatial structural abnormalities and the longitudinal variations are captured. Therefore, the temporal dynamics of a disease are more informative than static observations of the symptoms, in particular for neuro-degenerative diseases whose progression spans years with subtle early changes. In this project, we will develop a deep generative method to model the lesion progression over time. | Master Thesis | Umut Küçükaslan | |
Laparoscopic Freehand SPECT | Master Thesis | Ayah Haidar | |
Instrument Tracking for Safety and Surgical View Optimization in Laparoscopic Surgery Laparoscopic (minimally invasive, key hole) surgery involves usage of a laparoscope (camera), and laparoscopic instruments (graspers, scissors, monopolar and bipolar devices). First, the abdomen is insufflated with carbon dioxide to create a space between the abdominal wall and organs. The laparoscope and laparoscopic instruments are then inserted through small 5 or 10 mm incisions in the abdomen. The laparoscope projects the image within the abdomen onto a screen. The surgeon can therefore visualise the inside of the abdomen and the operating instruments to carry out the surgical procedure. At present, there is increasing interest in surgical procedures using a robot-assisted device. The advantages of using such a device include a steady, tremor-free image, the elimination of small inaccurate movements and decreased energy expenditure by the assistant. A number of studies have evaluated the advantages of robotic camera devices compared with manually controlled cameras or different types of devices. The possibility of developing a laparoscope with a tracking system that will automatically identify and follow the operating surgeon's instruments does provide significant benefit without requiring bulky robotic systems. First, removing the need to always have an assistant will reduce cost. With an instrument tracking system, there is no need for additional pedals and a headband to move the camera, which can be confusing, uncomfortable, unsafe and may actually increase the length of surgery. In addition, increased safety of the procedure will be achieved by providing a steadier image and by incorporating safety mechanisms. The current project aims at developing a laparoscopic camera system mounted on the operating bed. The proposed system will track the primary surgeon's instruments without the need for any constant input. The aim is thereby to recognise key tools with priority (sharp tools first), and track their movement in situ to move a camera accordingly. With safety features being one priority, the camera will by default keep the instrument with higher priority (i.e. scissors, monopolar and bipolar devices) in view. Whenever e.g. the monopolar or bipolar device is out of view, this will in the future allow disabling the energy source of those instruments, which will greatly reduce one of the most common causes of injuries during laparoscopic surgery. | DA/MA/BA | |
Real-time large-scale SLAM from RGB-D data You will extend an existing RGB-D reconstruction system to support large-scale scenes. In a first step you will evaluate and implement state-of-the-art algorithms for tracking and reconstruction from RGB-D data. Secondly, you will also evaluate and implement algorithms for texturing the obtained reconstruction from camera images. The real-time critical components will be implemented on a GPU. | Master Thesis | ||
Comparison of methods to produce a two-layered LDI representation from a single RGB image One of the major drawbacks of the visualizations used in computer vision is the lack of information about the portion of the scene that has been occluded by foreground objects. Depth maps store the results of a mapping from each pixel to its distance from the camera. Since an RGB image paired with a depth map stores more information than the RGB image alone, the pair is considered 2.5D. However, a simple depth map fails to alleviate the problem, as it stores values only for the visible part of an image. Unlike human beings, who are able to perceive information even if it is hidden by confidently extrapolating from what is visible, computer vision models are limited to what is immediately visible. This has been addressed with other forms of representation of 2D images, one of which is the layered depth image (LDI). However, getting good LDI predictions from a single RGB image is challenging; in this work we compare two methods and further experiment with them to see if they can be improved. | Bachelor Thesis | Richa Mishra | |
Learning to learn: Which data we have to annotate first in medical applications? Although semi-supervised and unsupervised learning have been developed recently, their performance is still bounded by the performance of fully-supervised learning. However, the cost of annotation is extremely high in medical applications: it requires medical specialists (radiologists or pathologists) to annotate the data. For those reasons, it is almost impossible to annotate all available data, and sometimes only a subset of a dataset can be selected for annotation due to the limited budget. Active learning is the research field which tries to deal with this problem [1-5]. Previous studies have been conducted in mainly three approaches: an uncertainty-based approach, a diversity-based approach, and expected model change [3]. These studies have verified that active learning has the potential to reduce annotation cost. In this project, we aim to propose a novel active learning method which learns a simple uncertainty estimator to select more informative data for training the current deep neural networks in medical applications. | Master Thesis | Farrukh Mushtaq | |
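As a point of reference, a fixed uncertainty-based query heuristic (predictive entropy ranking) of the kind such a learned selector would be compared against; the pool probabilities and the query size below are synthetic placeholders.

```python
# Illustrative uncertainty-based active learning query step:
# rank unlabeled samples by predictive entropy and send the top-k for annotation.
import numpy as np

def select_for_annotation(probs, k=5):
    """probs: (N, C) predicted class probabilities for the unlabeled pool."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:k]            # indices of the k most uncertain samples

pool_probs = np.random.dirichlet(alpha=np.ones(4), size=100)   # synthetic unlabeled pool
print(select_for_annotation(pool_probs, k=5))
```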
Learning-based Surgical Workflow Detection from Intra-Operative Signals The goal of this project will be to apply methods from Machine Learning (ML) to medical data sets in order to deduce the current workflow phase. These data sets were recorded by our medical partners during actual laparoscopic cholecystectomies and will contain binary values (like the usage vector of all possible surgical instruments) as well as analog measurements (e.g. intra-abdominal pressure). By learning from labeled data, methods like Random Forests or Hidden Markov Models should be able to detect which of the known phases is the most probable, given the data at hand. | Master Thesis | Ergün Kayis | |
Continual and incremental learning with less forgetting strategy Recently, deep learning has had great success in various applications such as image recognition, object detection, and medical applications. However, in real-world deployment, the amount of training data (and sometimes the number of tasks) continues to grow, or the data cannot be provided all at once. In other words, a model needs to be trained over time as the data collection in a hospital (or multiple hospitals) grows. A new type of lesion could also be defined by medical experts. Then, the pre-trained network needs to be further trained to diagnose these new types of lesions with the increased data. ‘Class-incremental learning’ is a research area that aims at training a model on new tasks while retaining the knowledge acquired in past tasks. It is challenging because DNNs easily forget previous tasks when learning new ones (i.e. catastrophic forgetting). In real-world scenarios, it is difficult to store all the training data that was used to train the DNN previously, due to the privacy issues of medical data. In this project, we will develop a solution to this problem in medical applications by investigating an effective and novel learning method. | Master Thesis | Afshar Kakaei | |
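One widely used ingredient against catastrophic forgetting, in the spirit of Learning without Forgetting, is a distillation term that keeps the new model close to the frozen old model's soft predictions on old classes. A minimal sketch follows; the temperature and the example logits are placeholders, not the method to be developed here.

```python
# Illustrative knowledge-distillation term used to reduce forgetting:
# penalize the new model for deviating from the old model's temperature-softened outputs.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(old_logits, new_logits, T=2.0):
    p_old = softmax(old_logits, T)             # soft targets from the frozen old model
    p_new = softmax(new_logits, T)
    return -(p_old * np.log(p_new + 1e-12)).sum(axis=1).mean()

old = np.random.randn(8, 5)
new = old + 0.1 * np.random.randn(8, 5)
print(distillation_loss(old, new))
```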
Depth estimation in Light Field Microscopy Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be arranged into multiple views and used to retrieve the depth map of the imaged scene. | DA/MA/BA | |
A light field renderer for Light Field Microscopy data visualization Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to retrieve angular perspectives of the imaged sample. | DA/MA/BA | |
Investigation and Implementation of Lightfield Forward Models Light field microscopy is a scanless technique for high-speed 3D imaging of fluorescent specimens. A conventional microscope can be turned into a light field microscope by placing a microlens array in front of the camera, allowing for a full spatio-angular capture of the light field in a single snapshot. The recorded information can be used to volumetrically reconstruct the imaged sample. | Master Thesis | Josue Page | |
Real-Time Volumetric Fusion for iOCT Microscope-integrated Optical Coherence Tomography (iOCT) is able to provide live cross-sectional images during an ophthalmic intervention. Current OCT engines have a limited acquisition rate, allowing one to image either cross-sectional 2D images at high frame rate, low-resolution and low field-of-view volumes at medium update rate, or high-resolution volumes at low update rate. In order to provide full field-of-view visualization during a surgical intervention, a high-resolution scan can be acquired at the start of the intervention, which is then tracked with an optical retina tracker to compensate for movement. The goal of this thesis is to devise a method to dynamically update this high-resolution volume with the live data acquired during the ongoing intervention, in order to provide a responsive visualization of the surgeon's working environment. Integration of the live data into the volume requires compensation for deformation of the tissue as well as incorporation of motion data from the optical tracker, to accurately find the correct region to update in the volume. | Bachelor Thesis | Michael Sommersperger | |
Understanding and optimization of a low energy X-ray generator for intra-operative radiation therapy The research activity is focused on the combination of minimally invasive therapy techniques with diagnostic imaging and navigation modalities, e.g. the application of intra-operative radiation therapy, MRI-guided focused ultrasound or intra-operative SPECT with MRI guidance. An initial development goal is to use advanced MRI imaging to find and localize small pathologies and subsequently perform minimally invasive therapy with a small and lightweight X-ray generator and continuous intra-operative imaging for visualization and navigation. An in-vitro setup will be created to apply the low-energy X-rays to real cancer cells and study their biological effectiveness. | IDP | /twiki/pub/Main/AmalBenzinaStudentProjects/xraygenerator.jpg | |
Computational Modeling of Respiratory Motion Based on 4D CT Respiratory lung motion has a serious impact on the quality of medical imaging, treatment planning and intervention, and radiotherapy. This motion not only reduces imaging quality, especially for positron emission tomography (PET), but also inhibits the determination of the exact position and shape of the target during radiotherapy. Based on prior knowledge of average tissue properties, patient-specific imaging (4D CT) and a surrogate signal, a computational motion model can be created. This enables researchers and developers to simulate and generate information about a respiratory phase not covered by the imaging procedure. Therefore, the internal deformation of the lung and its containing cancerous tissue can be computed and taken into account during further imaging acquisitions or radiotherapy. | Master Thesis | Bernhard Fuerst | |
Diverse Anomaly Detection Projects @deepc | Master Thesis | ||
Multiple sclerosis lesion segmentation from Longitudinal brain MRI Longitudinal medical data is defined as imaging data obtained at more than one time point, where subjects are scanned repeatedly over time. Longitudinal medical image analysis is a very important topic because it can address difficulties that remain when only spatial data is utilized. Temporal information could provide very useful cues for accurately and reliably analyzing medical images. To effectively analyze temporal changes, it is required to segment regions of interest accurately and quickly. In a series of images acquired at multiple imaging time points, the available cues for segmentation become richer with the intermediate predictions. In this project, we will investigate a way to fully exploit this rich source of information. | IDP | Moiz Sajid, Stefan Denner | |
MS Lesion Segmentation in multi-channel subtraction images | IDP | Sarthak Gupta | |
Gradient Surgery for Multitask Longitudinal CT Analysis Longitudinal changes of pathology in CT images are an important indicator for analyzing patients with COVID-19. In the clinical setting, clinicians read longitudinal images to obtain various information such as disease progression, the need for ICU admission, and the severity of the disease. This information is important for increasing the survival rate. However, reading longitudinal 3D CT scans takes a long time, which might decrease the clinician's efficiency. In this project, we will explore a method to automatically analyze longitudinal CT scans to support the radiologist's reading. In particular, we will explore a multitask learning method to fully exploit the relations between different tasks while adaptively balancing the gradients from different objective functions (i.e. gradient surgery). | Master Thesis | |
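For illustration, the core projection step of PCGrad-style gradient surgery on two flattened task gradients: when the gradients conflict (negative inner product), the conflicting component is removed before summing. The full algorithm shuffles tasks and projects each gradient against all others, which is omitted here; the example gradients are placeholders.

```python
# Minimal PCGrad-style gradient surgery sketch for two tasks (illustrative only).
import numpy as np

def pcgrad_combine(g1, g2):
    """g1, g2: flattened task gradients. Returns a combined update direction."""
    dot = g1 @ g2
    if dot < 0:                                    # gradients conflict
        g1 = g1 - dot / (g2 @ g2) * g2             # project g1 onto the normal plane of g2
    return g1 + g2

g_task_a = np.array([1.0, 0.0])
g_task_b = np.array([-0.5, 1.0])
print(pcgrad_combine(g_task_a, g_task_b))
```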
Simulation of Muscle Activity for an Augmented Reality Magic Mirror We have previously shown an augmented reality (AR) magic mirror. We create the illusion that a user standing in front of the system can look inside their own body. The video of this system received a lot of attention and has been viewed over 200,000 times on YouTube. We now want to build a system for human anatomy education using augmented reality visualization. We want to use the system to visualize muscle activity. | DA/MA/BA | |
Tracking using Autoencoders and Manifolds In this project we want to explore the possibilities of using autoencoders to perform object tracking in video sequences. The object's bounding box is given in the first frame and needs to be tracked throughout the sequence. We would like to use autoencoders to encode the appearance of the object and to predict its future location and appearance. | DA/MA/BA | ||
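A minimal sketch of what such an appearance autoencoder could look like (assuming PyTorch and 64x64 RGB object patches; the architecture and all names are illustrative, not a prescribed design):

```python
import torch
import torch.nn as nn

class AppearanceAutoencoder(nn.Module):
    """Encodes an object patch into a latent appearance code and reconstructs it."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, patch):
        z = self.encoder(patch)           # appearance code of the object patch
        return self.decoder(z), z

model = AppearanceAutoencoder()
patch = torch.rand(1, 3, 64, 64)          # candidate patch from the next frame
reconstruction, code = model(patch)
loss = nn.functional.mse_loss(reconstruction, patch)
```

At tracking time, one option would be to select the candidate patch whose reconstruction error, or whose latent distance to the template code, is smallest.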
Marker-based inside-out tracking for medical applications using a single optical camera Nowadays, tracking and navigation for small imaging systems are performed mainly by devices based on infrared cameras or electromagnetic fields. These systems impose some disadvantages for use with freehand devices such as gamma cameras or ultrasound probes: a separate system for "outside-in" tracking is needed, which causes the main issue of a required line-of-sight between the tracking system and the tool to be tracked in the surgical environment. To solve this problem, the idea is to attach a small add-on system to the devices being tracked. The add-on system consists of an optical camera that tracks several markers attached to the patient and calculates the inverse trajectory, i.e. the movement of the device. The goal of this project is to develop tracking software implementing this "inside-out" tracking technique, together with the required data set, to achieve a more accurate tracking and image fusion process. An algorithm for multi-marker tracking and calculation of the "best pose" will be implemented, and the problems of illumination, occlusion, and stability will be addressed. Finally, the accuracy will be evaluated and compared to other tracking modalities, especially optical and electromagnetic tracking. | Master Thesis | Shih Chen-Hsuan | |
Class-Level Object Detection and Pose Estimation from a Single RGB Image Only 2D object detection has seen great advancements over the last years. For instance, detectors like YOLO or SSD are capable of performing accurate localization and classification on a large number of classes. Unfortunately, this does not hold true for current pose estimation techniques, as they have trouble generalizing to a variety of object categories. Most pose estimation datasets comprise only a very small number of different objects to accommodate this shortcoming. Nevertheless, this is a severe problem for many real-world applications such as robotic manipulation or consumer-grade augmented reality, since the method would otherwise be strongly limited to this handful of objects. Therefore, we would like to propose a novel pose estimation approach for handling multiple object classes from a single RGB image only. To this end, we would like to extend a very common 2D detector, i.e. Mask R-CNN [1], to further incorporate 6D pose estimation. Eventually, the overall architecture might also involve fully regressing the 3D shapes of the detected objects. | Master Thesis | Edward Cornelius Krubasik | |
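As a hedged illustration only (not Mask R-CNN's actual API; the feature size, layer widths and the quaternion parameterization are assumptions), a per-RoI pose head could regress a rotation and translation alongside the existing box and mask heads:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseHead(nn.Module):
    """Toy per-RoI head regressing a per-class quaternion and translation."""
    def __init__(self, in_features=1024, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(in_features, 512)
        self.rot = nn.Linear(512, num_classes * 4)    # quaternion per class
        self.trans = nn.Linear(512, num_classes * 3)  # translation per class

    def forward(self, roi_features):
        x = F.relu(self.fc(roi_features))
        quat = self.rot(x).view(x.shape[0], -1, 4)
        quat = F.normalize(quat, dim=2)               # enforce unit quaternions
        trans = self.trans(x).view(x.shape[0], -1, 3)
        return quat, trans

head = PoseHead()
rois = torch.randn(8, 1024)                           # pooled RoI features
quaternions, translations = head(rois)                # (8, 10, 4), (8, 10, 3)
```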
Radiation Dose Reduction for Trabecular Bone Structure Analysis in Osteoporosis Diagnostics by Using Iterative Reconstruction | Master Thesis | Felix Kopp | |
Medical Augmented Reality with SLAM-based perception | IDP | ||
StainGAN: Stain style transfer for digital histological images Digitized histopathological diagnosis is in increasing demand, but stain color variations due to stain preparation, differences in raw materials, manufacturing techniques of stain vendors, and the use of different scanner manufacturers impose obstacles on the diagnosis process. The problem of stain variation is well defined, and many methods have been proposed to overcome it, each depending on a reference slide image chosen by an expert pathologist. We propose a deep-learning solution to this problem based on Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, eliminating the need for an expert to pick a representative reference image. Our approach showed promising results that we compare qualitatively and quantitatively against state-of-the-art methods. | Master Thesis | M. Tarek Shaban | |
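To illustrate the cycle-consistency idea behind unpaired stain transfer (a sketch only; G_AB, G_BA, D_A, D_B are placeholder generator and discriminator networks, not an existing implementation), the training objective combines adversarial terms with a reconstruction of the original slide after translating back:

```python
import torch
import torch.nn.functional as F

def cycle_losses(real_a, real_b, G_AB, G_BA, D_A, D_B, lambda_cyc=10.0):
    fake_b = G_AB(real_a)                      # stain style A -> B
    fake_a = G_BA(real_b)                      # stain style B -> A
    # adversarial terms: translated images should look real to the discriminators
    adv = F.mse_loss(D_B(fake_b), torch.ones_like(D_B(fake_b))) + \
          F.mse_loss(D_A(fake_a), torch.ones_like(D_A(fake_a)))
    # cycle terms: translating back should recover the original slide
    cyc = F.l1_loss(G_BA(fake_b), real_a) + F.l1_loss(G_AB(fake_a), real_b)
    return adv + lambda_cyc * cyc

# toy check with identity "generators" and a trivial "discriminator"
x = torch.rand(2, 3, 64, 64)
ident = lambda t: t
disc = lambda t: t.mean(dim=(1, 2, 3)).view(-1, 1)
print(cycle_losses(x, x, ident, ident, disc, disc))
```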
Understanding Medical Images to Generate Reliable Medical Reports The reading and interpretation of medical images are usually conducted by specialized medical experts [1]. For example, radiology images are read by radiologists, who write textual reports to describe the findings in each area of the body examined in the imaging study. However, writing medical-imaging reports requires experienced medical experts (e.g. experienced radiologists or pathologists) and is time-consuming [2]. To assist with the administrative burden of writing medical-imaging reports, a few research efforts have in recent years investigated whether it is possible to automatically generate a medical report for a given medical image [3-8]. These methods are usually based on the encoder-decoder architecture that has been widely used for image captioning [9-10]. In this project, a novel automatic medical report generation method is investigated. Due to the high complexity of natural language, it is challenging to generate accurate medical reports with large variation [11]; traditional captioning methods therefore tend to reproduce sentences from the training set verbatim. To address these limitations, this project focuses on the development of a reliable medical report generation method. | Project | Hossain Shaikh Saadi | |
Creating a Diagnostic Model for Assessing the Success of Treatment for Eye Melanoma An eye melanoma, also called ocular melanoma, is a type of cancer occurring in the eye. Patients with an eye melanoma typically remain free of symptoms in early stages. In addition, it is not visible from the outside, which makes early diagnosis difficult. The choroidal melanoma, located in the choroid layer of the eye, is the most common primary malignant intra-ocular tumor in adults. At the same time, intra-ocular cancer is relatively rare: only an estimated 2,500 - 3,000 adults were diagnosed in the United States in 2015. Treatment usually consists of radiotherapy, or surgery if radiotherapy was unsuccessful. For larger tumors, radiation therapy may be associated with some loss of vision. Currently, it is unknown which factors lead to the development of such cancer and which factors determine whether a patient responds to radiotherapy. In this master thesis project, the objective is to analyze data from approximately 200 patients treated at the ophthalmology department of the Ludwig-Maximilians University hospital. Treatment consisted of a single-session, frameless outpatient procedure with the Cyberknife System by Accuray. Using pre-procedural data and information collected during follow-up, the aim is to identify factors predictive of a patient's response to treatment and of the impact on a patient's visual acuity, measured by the so-called Visus. | Project | ||
3D Mesh Analysis and Completion During a scanning process, it is not possible to acquire all parts of the scanned surface. Data are inevitably missing due to the complexity of the scanned part or an imperfect scanning process. This creates holes in the mesh, bad triangles, and numerous other problems. The goal of the project is to use available libraries to a) compute a quality measure and characteristics for a given 3D mesh, b) identify problems/issues and c) fix them. | IDP | ||
Meta-clustering | Master Thesis | Samin Hamidi | |
Meta-Optimization | Project | ||
Glass/Mirror Detection Mirrors and transparent objects have long been an issue for simultaneous localization and mapping (SLAM). Mirrors reflect light rays, which leads to wrong reconstructions, and windows are hard for cameras to observe. This is especially dangerous for robotics, since robots may try to drive into a mirror or through a window. The main goal of this work is to solve this issue by detecting mirrors/windows and reconstructing a correct map. A potential approach is to use an object detection network, such as YOLO, to detect possible mirrors and windows, and then to design a function that correctly reconstructs the reflected region in the map. This work involves knowledge in deep learning and SLAM. | DA/MA/BA | ||
Modeling brain connectivity from multi-modal imaging data | Master Thesis | ||
MR integration of an intra-operative gamma detector and evaluation of its potential for radiation therapy | DA/MA/BA | ||
MR-CT Domain Translation of Spine Data The goal of this project is to synthesise MR images from CT scans of the spine and vice versa in an unpaired setting. | DA/MA/BA | ||
Multi-modal Deformable Registration in the Context of Neurosurgical Brain Shift Registration of medical images is crucial for bringing data obtained from different sources or at different times into a common reference frame. Adding real-time requirements to 3D multi-modal registration allows physicians to analyze the combination of medical data both preoperatively and intraoperatively, providing additional benefits for the patient and helping to achieve a desirable procedure outcome. Different applications usually induce different underlying geometrical transformations, ranging from global rigid movements to local nonlinear deformations such as brain shift in neurosurgery or compression of liver tissue during respiratory motion. Using a deformable registration to correct local tissue distortions allows preoperative data to be transformed into an intraoperatively acquired local reference frame. Preoperative X-ray Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) data are commonly available for diagnostics and procedure planning, but a multi-modal deformable registration with intraoperative Ultrasound (US) is needed for the successful guidance of minimally invasive procedures. Within the scope of this thesis, existing registration techniques are researched and compared, and new cost functions for multi-modal deformable registration of 3D US with preoperative CT and MRI data are proposed. | Master Thesis | Stefan Matl | |
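As one example of a similarity measure commonly considered when building multi-modal registration cost functions (a sketch under the assumption of two already-resampled, overlapping intensity volumes; this is not the cost function proposed in the thesis), mutual information can be estimated from a joint intensity histogram:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two overlapping volumes."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# toy usage: a volume compared against a partially correlated one
a = np.random.rand(64, 64, 64)
b = 0.5 * a + 0.5 * np.random.rand(64, 64, 64)
print(mutual_information(a, b))
```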
Iterative Iodine Detection and Enhancement Algorithm for Dual-Energy CT | Master Thesis | Faegheh Nazari | |
Registration of Multi-View Ultrasound with Magnetic Resonance Images | Master Thesis | ||
Multiple Screen Detection for Eye-Tracking Based Monitor Interaction A modern operating room usually offers multiple monitors to present various information to the surgical staff. With the trend from highly invasive open surgery to minimally invasive techniques such as laparoscopic surgery, single-port surgery or even NOTES, the number of additional monitors is likely to increase. Knowing which monitor the surgeon is looking at, and on which part of the monitor they are focused, allows for a wide variety of supporting systems, such as automatic adjustment of the endoscopic camera position through a robotic system. The goal of this work is to develop a method to recognize monitors with changing content through the cameras of head-mounted eye-tracking systems and to translate the detected gaze point into the coordinate frame of the detected monitor. The work will be based on an existing framework (developed in C#) that is able to detect a single monitor, which should be extended to an arbitrary number of monitors and distinguish between them. | DA/MA/BA | ||
Neural solver for PDEs | Hiwi | ||
New Mole Detection | Master Thesis | ||
Robust training of neural networks under noisy labels The performance of supervised learning methods highly depends on the quality of the labels. However, accurately labeling a large number of samples is a time-consuming task, which sometimes results in mislabeled data. When neural networks are trained with noisy labels, they may become biased towards the noise, and their performance can therefore be poor. While label noise has been widely studied in the machine learning community, only a few studies have attempted to identify or ignore noisy labels during training. In this project, we will investigate how to robustly train neural networks under noisy labels. In particular, we will focus on exploring effective learning strategies and loss correction methods to address the problem. | Master Thesis | Cagri Yildiz | |
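A minimal sketch of one such loss correction method is the "forward" correction with a noise transition matrix T, where T[i, j] = P(observed label j | true label i). Here T is assumed known for illustration; in practice it would have to be estimated, and the function names are illustrative only:

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """Cross-entropy against the *observed* noisy labels after pushing the
    model's clean-label predictions through the noise transition matrix."""
    clean_probs = F.softmax(logits, dim=1)   # model's belief over true labels
    noisy_probs = clean_probs @ T            # predicted distribution of observed labels
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

# toy usage: 3 classes, 10% symmetric label noise
T = torch.full((3, 3), 0.05)
T.fill_diagonal_(0.9)
logits = torch.randn(4, 3)
noisy_labels = torch.tensor([0, 2, 1, 0])
loss = forward_corrected_loss(logits, noisy_labels, T)
```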
Organs at Risk Detection and Localization for Radiation Therapy Planning using Transformers. In this project, we explore 3D transformers for organs-at-risk localization in volumetric medical imaging relevant for radiation therapy planning. The student will apply and develop state-of-the-art transformers for medical image detection and localization. | Master Thesis | ||
Real-Time Simulation of 3D OCT Images Optical Coherence Tomography (OCT) is widely used for diagnosis in ophthalmology and is also gaining popularity in interventional settings. OCT generates images through an image formation process similar to ultrasound imaging: coherent light waves are emitted into the tissue, and the light signal is partially reflected at discontinuities of optical density. The reflected light waves are then used to reconstruct a depth slice of the tissue. The ability to simulate such a modality in real time has many potential applications: for example, it can greatly help to evaluate image processing algorithms where ground truth is not easily obtainable. It is also a crucial part of a fully virtual simulation environment for ophthalmic interventions, which can be used for training as well as for prototyping visualization concepts. As a first step, existing simulation algorithms shall be reviewed and evaluated in terms of computational efficiency when adapted to 3D. A new or adapted algorithm shall then be proposed to support the simulation of OCT images from a volumetric model of the eye. This should consider an efficient implementation on the GPU as well as realistic simulation of the modality's artifacts, such as speckle noise, reflections and shadowing. | Master Thesis | ||
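Purely as an illustrative toy model (not a validated OCT simulator, and not the algorithm to be developed in this thesis), a single A-scan can be approximated by convolving a depth reflectivity profile with an axial point-spread function and applying multiplicative speckle:

```python
import numpy as np

def simulate_ascan(reflectivity, psf_sigma=2.0, seed=0):
    """Toy A-scan: band-limited depth response plus multiplicative speckle."""
    rng = np.random.default_rng(seed)
    z = np.arange(-10, 11)
    psf = np.exp(-0.5 * (z / psf_sigma) ** 2)
    psf /= psf.sum()                                    # axial point-spread function
    signal = np.convolve(reflectivity, psf, mode="same")
    speckle = rng.rayleigh(scale=1.0, size=signal.shape)
    return signal * speckle

ascan = simulate_ascan(np.random.rand(256))             # one simulated depth profile
```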
Self-supervised learning for out-of-distribution detection in medical applications Although recent neural networks have achieved great successes when the training and the testing data are sampled from the same distribution, in real-world applications it is rarely possible to control the test data distribution. It is therefore important for neural networks to be aware of their uncertainty when new kinds of inputs (so-called out-of-distribution inputs) are given. In this project, we consider the problem of out-of-distribution detection in neural networks. In particular, we will develop a novel self-supervised learning approach for out-of-distribution detection in medical applications. | Master Thesis | Abinav Ravi Venkatakrishnan | |
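One common self-supervised route, shown here only as a hedged sketch (the project's actual approach is to be developed), is rotation prediction: a network `rot_net` is trained in-distribution to predict which of four rotations was applied, and inputs on which it predicts rotations poorly are flagged as likely out-of-distribution:

```python
import torch
import torch.nn.functional as F

def rotation_ood_score(image, rot_net):
    """image: (1, C, H, W) with H == W; returns a score that is high for likely OOD inputs."""
    losses = []
    for k in range(4):                                   # 0, 90, 180, 270 degrees
        rotated = torch.rot90(image, k, dims=(2, 3))
        logits = rot_net(rotated)                        # (1, 4) rotation logits
        target = torch.tensor([k], device=image.device)
        losses.append(F.cross_entropy(logits, target))
    return torch.stack(losses).mean()                    # high loss -> likely OOD

# toy usage with an untrained stand-in network
rot_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 4))
score = rotation_ood_score(torch.rand(1, 3, 32, 32), rot_net)
```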
Seamless stitching for 4D opto-acoustic imaging Optoacoustic tomography enables high-resolution biological imaging based on the excitation of ultrasound waves due to the absorption of light. A laser pulse penetrates soft tissue up to a few centimeters in depth and provides 3D visualization of biological tissues. With its rich contrast and high spatial and temporal resolution, optoacoustic tomography is particularly well suited for vasculature imaging. The size of a single optoacoustic volume is limited by the size and the field of view of the scanner. In order to get a good overview of the finer biological structure, however, larger fields of view are necessary. During this master thesis we will investigate several methods for combining multiple volumes into one larger volume and evaluate which of the existing methods are adaptable to optoacoustic scans. Therefore, we need to find a way to align volumes to each other without having their positions tracked. Additionally, the voxels in the overlapping areas have to be blended in a way that avoids abrupt transitions or resolution losses, even though the resolution of each scan decreases around the volume edges and with distance to the scanner. Finally, we aim to propose a method to seamlessly stitch several optoacoustic scans into one high-resolution volume without any additional information on the position of the single volumes. | DA/MA/BA | Suhanyaa Nitkunanantharajah | |
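As a toy sketch of the blending step only (assuming two already-aligned volumes overlapping along one axis; alignment itself and the names below are not part of an existing method), a simple feathering blend weights each volume less towards its own edge:

```python
import numpy as np

def blend_overlap(vol_a, vol_b, overlap):
    """vol_a, vol_b: arrays (X, Y, Z) overlapping by `overlap` voxels along X."""
    w = np.linspace(1.0, 0.0, overlap)[:, None, None]    # feathering weights
    blended = w * vol_a[-overlap:] + (1 - w) * vol_b[:overlap]
    return np.concatenate([vol_a[:-overlap], blended, vol_b[overlap:]], axis=0)

a = np.random.rand(40, 32, 32)
b = np.random.rand(40, 32, 32)
stitched = blend_overlap(a, b, overlap=10)               # shape (70, 32, 32)
```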
Image based tracking for medical augmented reality in orthodontic application This Master Thesis suggests a low-cost Augmented Reality system, termed OrthodontAR, for orthodontic applications and examines image-based tracking techniques specific to orthodontic use. The procedure addressed is guided bracket placement for orthodontic correction using dental braces. Related research has developed FEM simulations based on cone-beam CT reconstructions of teeth and bone. Such simulations could be used to plan optimal bracket placement and wire tension, such that the patient's teeth move in an optimal manner while minimizing rotation. The benefits would include reduced overall chair time due to fewer corrections and a reduced likelihood of relapse due to reduced twisting. The system suggested in this thesis tackles the guided placement of brackets on the teeth, which is required to realize pre-procedure planning. Augmenting a video of the patient with the planned position of a newly placed bracket would suffice; the surgeon could then visually align the planned and actual positions in a video see-through head-mounted display (HMD). To reduce technical complexity, the system shall be fully image guided. It shall rely on information from both CT and video images to track the patient's jaw. The goal of this thesis is to develop and evaluate image-based methods to overlay the CT of the patient with the video image. A prototype system shall be evaluated in terms of robustness and accuracy to determine whether it meets practical requirements. | DA/MA/BA | Andre Aichert | |
Computer-aided Early Diagnosis of Pancreatic Cancer based on Deep Learning Pancreatic ductal adenocarcinoma (PDAC) remains one of the deadliest cancers worldwide, and most cases are diagnosed at an advanced and incurable stage (1). For the year 2020, it is estimated that the number of cancer deaths caused by PDAC will surpass colorectal and breast cancer, making it responsible for the most cancer deaths after lung cancer (2). This lethal nature of PDAC has led to the consensus of screening high-risk individuals (HRIs) at an early, curable stage to improve survival (3-6). However, there is currently no non-invasive imaging method available for effective screening of PDAC. Strong evidence has shown that the pathological progression from normal ductal tissue to PDAC proceeds via precursor lesions, such as pancreatic intraepithelial neoplasia (