Focus and Context Visualization for Medical Augmented Reality
supervised by Christoph Bichlmeier and Nassir Navab

This diploma thesis concerns focus and context visualization of anatomical data (obtained from different imaging modalities) for medical augmented reality (AR) and thereby the correct fusion of real and virtual anatomy. Medical AR is intended to be used for pre-operative diagnosis, intra-operative guidance, and post-operative control, but it is still at the research stage and has not yet been applied in clinical practice. It is a technique that augments the surgeon's real view of the patient with virtually visualized anatomy of the patient's body. A medical AR system used for focus and context visualization includes a tracking system and a video see-through head-mounted display (HMD), enabling a stereoscopic, augmented view of the AR scene.

Focus refers to the part of the virtual anatomy the observer (surgeon) is interested in, e.g. the operation site, which has to be perceived at the correct location relative to the context of the real skin. In a possible future application of medical AR for surgical interventions, correct perception of the position and depth of the focus has to be guaranteed. If the perception of the focus is disturbed or misleading, there is a danger that the surgeon operates at the wrong location and injures vital organs of the patient. Many visualization approaches for medical AR suffer from misleading depth perception, since the normally hidden interior anatomy is simply superimposed on the patient's body; the virtual anatomy then appears to be located in front of the human body. Partial occlusion of the virtual anatomy by the real skin can solve this problem. This thesis further discusses methods for improving the perception of the layout (arrangement) and distances of objects in the AR scene. Visual cues for the perception of the layout and distances of the focused virtual anatomy can be enabled by exploiting context information, which can be provided both by a correct integration of the camera image recorded by the color cameras mounted on the HMD and by non-focus parts of the virtual anatomy.

Within the scope of the practical work of this thesis, a focus and context visualization framework for medical AR was implemented that considers and exploits depth cues enabling a correct perception of the focused virtual anatomy. To this end, general principles and methods for creating and designing focus and context visualizations are taken into account, mainly adapted from hand-made illustration techniques. The framework provides a correct fusion of real and virtual anatomy and realizes an intuitive view of the focused anatomy of the patient. It includes a new technique for modifying the transparency of the camera image of the real body, where the transparency is adjusted by means of properties (e.g. curvature) of a virtual skin model. Additionally, a method is introduced for clipping away parts of the anatomy that hinder the view onto the focus. The framework also contains methods for integrating surgical or endoscopic instruments into the medical AR scenario: instruments are virtually extended as soon as they penetrate the patient's body, the penetration port is highlighted, and virtual shadows provide visual feedback during instrument interaction.
The effectiveness of the developed techniques is demonstrated in a cadaver study and on a thorax phantom, both visualizing the anatomy of the upper body, and in an in-vivo study visualizing the head.
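The curvature-driven transparency can be illustrated with a small sketch. The following C++ fragment is not the thesis implementation; it merely shows, under an assumed transfer function and invented parameter names, how a per-fragment opacity of the camera image might be derived from properties of the virtual skin model such as curvature: flat regions facing the viewer become transparent and reveal the focus, while strongly curved and silhouette regions stay opaque and preserve the context of the real skin.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical per-fragment opacity of the camera image, derived from
// skin-model properties. The concrete transfer function and the tuning
// constant are assumptions for illustration only.
//
// kappa:   local curvature magnitude of the virtual skin model
// cosView: cosine of the angle between surface normal and view ray
// Returns an opacity in [0,1]: curved or silhouette-like regions stay
// opaque (context), flat frontal regions turn transparent (focus).
double skinOpacity(double kappa, double cosView)
{
    const double kappaScale = 4.0;                     // assumed tuning value
    double curved  = 1.0 - std::exp(-kappaScale * std::fabs(kappa));
    double grazing = 1.0 - std::fabs(cosView);         // silhouettes stay visible
    return std::clamp(std::max(curved, grazing), 0.0, 1.0);
}

int main()
{
    // Flat patch seen head-on: nearly transparent, focus shines through.
    std::printf("flat, frontal: %.2f\n", skinOpacity(0.02, 1.0));
    // Strongly curved patch: stays opaque, preserving skin features.
    std::printf("curved       : %.2f\n", skinOpacity(0.9, 0.8));
    // Silhouette region: opaque regardless of curvature.
    std::printf("silhouette   : %.2f\n", skinOpacity(0.05, 0.1));
    return 0;
}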
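The virtual instrument extension can likewise be sketched. Assuming the skin is locally approximated by a plane (the actual framework uses the registered skin model) and the instrument is tracked by two points, the penetration port and the hidden segment follow from a simple line intersection; all names and values below are hypothetical:

#include <cstdio>

// Hedged sketch: once the tracked instrument tip passes the skin
// surface, the hidden part is rendered virtually and the penetration
// port is highlighted. The skin is approximated by the plane z = 0
// (body below); this is illustrative test data, not the thesis code.
struct Vec3 { float x, y, z; };

int main()
{
    Vec3 shaftEnd = { 0.0f, 0.0f,  5.0f };  // tracked end outside the body
    Vec3 tip      = { 0.0f, 0.0f, -2.0f };  // tracked tip, now inside

    if (tip.z < 0.0f) {
        // Parametric intersection of the shaft with the skin plane z = 0.
        float t = shaftEnd.z / (shaftEnd.z - tip.z);
        Vec3 port = { shaftEnd.x + t * (tip.x - shaftEnd.x),
                      shaftEnd.y + t * (tip.y - shaftEnd.y),
                      0.0f };
        // The segment port -> tip would be drawn as the virtual
        // extension, and the port itself highlighted in the AR view.
        std::printf("penetration port at (%.1f, %.1f, %.1f)\n",
                    port.x, port.y, port.z);
        std::printf("virtual segment: port -> tip (%.1f, %.1f, %.1f)\n",
                    tip.x, tip.y, tip.z);
    }
    return 0;
}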
In-situ visualization in medical augmented reality (AR), using for instance a video see-through head-mounted display (HMD) and an optical tracking system, enables a stereoscopic view of visualized CT data registered with the real anatomy of a patient. The data can be aligned with the required accuracy, and surgeons no longer have to analyze data on an external monitor or on images attached to a wall somewhere in the operating room. Thanks to such a medical AR system, surgeons get a direct view onto and also "into" the patient; mental registration of medical imagery with the operation site is no longer necessary. In addition, surgical instruments can be augmented inside the human body. Bringing medical imagery and surgical instruments into the same field of action provides a highly intuitive way to understand the patient's anatomy within the region of interest and allows for the development of completely new generations of surgical navigation systems. Unfortunately, this method of presenting medical data suffers from a serious drawback: virtual imagery, such as a volume-rendered spinal column, can only be displayed superimposed on real objects. If virtual entities of the scene are expected behind real ones, as with the virtual spinal column beneath the real skin surface, this leads to an incorrect perception of the viewed objects' distances to the observer. Interposition, a strong visual depth cue, is responsible for this misleading depth perception. This project aims at the development and evaluation of methods to improve depth perception for in-situ visualization in medical AR. Its intention is to provide an extended view onto the human body that allows an intuitive localization of visualized bones and tissue.
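As a rough illustration of how partial occlusion can restore the interposition cue, the following C++ sketch composites a camera-image pixel with a virtual-anatomy pixel depending on their depth order. The names, depth values, and the fixed skin opacity are assumptions made for this example, not the project's actual renderer, which would perform the equivalent operation on the GPU:

#include <cstdio>

// Illustrative compositing rule: when the virtual anatomy lies behind
// the registered skin surface, it is shown only partially (weighted by
// the skin opacity), so it is perceived *inside* the body rather than
// floating in front of it.
struct Rgb { float r, g, b; };

Rgb composite(Rgb cam, Rgb anat, float skinDepth, float anatDepth,
              float alphaSkin)
{
    if (anatDepth <= skinDepth) {
        return anat;  // anatomy in front of the skin: fully visible
    }
    // Anatomy behind the skin: partial occlusion by the camera image.
    return { alphaSkin * cam.r + (1.0f - alphaSkin) * anat.r,
             alphaSkin * cam.g + (1.0f - alphaSkin) * anat.g,
             alphaSkin * cam.b + (1.0f - alphaSkin) * anat.b };
}

int main()
{
    Rgb skinPixel  = { 0.8f, 0.6f, 0.5f };  // camera image (real skin)
    Rgb spinePixel = { 0.9f, 0.9f, 0.9f };  // volume-rendered bone

    // Spine lies behind the skin surface: mostly skin, a hint of bone.
    Rgb behind = composite(skinPixel, spinePixel, 1.0f, 1.8f, 0.7f);
    std::printf("behind skin: %.2f %.2f %.2f\n", behind.r, behind.g, behind.b);

    // A virtual entity in front of the skin stays fully visible.
    Rgb front = composite(skinPixel, spinePixel, 1.0f, 0.4f, 0.7f);
    std::printf("in front   : %.2f %.2f %.2f\n", front.r, front.g, front.b);
    return 0;
}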