Design, Development and Evaluation of a Multimodal User Interface for Medical In-Situ Visualization
Thesis by: Samuel Kerschbaumer
Advisor:
Nassir Navab
Supervised by:
Christoph Bichlmeier
Abstract
In-situ visualization of medical image data using a head-mounted display allows virtual objects to be presented within the Augmented Reality (AR) scene from the natural point of view. However, manipulating the visualization remains a major problem: the user must be able to adjust parameters to obtain the desired view of the region of interest. In addition,
at different stages of intraoperative navigation procedures, the visualization of the various instruments
and navigational information has to be adapted to the needs of the
operating surgeon. Yet operating rooms leave almost no room for classical interfaces
such as buttons, pedals, keyboards, and mice: all tools close to the operation site must
be sterile, and the space around the operating table is reserved for surgical equipment.
In this thesis, new interface concepts for interacting with the AR scene and manipulating
virtual objects are developed. An optimal user interface must exploit the advantages of
AR without hindering the user with overly complex or space-consuming tools, as such tools
would drastically reduce the acceptance of AR in the operating room.
As a result of this thesis, three input modalities are implemented, based on hand detection,
a foot pedal, and voice recognition. A user study compares the three interfaces and
shows their respective strengths and weaknesses.