MaDynamicUiGen

Chair for Computer Aided Medical Procedures & Augmented Reality

Dynamic User Interface Generation for Workflow-Sensitive Medical Displays

Thesis by: Mirije Shefiti
Advisor: Prof. Nassir Navab
Supervision by: Ralf Stauder, Alexandru Duliu
Timeframe: six months

Abstract

In a modern operating room, a plethora of medical devices is available, each coming with its own display and user interface. Due to sterility requirements, however, the surgeon cannot interact with any of these devices directly and has to rely on the circulating nurse to carry out spoken commands on them. This level of indirection leads to delays and often to frustration. In addition, all devices have to be placed around the patient with a certain safety margin and are therefore not in an optimal viewing position for the surgeon. As a result, the surgeon usually has to interrupt the procedure and turn towards a screen in order to use the displayed information.

In this thesis, a method is to be developed that automatically generates a suitable UI from a purely semantic, workflow-dependent description. The generated UI should mainly show a single, selected image source (e.g. a pre-operative dataset or an intra-operative image stream) together with a limited number of important control elements. For this purpose, specific medical interventions have to be analyzed in order to identify their surgical workflow and the image sources and control elements most important to them. A prototypical solution should then be implemented and tested with medical partners. If time permits, devices with different form factors should be compared with regard to surgical usability.
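
The following minimal sketch illustrates the idea of such a semantic, workflow-dependent description; it is purely hypothetical and not part of the thesis, and all phase names, image sources, and control elements are invented for illustration. A simple generator selects what to display for the current surgical phase:

<verbatim>
# Minimal, hypothetical sketch: a semantic, workflow-dependent UI description
# and a trivial generator that selects what to show for the current phase.
from dataclasses import dataclass, field


@dataclass
class PhaseUiSpec:
    """Semantic description of the UI for one workflow phase."""
    image_source: str                              # e.g. "endoscope", "preop_ct"
    controls: list = field(default_factory=list)   # few, most important controls


# Invented example description for a laparoscopic intervention.
UI_DESCRIPTION = {
    "preparation": PhaseUiSpec("preop_ct", ["window_level", "slice_select"]),
    "dissection":  PhaseUiSpec("endoscope", ["light_intensity", "record"]),
    "closing":     PhaseUiSpec("endoscope", ["record"]),
}


def generate_ui(current_phase: str) -> PhaseUiSpec:
    """Return the UI specification for the given workflow phase."""
    return UI_DESCRIPTION[current_phase]


if __name__ == "__main__":
    spec = generate_ui("dissection")
    print(f"Show {spec.image_source} with controls: {', '.join(spec.controls)}")
</verbatim>

In a full solution, such a description would additionally have to be derived from the analyzed surgical workflow and would need to cover layout, rendering, and detection of the current phase.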

Students.ProjectForm
Title: Dynamic User Interface Generation for Workflow-Sensitive Medical Displays
Student: Mirije Shefiti
Director: Prof. Nassir Navab
Supervisor: Ralf Stauder, Alexandru Duliu
Type: Master Thesis
Area: Surgical Workflow, Computer-Aided Surgery
Status: finished
Start: 2014-08-15
Finish: 2015-02-15
Thesis (optional):  
Picture:  

