ARPatterns

Chair for Computer Aided Medical Procedures & Augmented Reality

THIS WEBPAGE IS DEPRECATED - please visit our new website

Software Patterns for Augmented Reality Systems

This work is based on a study of software architectures for Augmented Reality systems commissioned by the ARVIKA consortium. In this study we identified a core set of subsystems that can be found, in different forms, in almost every existing system. Each of these subsystems can be realized with different approaches, each with its own advantages and disadvantages. Here we present a collection of these approaches together with a systematic description.

This view is heavily based on the idea of patterns in software architecture. Patterns are structured descriptions of successfully applied problem-solving knowledge. We are not so bold as to call the approaches we identified patterns, as we would need more data for that - but in the future, we hope to be able to set up a system of AR patterns. Each approach is described by its name, goal, motivation, description, usability, consequences, and known uses in projects. This follows the usual scheme for describing architectural or design patterns.

The approaches are structured into the identified AR subsystems: Application, Control, Tracking, World Model, Context, Presentation

Note: this site is in the process of moving to our new ARPatterns web.


Application Subsystem

The application subsystem is the place where the application developer can add the application-specific logic. There are various solutions possible with different advantages and disadvantages.

Main Executable

  • Goal: Keep the flow of control.
  • Motivation: The main parts of an application are independent of Augmented Reality. AR is only one part among others and is only used to visualize some content.
  • Description: Write the application in a high-level programming language, explicitly describing what happens when (see the sketch below).
  • Usability: Use this approach if it is necessary to keep the control flow, for example to guarantee real-time constraints for non-AR subsystems, e.g. reacting to external events. The disadvantage is that it is up to the application developer to implement the continuous update of registration and the rendering.
  • Consequences: The modifiability of the application is low.
  • Known use: MR Platform
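
A minimal sketch of this approach in C++ (all class and function names here are hypothetical placeholders, not the API of any particular AR library): the application owns the control flow and explicitly updates registration and rendering itself.

    #include <atomic>

    struct Pose { double position[3]; double orientation[4]; };

    class Tracker {              // placeholder for a concrete tracking device
    public:
        Pose currentPose() const { return Pose{}; }
    };

    class Renderer {             // placeholder for a renderer, e.g. OpenGL-based
    public:
        void setViewpoint(const Pose&) {}
        void drawAugmentations() {}
    };

    int main() {
        Tracker tracker;
        Renderer renderer;
        bool running = true;

        while (running) {
            // Application-specific logic comes first; it may react to external
            // events and decide what to augment, or set running = false to quit.

            // The application developer is responsible for keeping registration
            // up to date and for triggering rendering.
            renderer.setViewpoint(tracker.currentPose());
            renderer.drawAugmentations();
        }
        return 0;
    }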

Scripting

  • Goal: Quickly develop new applications.
  • Motivation: The real-time constraints of a user application are often not very strong, so that it is possible to quickly develop new applications in a scripting language supported by a powerful environment.
  • Description: For the development of an application, there is a scripting wrapper around all components that have performance constraints. These components are written in compiled languages such as C++ and offer scripting interfaces (see the sketch below).
  • Usability: The development of scripted applications allows rapid prototyping but demands powerful components that implement important functionality. The disadvantage is that the scripting approach is not suited for very complex applications.
  • Consequences: A script interpreter is needed, as well as (possibly) a special scripting language for AR.
  • Known use: Chair.ImageTclAR, Karma, Coterie, MARS, EMMIE
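
A minimal sketch of the idea, assuming a home-grown one-command-per-line script format rather than a full scripting language such as Tcl or Python: the performance-critical functionality lives in compiled C++ components, and a thin interpreter dispatches script commands to them. The file name and command names are made up for illustration.

    #include <fstream>
    #include <functional>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    // Compiled, performance-critical component exposed to scripts.
    struct ArEngine {
        void loadScene(const std::string& file) { std::cout << "loading " << file << "\n"; }
        void showArrowAt(const std::string& target) { std::cout << "arrow at " << target << "\n"; }
    };

    int main() {
        ArEngine engine;

        // Scripting wrapper: command name -> compiled implementation.
        std::map<std::string, std::function<void(const std::string&)>> commands = {
            {"load",  [&](const std::string& arg) { engine.loadScene(arg); }},
            {"arrow", [&](const std::string& arg) { engine.showArrowAt(arg); }},
        };

        // The "script" is a plain text file, e.g.:  load workshop.wrl
        std::ifstream script("application.script");   // hypothetical file name
        std::string line;
        while (std::getline(script, line)) {
            std::istringstream tokens(line);
            std::string cmd, arg;
            tokens >> cmd >> arg;
            auto it = commands.find(cmd);
            if (it != commands.end()) it->second(arg);
        }
        return 0;
    }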

Node in Scene Graph

  • Goal: Embed application in world model.
  • Motivation: In Augmented Reality, user interaction is connected with the physical environment. Consequently applications are often linked to places in the real world. With this approach, the application is seamlessly embedded in the environment.
  • Description: A scene graph models the world around a user as a tree of nodes. Each node can be any type of object; most are graphical, but there are also non-graphical nodes that contain control code (see the sketch below).
  • Usability: Together with a scene graph-based rendering approach.
  • Consequences: With this approach, the application hands the control flow over to the underlying scene graph platform, e.g. Open Inventor. On the other hand, it offers a relatively easy way to implement shared applications for co-located users: one 3D interface can be shared among several users but displayed for each user from a different viewpoint.
  • Known use: Studierstube
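
A minimal sketch of the idea with a hypothetical scene graph (a real system such as Studierstube builds on an existing toolkit like Open Inventor): the application is implemented as a node whose control code runs whenever the graph is traversed.

    #include <memory>
    #include <vector>

    // Hypothetical minimal scene graph; a real system would use e.g. Open Inventor.
    class Node {
    public:
        virtual ~Node() = default;
        virtual void traverse() {}               // called once per frame by the platform
        void addChild(std::shared_ptr<Node> c) { children.push_back(std::move(c)); }
    protected:
        std::vector<std::shared_ptr<Node>> children;
    };

    class GeometryNode : public Node { /* renders a 3D model */ };

    // The application itself is just another node in the world model.
    class RepairGuideApp : public Node {          // hypothetical application
    public:
        void traverse() override {
            // Control code runs whenever the scene graph is traversed,
            // e.g. advance to the next repair step and update child geometry.
            for (auto& c : children) c->traverse();
        }
    };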

Part of Event Loop

  • Goal: Let an AR library do the tracking and rendering and call the application within the tracking-rendering loop.
  • Motivation: The tracking and rendering must be done in a regular loop that updates the user's view based on her motion. Embed the application into this loop.
  • Description: To ease the development of AR applications, some libraries provide the needed low-level functionality to update the user's view regularly. The application's task is to provide hooks that are called within the update loop and that may react to changes in the view (see the sketch below).
  • Usability: With a library for tracking and rendering.
  • Consequences: The control flow is managed by the update loop of the tracking-rendering system.
  • Known use: ARToolKit
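
A minimal sketch, not the actual ARToolKit API: the library owns the tracking-rendering loop and calls an application-provided hook once per frame.

    #include <functional>

    struct Pose { double position[3]; double orientation[4]; };

    // Hypothetical AR library that owns the tracking-rendering loop.
    class ArLoop {
    public:
        using FrameHook = std::function<void(const Pose&)>;
        void setFrameHook(FrameHook h) { hook = std::move(h); }
        void run() {
            while (true) {
                Pose p = track();            // library updates the user's pose
                if (hook) hook(p);           // application reacts to the new view
                render(p);                   // library renders the augmented view
            }
        }
    private:
        Pose track() { return Pose{}; }
        void render(const Pose&) {}
        FrameHook hook;
    };

    int main() {
        ArLoop loop;
        loop.setFrameHook([](const Pose& p) {
            // Application code: decide what to display for the current view.
            (void)p;
        });
        loop.run();                          // the library keeps control from here on
    }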

Web Service

  • Goal: Treat AR as one type of media among others.
  • Motivation: For content-based applications the web-based approach has proven to be reasonable. AR scenes and world model information can be seen as an AR document. A scene such as an arrow that points to a particular button in front of the user is then described in a document that is loaded from a web server.
  • Description: The control flow is situated on a web server and implemented within a web service. This web service is published under a particular web address and the answer of the service is rendered on a web client. If the answer contains Augmented Reality content then the AR component is activated to display the given AR content.
  • Usability: This approach can be used where the focus is on displaying various types of content and loading them dynamically from a server.
  • Consequences: The client and the server must be connected. If a connection cannot be guaranteed then there must be a proxy available locally that emulates the server. Alternatively, a smaller instance of the server component may be deployed on the client machine. This approach should be combined with a scene-based rendering component, e.g. a VRML or custom AR browser.
  • Known use: ARVIKA

Multimedia Flow Description

  • Goal: Use high-level description language to describe AR scenes.
  • Motivation: For the development of multimedia content, there are several formats that simplify the creation of new content by providing high-level concepts such as timers. Examples of such languages are SMIL and Macromedia Flash. In addition to general-purpose languages for multimedia content, there are domain-specific languages for particular fields, for example description languages for workflows or for technical manuals (IETMs).
  • Description: A high-level markup language provides domain-specific components and concepts that help to create new content quickly. For example, to support a training scenario for unskilled workers, the AR system should visualize a sequence of AR scenes and other documents. To describe such a scenario, the content creator combines workflow steps and adds content to each step. An execution engine for workflows reads such a description and controls the presentation of the current working step (see the sketch below).
  • Usability: This approach can be used for applications with a meta-model for the description of documents and their relationship and dependencies combined with a component that reads and executes such descriptions.
  • Consequences: The complexity of this approach is higher for simple applications. This approach should be combined with a scene-based rendering component, e.g. a VRML or custom AR browser.
  • Known use: STAR, ARVIKA, DWARF
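
A minimal sketch of the execution-engine side, with a hypothetical step structure; parsing the actual markup (e.g. an XML dialect) is omitted.

    #include <iostream>
    #include <string>
    #include <vector>

    // One step of a workflow description, e.g. parsed from an XML-based
    // markup language (the parsing itself is omitted here).
    struct WorkflowStep {
        std::string instruction;   // text shown to the worker
        std::string arScene;       // AR scene to display for this step
    };

    // Minimal execution engine: presents one step at a time.
    class WorkflowEngine {
    public:
        explicit WorkflowEngine(std::vector<WorkflowStep> s) : steps(std::move(s)) {}
        bool done() const { return current >= steps.size(); }
        void presentCurrentStep() const {
            std::cout << steps[current].instruction
                      << " (scene: " << steps[current].arScene << ")\n";
        }
        void nextStep() { ++current; }
    private:
        std::vector<WorkflowStep> steps;
        std::size_t current = 0;
    };

    int main() {
        WorkflowEngine engine({
            {"Open the maintenance hatch", "hatch_arrow.wrl"},
            {"Replace the filter",         "filter_highlight.wrl"},
        });
        while (!engine.done()) { engine.presentCurrentStep(); engine.nextStep(); }
    }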

Control Subsystem

Augmented Reality systems tend to concentrate more on output than on input; however, interest in new user input techniques and architectures is growing. In this section, we concentrate on architectural approaches to combining user input, not on the input devices themselves.

Handle in Application

  • Goal: Keep the system architecture simple; provide high-fidelity interfaces.
  • Motivation: The simplest way to handle user input for a specific application is to hard-code it into the application logic itself. Also, this allows custom-tailored input styles for each application.
  • Description: Include input handling code in the application code, with explicit references to the types of input devices.
  • Usability: Within a main executable application.
  • Consequences: Potential for high-fidelity interfaces; reduced modifiability.
  • Known use:

Use Browser Input Functions

  • Goal: Take advantage of existing functionality.
  • Motivation: When using a browser for rendering, you can take advantage of its input features.
  • Description: VRML browsers can send events out through the EAI interface when the user clicks on on-screen objects with the mouse or when the gaze direction coincides with certain objects. Other browsers provide similar functionality.
  • Usability: Together with a browser-based output subsystem.
  • Consequences: Separating input from output modalities and integration into multimodal systems can be difficult. Since wearable systems do not generally have a mouse, the mouse movement must be simulated with another input modality.
  • Known use: ARVIKA

Networked Input Devices

  • Goal: Combine the data of various input devices to form multi-modal user interfaces.
  • Motivation: Multi-modal interfaces require many simultaneous input devices. Modeling all possible combinations is exponentially complex.
  • Description: Provide an abstraction layer for input devices and a description of how the user input can be combined; interpret this description using a controller component (see the sketch below). Use middleware to find new input devices dynamically.
  • Usability: Good when combined with multiple viewers for output.
  • Consequences: Allows integration of new input devices at run time or when building new systems.
  • Known use: DWARF
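
A minimal sketch of the abstraction layer and a fusing controller; in a real system, a middleware would announce new devices at run time, and the combination rules would come from a description rather than being hard-coded. All names here are illustrative.

    #include <functional>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    // Abstraction layer for input devices; concrete devices (speech,
    // gesture, touchpad, ...) hide behind the same interface.
    struct InputEvent { std::string modality; std::string token; };

    class InputDevice {
    public:
        virtual ~InputDevice() = default;
        using Callback = std::function<void(const InputEvent&)>;
        void onEvent(Callback cb) { callback = std::move(cb); }
    protected:
        Callback callback;
    };

    // Controller that fuses events from all currently known devices.
    class MultimodalController {
    public:
        void addDevice(std::shared_ptr<InputDevice> dev) {
            dev->onEvent([this](const InputEvent& e) { combine(e); });
            devices.push_back(std::move(dev));
        }
    private:
        void combine(const InputEvent& e) {
            // Interpret the combination description, e.g. a "select" gesture
            // plus a spoken object name forms one multimodal command.
            lastEvents[e.modality] = e.token;
        }
        std::vector<std::shared_ptr<InputDevice>> devices;
        std::map<std::string, std::string> lastEvents;
    };

    // Example device wrapper: a speech recognizer that reports recognized words.
    class SpeechDevice : public InputDevice {
    public:
        void recognized(const std::string& word) {
            if (callback) callback({"speech", word});
        }
    };

    int main() {
        MultimodalController controller;
        auto speech = std::make_shared<SpeechDevice>();
        controller.addDevice(speech);
        speech->recognized("delete");   // simulated user input
    }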

Tracking Subsystem

Without tracking, Augmented Reality is impossible. In this section, we concentrate on architectural approaches to gathering tracking data, not on the tracking devices or algorithms themselves.

Tracking Server

  • Goal: Use a centralized tracking server to reduce hardware requirements on the client system.
  • Motivation: There are many different tracking technologies available: magnetic, video based, inertial, or combinations thereof. Calculating the user's pose from raw data may require significant computing power. One strategy to avoid this is to offload the computation to a server in the user's environment and only transfer the result to the client system.
  • Description: A tracking server in the user's environment collects the raw data from the tracking devices, computes the user's pose, and transfers only the result to the client system over the network (see the sketch below).
  • Usability: Use when integrating a commercial tracking system.
  • Consequences: The client depends on the availability of a connection to the server.
  • Known use: STAR, UbiCom.
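
A minimal sketch of the client side under the assumption of a simple packet-per-pose protocol; the packet layout and function names are made up for illustration, and the actual network code is omitted.

    #include <cstdint>

    // Hypothetical wire format sent by the tracking server: one packet per
    // pose update, already fused from all trackers in the environment.
    struct PosePacket {
        uint64_t timestampMicros;
        double   position[3];      // x, y, z in the world coordinate frame
        double   orientation[4];   // quaternion
    };

    // Stand-in for the actual transport (e.g. a TCP or UDP connection to the
    // server); only the computed result crosses the network, not raw sensor data.
    bool receiveNextPose(PosePacket& out) {
        (void)out;                 // socket code omitted in this sketch
        return false;
    }

    int main() {
        PosePacket pose;
        while (receiveNextPose(pose)) {
            // Hand the server-computed pose to the rendering subsystem,
            // e.g. renderer.setViewpoint(pose).
        }
    }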

Networked Trackers

  • Goal: Combine the data of various trackers in the environment without knowing the physical location.
  • Motivation: The best results for tracking can be achieved by combining various tracking methods. Usually, the possibilities to connect tracking devices physically to the client device or a tracking server are restricted. As a solution, every tracker is wrapped with a software interface that masks the location of the tracker by providing a network-accessible interface. To gather the data of all trackers, each tracker is accessed remotely. If supported by the middleware, the lookup can be done by name, not by network address.
  • Description: For each tracking device, provide a wrapper that uses middleware concepts such as CORBA. The wrapper provides an interface to the tracker and registers itself in the network. Components that need a tracker (consumers) look for them through middleware services and connect to them; they search for trackers by name, not by address (see the sketch below). Once connected, the tracker and the consumer communicate transparently.
  • Usability: This approach can be used when the client system can connect to tracking devices over the network.
  • Consequences: The advantage is that virtually any number of tracking devices of different types can be combined. The disadvantage is the overhead of network communication.
  • Known use: ARVIKA, Studierstube, DWARF.
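
A minimal sketch of the consumer side, with the middleware naming service replaced by a plain in-process map for illustration; in a real system, the Tracker interface would be defined in an IDL and resolved through, e.g., the CORBA naming service.

    #include <map>
    #include <memory>
    #include <string>

    struct Pose { double position[3]; double orientation[4]; };

    // Network-transparent tracker interface; in a real system this would be
    // an IDL-defined interface of a middleware such as CORBA.
    class Tracker {
    public:
        virtual ~Tracker() = default;
        virtual Pose latestPose() = 0;
    };

    // Stand-in for the middleware naming service: consumers look trackers
    // up by name, not by network address.
    class NamingService {
    public:
        void registerTracker(const std::string& name, std::shared_ptr<Tracker> t) {
            trackers[name] = std::move(t);
        }
        std::shared_ptr<Tracker> resolve(const std::string& name) {
            auto it = trackers.find(name);
            return it == trackers.end() ? nullptr : it->second;
        }
    private:
        std::map<std::string, std::shared_ptr<Tracker>> trackers;
    };

    // Consumer side: once resolved, communication is transparent.
    void consume(NamingService& naming) {
        if (auto head = naming.resolve("HeadTracker")) {
            Pose p = head->latestPose();
            (void)p;  // feed into sensor fusion or rendering
        }
    }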

Operating System Resources

  • Goal: Directly connect tracking devices to client system.
  • Motivation: If no network connection is available, only devices that are physically connected to the client device can be used. Inertial trackers and video cameras in particular are small enough to be attached directly.
  • Description: The tracking devices are accessed through drivers for the operating system.
  • Usability: The client device must be powerful enough to execute the tracking algorithms.
  • Consequences: The advantage is that no network connection is needed; the disadvantages are that only devices that can be physically connected to the client device can be used and that the computation must be done completely on the client device.
  • Known use: MR Platform, ARToolKit

World Model Subsystem

The World Model is used to describe the world around the user, particularly the virtual objects and their position. Besides that, the world model must also store information about the marker positions or any other features required for tracking.

OpenGL Code

  • Goal: Describe virtual objects in OpenGL source code and load objects at runtime from library.
  • Motivation: To be able to render objects in an AR system, it is enough to write some OpenGL code, compile it, and let the renderer execute it. For testing purposes, this suffices.
  • Description: The developer writes OpenGL code and calls the OpenGL rendering engine to display it. For correct registration with the user's pose, the position and orientation of the virtual camera that looks at the scene can be changed; this is usually done in the tracking-rendering update loop (see the sketch below).
  • Usability: When only small scenes need to be presented, simple OpenGL code can easily be written and tested, or existing code for scenes may be reused.
  • Consequences: The advantage is that an AR system is easy to test this way. However, this approach does not scale well, as OpenGL is a fairly low-level library for 3D graphics.
  • Known use: ARToolKit
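
A minimal sketch using the classic fixed-function OpenGL API: the tracker's pose determines the modelview matrix, and the virtual object itself is nothing more than hand-written OpenGL calls. Window and context setup (e.g. with GLUT) are omitted.

    #include <GL/gl.h>

    // Called once per frame from the tracking-rendering loop.
    void drawAugmentation(const double modelview[16]) {
        // The tracker determines the virtual camera: load the pose computed
        // from the current user/camera position as the modelview matrix.
        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixd(modelview);

        // The "world model" is plain OpenGL code: a single colored triangle
        // standing in for the virtual object to be overlaid.
        glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(0.1f, 0.0f, 0.0f);
        glVertex3f(0.0f, 0.1f, 0.0f);
        glEnd();
    }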

Scene Graph Format

  • Goal: Use a standard format with authoring tools to generate content and rendering engines to display them.
  • Motivation: Scene graphs are the standard component for displaying virtual environments. Most scene graph libraries can read scene descriptions in several formats, the best-known being VRML.
  • Description: With an authoring tool, a content developer creates the model of a virtual scene. In industrial contexts, scenes created with CAD tools can be simplified and reused. The scene description is usually saved in the file system and handed to the AR system for processing.
  • Usability: Can be used where a scene graph is used for rendering, the scene does not change too often, and the scenes are discrete without interconnection with each other.
  • Consequences: Simplifies the creation of new content as authoring tools can be reused and fits perfectly to the scene graph component.
  • Known use: ARVIKA, DWARF, MR Platform.

Object Stream

  • Goal: Serialize world model in main memory to disk.
  • Motivation: Serialization is a well-known technique to save runtime objects. This can be reused for objects in a scene graph at runtime.
  • Description: The runtime environment allows objects to be serialized to disk. The next time the application is started, the objects are recreated by deserializing them. Recursively, a whole scene graph can be loaded from disk (see the sketch below).
  • Usability: Usable where the world model is created by the system user and the system only has to read its own formats.
  • Consequences: An object stream can be read only by systems that know the classes of the objects. Thus, an object stream is a proprietary format.
  • Known use: Tinmith
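
A minimal sketch of the idea (not Tinmith's actual mechanism): each node knows how to write itself and its children to a stream and to recreate the structure from that stream on the next start. A real implementation would also store a class identifier per object so the correct subclass can be re-created on loading.

    #include <fstream>
    #include <memory>
    #include <string>
    #include <vector>

    struct Node {
        std::string name;
        std::vector<std::unique_ptr<Node>> children;

        void serialize(std::ostream& out) const {
            out << name << ' ' << children.size() << '\n';
            for (const auto& c : children) c->serialize(out);
        }
        static std::unique_ptr<Node> deserialize(std::istream& in) {
            auto node = std::make_unique<Node>();
            std::size_t count = 0;
            in >> node->name >> count;
            for (std::size_t i = 0; i < count; ++i)
                node->children.push_back(deserialize(in));
            return node;
        }
    };

    int main() {
        Node root{"world", {}};
        root.children.push_back(std::unique_ptr<Node>(new Node{"table", {}}));
        { std::ofstream out("world.stream"); root.serialize(out); }
        std::ifstream in("world.stream");
        auto restored = Node::deserialize(in);   // re-created on the next start
    }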

Configuration File for Marker Positions

  • Goal: Read configuration data for the trackers from a flat file.
  • Motivation: The easiest way to tell a tracker what to look for and how to interpret it is to write it to a file and read it every time the system starts.
  • Description: At system startup, or any time the system enters a new environment, the trackers have to know what to look for and how to interpret it. The tracking component is therefore given a file that describes the markers or natural features it has to look for; it loads the file at startup time or upon request at run time (see the sketch below).
  • Usability: Use if only a small number of markers is needed. With many markers, the same problems arise as with file-based scene graphs.
  • Consequences: Does not scale well for data more complex than marker positions or for dynamically changing data.
  • Known use: ARVIKA, MR Platform, ARToolkit
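
A minimal sketch, assuming a hypothetical flat file format with one marker per line (pattern file, size, and world position); real systems such as ARToolKit use their own configuration formats.

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // One marker the tracker should look for, and where it is in the world.
    struct MarkerInfo {
        std::string patternFile;   // e.g. the trained pattern image
        double sizeMeters;
        double x, y, z;            // position in the world coordinate frame
    };

    // Hypothetical flat file format, one marker per line:
    //   patt.hiro 0.08 1.20 0.75 0.00
    std::vector<MarkerInfo> loadMarkerConfig(const std::string& path) {
        std::vector<MarkerInfo> markers;
        std::ifstream file(path);
        std::string line;
        while (std::getline(file, line)) {
            if (line.empty() || line[0] == '#') continue;   // skip comments
            std::istringstream fields(line);
            MarkerInfo m;
            if (fields >> m.patternFile >> m.sizeMeters >> m.x >> m.y >> m.z)
                markers.push_back(m);
        }
        return markers;
    }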

Database

  • Goal: Use abstract description of world model from a database.
  • Motivation: For larger descriptions of the user's environment, a collection of file-based scene descriptions may not be enough. For example, finding a particular scene by filename requires a naming schema that can become quite complex depending on the number and complexity of the scenes. Especially in mobile environments, this can happen quickly.
  • Description: Instead of loading a particular scene from a file, the system has access to a database system. This system contains information about the environment, e.g. in a geographical schema. Part of the information are graphical information and marker information. The system queries for the graphical information that belongs to a discrete database object and passes it on to the rendering component. The same is true for marker information (to the tracking component) and real world objects (e.g. for occlusion).
  • Usability: World models in a database can be combined with graphical models that are also saved in the database. This decouples the application dependent model, e.g. the model for a machine to be repaired, from the graphical information needed to render objects that are part of the model.
  • Consequences: A database server is a heavy-weight component compared to a file system, but it offers more possibilities to model larger environments. Especially useful for mobile applications is a database model based on geographical information system (GIS) concepts, combined with graphical models for concrete objects.
  • Known use: ARVIKA, ArcheoGuide, MARS.

Context Subsystem

The context information must be gathered, processed and distributed to interested components.

Blackboard

  • Goal: Gather and process context information.
  • Motivation: Blackboards are a well-known pattern to gather information from various sources, apply rules, and create new information. This is particularly suited where a lot of raw, low-level data must be refined in several steps into more abstract, higher-level information. Consumers and producers of information are decoupled.
  • Description: Information producers write information to the Blackboard, a central component. Information consumers read data from the Blackboard, process them, and may write new, more abstract information back to the Blackboard (see the sketch below).
  • Usability: Good if information from many sources must be analyzed and filtered.
  • Consequences: Blackboards are a central component which might become a bottleneck. As an advantage, the participating components do not need to know each other.
  • Known use: MIThril, ARVIKA.
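
A minimal blackboard sketch: producers post low-level facts, and registered knowledge sources may refine them into more abstract context information. All names and the example rule are illustrative.

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    class Blackboard {
    public:
        using KnowledgeSource =
            std::function<void(Blackboard&, const std::string&, const std::string&)>;

        void addKnowledgeSource(KnowledgeSource ks) { sources.push_back(std::move(ks)); }

        void post(const std::string& key, const std::string& value) {
            facts[key] = value;
            for (auto& ks : sources) ks(*this, key, value);  // let rules refine the data
        }

        const std::map<std::string, std::string>& allFacts() const { return facts; }

    private:
        std::map<std::string, std::string> facts;
        std::vector<KnowledgeSource> sources;
    };

    int main() {
        Blackboard bb;
        // Rule that refines a raw location reading into a symbolic room name.
        bb.addKnowledgeSource([](Blackboard& b, const std::string& key, const std::string& value) {
            if (key == "raw.position" && value == "2.3 4.1")
                b.post("context.room", "lab");
        });
        bb.post("raw.position", "2.3 4.1");   // producer writes low-level data
    }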

Repository

  • Goal: Provide a central means for information exchange.
  • Motivation: Many components need context information, others produce context information. A Repository is a central component that is well known by all components. It acts as an information store.
  • Description: Components that produce context information write to the repository. Components that are interested in context information read from the repository. The repository uses an addressing schema to manage the information; each kind of data is written and read by providing its address.
  • Usability: Good if data from many sources must be stored and read.
  • Consequences: Similar to Blackboards, Repositories can be a bottleneck. But again the participating components do not need to know each other.

Publisher/Subscriber

  • Goal: Provide a means to distribute context information.
  • Motivation: When context data does not need to be saved for a certain period of time, it can be distributed at once without need for a temporary storage.
  • Description: Context providers connect as publishers to a central messaging service, context consumers as subscribers. The context providers write new context information to a particular channel, which distributes it to the connected subscribers (see the sketch below).
  • Usability: May be used where there is no need to store context information. Could be used in combination with a context repository. Some data may be saved, some delivered at once.
  • Consequences: As with the Blackboard, the central service might become a bottleneck, but publishers and subscribers do not need to know each other.
  • Known use: DWARF, ARVIKA
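
A minimal in-process sketch of the channel-based variant; in a real system, the messaging service would be a separate networked component (e.g. a CORBA event or notification service). Channel and event names are illustrative.

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    // Minimal channel-based publish/subscribe service for context events.
    // Events are delivered immediately and are not stored.
    class EventService {
    public:
        using Subscriber = std::function<void(const std::string&)>;

        void subscribe(const std::string& channel, Subscriber s) {
            channels[channel].push_back(std::move(s));
        }
        void publish(const std::string& channel, const std::string& event) {
            for (auto& s : channels[channel]) s(event);
        }

    private:
        std::map<std::string, std::vector<Subscriber>> channels;
    };

    int main() {
        EventService service;
        // A presentation component adapts to lighting changes.
        service.subscribe("context/light", [](const std::string& e) {
            // e.g. switch to a high-contrast color scheme when it gets dark
            (void)e;
        });
        // A light sensor wrapper acts as context provider.
        service.publish("context/light", "dark");
    }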

Ad hoc

  • Goal: Connect components that need context information directly with source.
  • Motivation: For simple cases a centralized service for data exchange may not be needed. For example, if only location context is needed a direct link to the tracking subsystem may be enough.
  • Description: An interested component directly queries the context producer component or it registers itself as subscriber. The subscriber list is managed by each component privately.
  • Usability: Suited for reduced context sensitivity.
  • Consequences: If the number of context sources and context consumers increases, ad hoc hardwired direct connections lead to unmanageable code. The coupling between consumers and producers is very tight, which makes them strongly interdependent.

Presentation Subsystem

The Presentation subsystem in AR deals mainly with the presentation of three-dimensional information; thus, most of the approaches here are geared specifically to rendering.

VRML Browser

  • Goal: Use a rendering component that can display simple virtual scenes.
  • Motivation: Using a VRML browser is a simple way to display virtual scenes. The standardized VRML format, a markup language for the description of virtual worlds, allows many authoring tools for virtual worlds to be used and existing components that render such descriptions to be reused.
  • Description: Use a third-party VRML browser, often designed as a web browser plugin, to display 3D information. Use the External Authoring Interface (EAI) that is part of the VRML standard to modify the scene and set the viewpoint based on tracking data.
  • Usability: A VRML browser component can be used if the complexity of the scenes is relatively low and the browser is only used as a rendering engine.
  • Consequences: The advantages of using a VRML browser are the standardized format and the reuse of authoring tools and existing components. This allows rapid prototyping of Augmented Reality systems based on VRML scenes. The disadvantages are that the EAI is restricted to relatively simple operations and that tying the VRML browser to the rest of the system may be tedious. Also, the rendering performance of VRML browsers is not as high as that of native OpenGL.
  • Known use: STAR, DWARF

OpenGL

  • Goal: Use a standardized library to render 3D objects and keep maximum flexibility and control.
  • Motivation: OpenGL is the standard low-level library for 3D graphics. While higher-level approaches, particularly scene graphs, provide a more powerful interface for 3D worlds, OpenGL provides the most flexibility to the application programmer. Scene graphs impose their own control flow that user applications have to comply with; by using OpenGL, the developer can implement his own control flow.
  • Description: OpenGL provides low-level 3D constructs. The application developer creates new objects and tells the renderer to display them. With the information from the trackers, the scene can be rendered with the correct viewing direction and distance.
  • Usability: Usable on nearly all systems.
  • Consequences: Modifiability of system is low; changes to the scene lead to a recompilation.
  • Known use: Older ARToolKit versions.

Scene Graph

  • Goal: Use a rendering component that allows more complex and dynamic scenes.
  • Motivation: For the representation of 3D environments, scene graphs have proven to be a reasonable choice. The level of abstraction is higher than with OpenGL, yet they are much more powerful and flexible than VRML browsers with their limited application programming interface. Most scene graph components can read VRML-based descriptions of scenes.
  • Description: Examples are (Open) Inventor, OpenSG, and Open Scene Graph; see the loading sketch after this list.
  • Usability: Use a scene graph if you do not need the low-level graphics access that OpenGL provides but want to render more complex scenes and need more dynamic access than a VRML browser offers.
  • Consequences: Can restrict the possibilities for modeling the application.
  • Known use: ARVIKA, Studierstube
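
A short sketch of how a scene file is typically loaded with Open Inventor; details such as header names may vary between Inventor implementations and versions.

    #include <Inventor/SoDB.h>
    #include <Inventor/SoInput.h>
    #include <Inventor/nodes/SoSeparator.h>

    // Load a scene graph from a file (Inventor or VRML 1.0 format) and
    // obtain the root node that the render traversal will later visit.
    SoSeparator* loadScene(const char* filename) {
        SoDB::init();                       // initialize the Open Inventor database
        SoInput input;
        if (!input.openFile(filename))
            return nullptr;
        SoSeparator* root = SoDB::readAll(&input);   // builds the node tree
        if (root)
            root->ref();                    // keep the scene alive (reference counting)
        return root;
    }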

Proprietary Scene Graph

  • Goal: Use a rendering component that can render scenes and provides a customized programming interface.
  • Motivation: There may be reasons to develop a custom scene graph for rendering, e.g. the interfaces provided by existing ones are not satisfying, or control over the data flow, which is often managed by the scene graph, should be kept.
  • Description: The Tinmith system uses its own scene graph for graphics rendering on top of OpenGL, combined with its own concept for object access through a custom addressing scheme. Each node of the scene graph can be serialized and addressed in the same way as every other object in the system.
  • Usability: A custom scene graph may be useful if the developer wants to control the implementation and the behaviour of the scene graph. For example, in the Tinmith system every object can be made persistent and reached through an addressing scheme; the nodes of the scene graph are objects of the same type and thus have the same capabilities.
  • Consequences: The scene graph has to be developed from the ground up. For example, to make scenes persistent, either a parser for a standard format like VRML has to be developed, or a proprietary format must be used, which prevents reusing scenes created with standard tools such as VRML editors.
  • Known use: Tinmith

Video Transfer

  • Goal: Offload the video rendering to a server, transfer it to the client and present it there.
  • Motivation: To reduce hardware requirements of the client system, use a rendering server in the environment and transfer the completely rendered images to the client. Particularly suited for video see-through AR.
  • Description: The client captures video through one or two head-mounted cameras, encodes and compresses it (e.g. as MPEG-2), and transfers it to the server. The server decompresses the video images, processes them (calculates the camera position and orientation), augments them, and encodes and compresses the result. The images are sent back to the client, decompressed, and shown on the HMD.
  • Usability: This can be used with a good network connection and a strong rendering server.
  • Consequences: The advantage is that the hardware requirements for the clients are very low. Even PDA-class devices can be used. The main disadvantage is increased latency due to the network transfer.
  • Known use: STAR, MR Platform, AR-PDA.

Multiple Viewer Classes

  • Goal: Use different media types for different types of information.
  • Motivation: Although the central requirement of AR is displaying three-dimensional information, other types of output devices such as speech synthesis or wrist-worn displays are also useful.
  • Description: Provide an abstraction layer for different types of viewers (AR, speech, text, etc.) that can handle certain document types, and then supply the viewers with the appropriate documents (see the sketch below).
  • Usability: This approach can be used whenever multiple output media are desired, unless the rendering complexity is so high that the 3D viewer cannot be parameterized.
  • Consequences: Advantages are a modular system design and the possibility of integrating additional output devices at run time or in new systems. The disadvantage is the extra complexity of a viewer abstraction layer.
  • Known use: ARVIKA, DWARF
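
A minimal sketch of the viewer abstraction layer; the viewer classes and document types shown are illustrative.

    #include <memory>
    #include <string>
    #include <vector>

    // Abstraction layer: every output medium is a viewer that declares which
    // document types it can handle.
    class Viewer {
    public:
        virtual ~Viewer() = default;
        virtual bool canHandle(const std::string& mimeType) const = 0;
        virtual void present(const std::string& document) = 0;
    };

    class ArViewer : public Viewer {          // renders 3D / AR scenes
    public:
        bool canHandle(const std::string& t) const override { return t == "model/vrml"; }
        void present(const std::string&) override { /* hand scene to the renderer */ }
    };

    class SpeechViewer : public Viewer {      // reads text aloud
    public:
        bool canHandle(const std::string& t) const override { return t == "text/plain"; }
        void present(const std::string&) override { /* pass text to speech synthesis */ }
    };

    // Dispatch a document to the first viewer that accepts its type; new
    // viewer classes can be registered without touching the application.
    void show(const std::vector<std::unique_ptr<Viewer>>& viewers,
              const std::string& mimeType, const std::string& document) {
        for (const auto& v : viewers)
            if (v->canHandle(mimeType)) { v->present(document); return; }
    }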

