SepGebhardt

Chair for Computer Aided Medical Procedures & Augmented Reality
Lehrstuhl für Informatikanwendungen in der Medizin & Augmented Reality


Evaluation of inside-out and outside-in pose estimation

Pose:

With the word pose we refer to the position and orientation of an object relative to a world coordinate system. For better understanding we will consider a human head as the object. The position represents the three-dimensional location of the head. As the location does not provide any information about the direction in which the head is looking, more parameters are required. A set of (at least three linearly independent) consecutive rotations is used to define the orientation. When talking about augmented reality applications, it is essential to know the pose of the visual sensor that is used to reproject virtual elements onto the image plane of an HMD (head-mounted display).
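
As a simple illustration (not taken from the project code; names are hypothetical), such a 6-DOF pose could be stored like this in C++:

  // Hypothetical illustration of a 6-DOF pose: three translation parameters
  // plus three rotation parameters (e.g. consecutive rotations about x, y, z).
  struct Pose {
      double tx, ty, tz;   // position of the head in world coordinates (mm)
      double rx, ry, rz;   // orientation as three consecutive rotations (rad)
  };

Equivalently, the same information is often kept as a 3x3 rotation matrix plus a translation vector, i.e. a rigid transformation.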

Fiducials:

A fiducial is an object enhancement that simplifies feature recognition. If you use a camera to recognize a human hand, highly complex image recognition algorithms are required, and in practice this often does not work satisfactorily. As the hand is a three-dimensional object and a camera image only represents a down-sampled projection of it, recognizing its pose must be done by a combinatorial analysis of outlines and highlights. In some applications this may work, but for augmented reality such a method is inappropriate, as it is very time-consuming and not very accurate. There are several ways to design a fiducial. One very simple way would be to use adhesive colored markers, which could be put on the back of the hand and on each finger. A set of fiducials forms a target.
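
As a sketch of why fiducials simplify recognition, the following snippet extracts bright, well-separated markers from an 8-bit grayscale image by simple thresholding and centroid computation. It uses OpenCV as an assumed helper library and is an illustration only, not the project's implementation:

  // Sketch: fiducial extraction by thresholding and centroid computation.
  // Illustration of the general idea, not the project's code.
  #include <opencv2/imgproc.hpp>
  #include <vector>

  std::vector<cv::Point2f> extractFiducials(const cv::Mat& gray, double level)
  {
      cv::Mat binary;
      cv::threshold(gray, binary, level, 255, cv::THRESH_BINARY);

      cv::Mat labels, stats, centroids;
      int n = cv::connectedComponentsWithStats(binary, labels, stats, centroids);

      std::vector<cv::Point2f> markers;
      for (int i = 1; i < n; ++i)   // label 0 is the background
          markers.emplace_back(static_cast<float>(centroids.at<double>(i, 0)),
                               static_cast<float>(centroids.at<double>(i, 1)));
      return markers;
  }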

Tracking:

Imagine looking at a fast-moving object, such as a passing car. Of course, you are aware of the car's movement and will therefore intuitively move your eyes and rotate your head so as not to lose sight of it. When doing so, you unconsciously estimate the speed of the car and adapt your own movement to keep the car near the center of your field of view. This may sound easy, but it is a quite complex mathematical problem when tracking objects with technical tools. Tracking may be performed when extracting two-dimensional fiducial data or even when estimating six-degree-of-freedom pose data. For example, when tracking fiducials, you can improve accuracy and compensate for erroneous or missing data at the cost of time. The trick is to perform feature extraction as usual and then compare the result against the predicted one.
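
As a sketch of the prediction idea, assuming a simple constant-velocity model (names and structure are hypothetical, not taken from the project code):

  // Constant-velocity prediction of a tracked marker. A measurement that lies
  // far away from the prediction can be rejected as erroneous, or the
  // prediction can be used to bridge a frame with missing data.
  struct State { double position[3]; double velocity[3]; };

  State predict(const State& s, double dt)   // dt in seconds
  {
      State p = s;
      for (int i = 0; i < 3; ++i)
          p.position[i] += s.velocity[i] * dt;
      return p;
  }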

Requirements:

The main requirements of augmented reality in medical applications are speed, accuracy and tolerance against errors or improper handling. Neglecting any of these aspects results in a positional displacement between real-world and virtual objects. For example, when performing surgery on very thin cerebral blood vessels, it is unacceptable that a projected virtual blood vessel does not match reality. Furthermore, missing real-time capability may cause cybersickness in inexperienced surgeons.

Pose estimation

Inside-out:

Imagine looking at an object with your own eyes through some sort of HMD. There is a target, formed by a set of retro-reflective markers, mounted on an object to be tracked. A combination of an infrared flash and a grayscale (infrared-capable) CCD camera provides you with an image that contains nothing but infrared reflections. This makes feature recognition and extraction much easier than using images in the visible spectrum. With knowledge about the dimensions of the target and its respective markers, it is possible to calculate its position and orientation relative to the human eye. The advantage of this method is that virtual elements are rendered onto the image plane very precisely. However, if you look at a car driving straight away from you, it shrinks into a small point in your field of view, and you will no longer be able to estimate the distance between you and the car.
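
Once the 2D-3D correspondences are known, computing the inside-out pose is the classic PnP problem. The following sketch uses OpenCV's solvePnP as an assumed stand-in for the project's own 2D-3D pose estimation (which is based on Tsai's algebraic method, see the application overview below):

  // Sketch: inside-out pose from the known 3D marker positions of the target
  // and their detected 2D image positions. OpenCV's solvePnP is used here as
  // a stand-in; it is not what the project itself implements.
  #include <opencv2/calib3d.hpp>
  #include <vector>

  void insideOutPose(const std::vector<cv::Point3f>& modelPoints,   // target geometry (mm)
                     const std::vector<cv::Point2f>& imagePoints,   // detected reflections (px)
                     const cv::Mat& cameraMatrix,                   // intrinsics (Tsai/Lenz calibration)
                     const cv::Mat& distCoeffs,
                     cv::Mat& rvec, cv::Mat& tvec)                  // resulting extrinsics
  {
      // Correspondences must already be resolved, i.e. modelPoints[i] <-> imagePoints[i].
      cv::solvePnP(modelPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
  }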

Outside-in:

Now think of another person standing on top of a hill, looking down on you and the moving car. If you asked him to quantify the gap, he would estimate it more accurately than you could. Back to the demo setup: there are a few more (fixed, empirically optimally distributed) flash-and-camera combinations, which are not moved after calibration. Using epipolar constraints and scene reconstruction, in terms of curve-fitting parameter optimization, you get very accurate positional information about any target in the scene. However, the resulting relative orientation between two targets is displaced more strongly, which distorts any projection onto the image plane much more than the usual depth offset of inside-out estimation does.
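
As a sketch of the reconstruction idea, the position of a single marker can be triangulated linearly from two fixed, calibrated cameras. The real system uses more cameras and its own optimized reconstruction; this is only an illustration under that simplifying assumption:

  // Sketch: outside-in reconstruction of one marker position from two fixed,
  // calibrated cameras by linear triangulation (DLT).
  #include <opencv2/core.hpp>

  cv::Point3d triangulateMarker(const cv::Mat& P1, const cv::Mat& P2,   // 3x4 projection matrices (CV_64F)
                                const cv::Point2d& x1, const cv::Point2d& x2)
  {
      // Each observation contributes two linear constraints on the homogeneous point X.
      cv::Mat A(4, 4, CV_64F);
      for (int j = 0; j < 4; ++j) {
          A.at<double>(0, j) = x1.x * P1.at<double>(2, j) - P1.at<double>(0, j);
          A.at<double>(1, j) = x1.y * P1.at<double>(2, j) - P1.at<double>(1, j);
          A.at<double>(2, j) = x2.x * P2.at<double>(2, j) - P2.at<double>(0, j);
          A.at<double>(3, j) = x2.y * P2.at<double>(2, j) - P2.at<double>(1, j);
      }

      cv::Mat X;                     // unit-norm null vector of A = homogeneous solution
      cv::SVD::solveZ(A, X);
      return cv::Point3d(X.at<double>(0) / X.at<double>(3),
                         X.at<double>(1) / X.at<double>(3),
                         X.at<double>(2) / X.at<double>(3));
  }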

Fusion:

There are many more advantages and drawbacks to both methods (such as merging markers or unfavorable scene configurations), which still prevent many augmentation systems from being integrated into critical applications. The long-term goal of this project is to combine both methods using state-based, stochastic and analytical methods. Hoff and Vincent [1] have already demonstrated that such a combination improves pose estimation in terms of variance minimization.
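
As a minimal sketch of the variance-minimization idea, two measurements of the same quantity can be fused by inverse-variance weighting. The project itself fuses full 6-DOF poses with more elaborate methods; this only illustrates why the fused estimate is better than either input:

  // Variance-weighted fusion of two measurements of the same quantity
  // (e.g. one coordinate of a marker position).
  struct Measurement { double value; double variance; };

  Measurement fuse(const Measurement& insideOut, const Measurement& outsideIn)
  {
      // Weight each measurement by the inverse of its variance.
      double w1 = 1.0 / insideOut.variance;
      double w2 = 1.0 / outsideIn.variance;
      Measurement fused;
      fused.value    = (w1 * insideOut.value + w2 * outsideIn.value) / (w1 + w2);
      fused.variance = 1.0 / (w1 + w2);   // always smaller than either input variance
      return fused;
  }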

Application overview

The algorithm (SingleCameraTracker.h/cpp) has been implemented as part of the CAMPAR project applications. The image below (Dataflow.gif) illustrates the application data flow at a rather high level.

The application extracts all visible markers and calculates a rough 3D model by estimating the depth of each recognized marker. This requires intrinsic parameters (Tsai/Lenz) to be provided through the CAMPAR camera model (see Configuration). Afterwards, the estimated 3D model is compared to all permutations of the real (previously calibrated) reference target model. This step ensures that only possible matches are considered in the subsequent 2D-3D matching algorithm, which cuts down calculation time significantly. With the resulting extrinsic parameter solution, the correspondences between image and model data can be resolved easily, which then enables an algebraic algorithm (Tsai) to calculate a more accurate solution.
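
As a simplified sketch of the correspondence pruning step, a permutation of the reference model can be rejected by comparing inter-marker distances. Names and the tolerance are hypothetical, not taken from the project code; only permutations passing such a test would be handed on to the 2D-3D matching:

  // Returns true if the estimated points could correspond to the reference
  // points in the given order, i.e. all pairwise distances agree within tol (mm).
  // Assumes estimated.size() == reference.size().
  #include <cmath>
  #include <cstddef>
  #include <vector>

  struct Point3 { double x, y, z; };

  static double dist(const Point3& a, const Point3& b)
  {
      return std::sqrt((a.x - b.x) * (a.x - b.x) +
                       (a.y - b.y) * (a.y - b.y) +
                       (a.z - b.z) * (a.z - b.z));
  }

  bool permutationPossible(const std::vector<Point3>& estimated,
                           const std::vector<Point3>& reference,
                           double tol)
  {
      for (std::size_t i = 0; i < estimated.size(); ++i)
          for (std::size_t j = i + 1; j < estimated.size(); ++j)
              if (std::fabs(dist(estimated[i], estimated[j]) -
                            dist(reference[i], reference[j])) > tol)
                  return false;
      return true;
  }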

Using the 3D data provided by an optical tracking system (such as the one from ART), the inside-out depth offset can be corrected and missing marker data can be reconstructed. After fusing these sensor measurements, the application uses the Levenberg-Marquardt optimization algorithm to minimize the reprojection error.
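
The quantity minimized by Levenberg-Marquardt is the reprojection error. The following sketch shows how such a cost could be evaluated; OpenCV's projectPoints is used here as an assumed stand-in for the CAMPAR camera model:

  // Mean squared distance between the detected markers and the projection of
  // the model under the current pose. Assumes resolved correspondences, i.e.
  // modelPoints[i] <-> detectedPoints[i].
  #include <opencv2/calib3d.hpp>
  #include <vector>

  double meanSquaredReprojectionError(const std::vector<cv::Point3f>& modelPoints,
                                      const std::vector<cv::Point2f>& detectedPoints,
                                      const cv::Mat& rvec, const cv::Mat& tvec,
                                      const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
  {
      std::vector<cv::Point2f> projected;
      cv::projectPoints(modelPoints, rvec, tvec, cameraMatrix, distCoeffs, projected);

      double sum = 0.0;
      for (size_t i = 0; i < projected.size(); ++i) {
          cv::Point2f d = projected[i] - detectedPoints[i];
          sum += d.x * d.x + d.y * d.y;
      }
      return sum / static_cast<double>(projected.size());
  }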

There are various additional heuristic methods to decrease calculation time, which are not illustrated in the image. For example, before performing the matching algorithms, the extrinsic parameters from the last frame can be used to solve the correspondence problem much faster. This, of course, does not work when the user moves the HMD too quickly. However, by considering the deviation between the last two known solutions, even that restriction may be removed in the future.
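
As a sketch of this heuristic (hypothetical illustration only, not the project's code), the model can be projected with the previous frame's extrinsics and each detected marker assigned to the nearest projection; a gating distance would reject the assignments when the HMD moved too fast:

  // Correspondence guess from the last frame's pose. Returns, per detection,
  // the index of the nearest projected model point.
  #include <opencv2/calib3d.hpp>
  #include <vector>

  std::vector<int> correspondencesFromLastPose(const std::vector<cv::Point3f>& modelPoints,
                                               const std::vector<cv::Point2f>& detected,
                                               const cv::Mat& lastRvec, const cv::Mat& lastTvec,
                                               const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
  {
      std::vector<cv::Point2f> predicted;
      cv::projectPoints(modelPoints, lastRvec, lastTvec, cameraMatrix, distCoeffs, predicted);

      std::vector<int> match(detected.size(), -1);   // index into modelPoints per detection
      for (size_t i = 0; i < detected.size(); ++i) {
          double best = 1e12;
          for (size_t j = 0; j < predicted.size(); ++j) {
              cv::Point2f d = detected[i] - predicted[j];
              double dd = d.x * d.x + d.y * d.y;
              if (dd < best) { best = dd; match[i] = static_cast<int>(j); }
          }
      }
      return match;
  }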

For further information please refer to the literature provided below.
(Image: Dataflow.gif, application data flow)

Application configuration

The application can be fully configured through the corresponding XML file (SingleCameraTracker.xml) without any need to recompile. Each device and parameter is explained in detail in the following listing. Please do not alter the program source files unless absolutely necessary.

Graphic:

This device contains all parameters for visualization, which should be largely self-explanatory.

AviForOSDevice:

In case you recorded the images provided by the framegrabber, you may reload that video to test the inside-out algorithm. Configure the parameters according to your needs. The camera model (which was used for recording) must be provided using Tsai/Lenz. There is no need for extrinsic parameters, as the application estimates those in real time. Please be aware that you will need synchronized 3D data (saved to a text file and read by a FileTrackerDevice) in order to use sensor fusion.

MeteorIIGrabberDevice:

This is the real thing: the infrared grayscale CCD camera mounted on the HMD. For the implementation, the Matrox Meteor-II framegrabber from the current CAMPAR setup has been used. If you wish to use a different camera, you will have to use a completely new device. If not, do not change the parameters ResX, ResY, ColorFormat, SystemNumber, MilFormat and GrabContinuous. The camera model must be provided using Tsai/Lenz.

FileTrackerDevice:

If you use a recorded video to provide the inside-out algorithm with data, you must make 3D data available through this device. Use a buffer size greater than 5 in order not to miss any frames while waiting for the image data from the video. The text file must provide at least two tracked objects, where one ApplicationID must be "Reference" (reference target) and the other "Camera" (target mounted on the HMD).

ARTTrackerDevice:

Essentially, this is the same as the FileTrackerDevice, except that you receive your data via a TCP or UDP socket from a remote machine, which uses DTrack to provide the network with 3D data. Configure the NetConnection section according to your local setup. If you wish to use a different tracking system, you will have to use a completely new device.

Synchronizer:

The parameter WaitForThisDevice must specify the slowest device, which usually is the device that feeds the application with image data. If you plan to use the application for a long time without any break, you should use an external NTP time server to prevent the offset between the timestamps of the executing computer and the tracking system from growing too large. In that case the parameter NTPServer specifies the IP of your favorite NTP time server. The option NTPPollInterval tells the synchronizer how often (in seconds) to correct the clock deviation.

SingleCameraTracker:

This is the application device. Set Graphic, CameraDevice and TrackerDevice according to your setup. The individual parameters:

DebugInputFile / DebugOutputFile: for debugging purposes only; should be left blank.
ReferenceTarget / CameraTarget: should not be changed, unless you did not follow the restrictions of the FileTrackerDevice or ARTTrackerDevice given above.
ReferenceTargetPoint: each entry provides the application with the (calibrated) 3D model data; the order of the declared points is used as given.
Planarity: using the same order, tells the program which of these markers lie on the base plane (1=planar, 0=not planar).
CameraTransformation: the rigid transformation between the target mounted on the HMD and the camera center. If you do not provide this parameter, outside-in estimation will not work, but you can calculate that offset as described in the next section.
EstimationMode: the mode to use initially (0=Inside-out, 1=Outside-in, 2=Fused).
ShowInformation: the information to display at startup (0=Nothing, 1=Reprojection only, 2=Reprojection and textual information, 3=Everything).
MarkerDiameter: the marker diameter in mm.
FlipMarkers: in case you use a recorded video as data source, you might have to flip the resulting vertical image coordinates of each detected marker; use this option to do so.
MarkerThreshold: the intensity level a pixel has to exceed to be recognized as a reflection.
PixelErosion: to smooth each detected marker (noise reduction), pixel erosion is applied; this parameter tells the algorithm how strongly to erode each component.
OptimizationSteps: the number of iterations after which optimization is aborted.
OptimizationCriteria: the minimal (mean squared) reprojection error desired.

Application runtime usage

There are a few interaction features during program execution. By pressing the space bar, you can cycle through the available modes (EstimationMode). The key "i" changes the amount of information to be displayed (ShowInformation). Last but not least, you may press the key "r" while in the fused mode. This triggers the application to record the 3D data of the HMD target and the estimated camera pose. If you press it again, it uses the class HandEyeCalibration to calculate the transformation between the HMD target and the camera center, provided the application has collected more than ten frames.
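
Determining the transformation between the HMD target and the camera center is a classic hand-eye calibration problem (AX = XB). The project uses its own HandEyeCalibration class; as an assumed stand-in, OpenCV's calibrateHandEye (available from OpenCV 4.1 on) could be used like this, with the HMD target playing the role of the "gripper" and the reference target that of the calibration "target":

  // Hypothetical sketch, not the project's HandEyeCalibration class.
  #include <opencv2/calib3d.hpp>
  #include <vector>

  void estimateCameraOffset(const std::vector<cv::Mat>& R_hmd2world,   // HMD target pose per frame (outside-in)
                            const std::vector<cv::Mat>& t_hmd2world,
                            const std::vector<cv::Mat>& R_ref2cam,     // inside-out extrinsics per frame
                            const std::vector<cv::Mat>& t_ref2cam,
                            cv::Mat& R_cam2hmd, cv::Mat& t_cam2hmd)    // camera center -> HMD target
  {
      // One entry per recorded frame; more than ten frames are recommended,
      // as in the application itself.
      cv::calibrateHandEye(R_hmd2world, t_hmd2world, R_ref2cam, t_ref2cam,
                           R_cam2hmd, t_cam2hmd, cv::CALIB_HAND_EYE_TSAI);
  }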

References

[1] W. Hoff, T. Vincent. Analysis of Head Pose Accuracy in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics, Vol. 6, No. 4, October 2000.
[2] C.-P. Lu, G. D. Hager, E. Mjolsness. Fast and Globally Convergent Pose Estimation from Video Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 6, June 2000.
[3] T. Sielhorst. High Accuracy Tracking for Medical Augmented Reality. Diploma thesis, October 2003.
[4] R. Haralick, C. Lee, K. Ottenberg, M. Nölle. Analysis and Solutions of the Three Point Perspective Pose Estimation Problem. IEEE Conference on Computer Vision and Pattern Recognition, pages 592-598, June 1991.
[5] J. Zhang. Tsai's RAC Based Camera Calibration. Applied Sensor Technology lecture script, University of Hamburg, January 2004.
[6] Numerical Recipes. Cambridge University Press. http://www.library.cornell.edu/nr/bookcpdf.html


ProjectForm
Title: Evaluation of inside-out and outside-in pose estimation
Abstract: It has been shown in the past that, for augmented reality applications, inside-out tracking works more accurately than outside-in tracking in terms of image overlay error. However, the distance between the pose measured by an inside-out system and the true pose is larger than the distance between the outside-in pose and the true pose. A demo setup at ART Tracking in Weilheim, using a single tracked camera (inside-out) and a rig of multiple cameras (outside-in), provides us with data to perform further optimization in terms of augmentation accuracy.
Student: Timo Gebhardt
Director: Nassir Navab
Supervisor: Joerg Traub and Marc Schneberger
Type: SEP
Status: finished
Start: 2005/08/01
Finish: 2005/12/31

