The algorithm (SingleCameraTracker.h/cpp) has been implemented as part of the CAMPAR project applications. The image illustrates the application's data flow from a rather high-level point of view. The application extracts all visible markers and calculates a rough 3D model by estimating the depth of each recognized marker. This requires intrinsic parameters (Tsai/Lenz) to be provided through the CAMPAR camera model (see Configuration). Afterwards the estimated 3D model is compared to all permutations of the real (previously calibrated) reference target model. This step restricts the subsequent 2D-3D matching algorithm to plausible matches only, which cuts down calculation time significantly. With the resulting extrinsic parameter solution the correspondences between image and model data can be resolved easily, which then enables an algebraic algorithm (Tsai) to calculate a more accurate solution. Using the 3D data provided by an optical tracking system (such as the one from ART), the inside-out depth offset can be corrected and missing marker data can be reconstructed. After fusing these sensor measurements, the application uses the Levenberg-Marquardt optimization algorithm to minimize the reprojection error. There are various additional heuristics to decrease calculation time which are not illustrated by the image. For example, before performing the matching algorithms, the extrinsic parameters from the last frame can be used to solve the correspondence problem much faster. This, of course, does not work when the user moves the HMD too quickly. However, by considering the deviation between the last two known solutions, even that restriction may be removed in the future. For further information please refer to the literature provided below.
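For reference, the final optimization step is the usual nonlinear least-squares problem over the camera pose; this is the standard textbook formulation, not a quote from the source code. With detected 2D marker centers $p_i$, calibrated 3D reference points $P_i$, the intrinsic (Tsai/Lenz) matrix $K$ and the perspective projection $\pi$, Levenberg-Marquardt minimizes

$$\min_{R,\,t}\;\frac{1}{n}\sum_{i=1}^{n}\left\| p_i - \pi\big(K\,(R\,P_i + t)\big)\right\|^2$$

iterating on the extrinsic pose $(R, t)$ until either OptimizationSteps iterations are exhausted or the error falls below OptimizationCriteria (see the configuration listing below).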
All devices and parameters may be adjusted through the XML configuration file (SingleCameraTracker.xml) without any need to recompile. Each device and parameter is explained in detail in the following listing. Please do not alter the program source files unless absolutely necessary.
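To give an overview before the per-device explanations, a minimal skeleton of the configuration file might look as follows. The device names are the ones discussed below, but the exact element and attribute syntax is an assumption here; consult the SingleCameraTracker.xml shipped with the application for the authoritative layout.

```xml
<!-- Hypothetical sketch of SingleCameraTracker.xml; only the device
     names are taken from this page, the element syntax is assumed. -->
<Configuration>
  <Graphic> ... </Graphic>                             <!-- visualization -->
  <AviForOSDevice> ... </AviForOSDevice>               <!-- recorded video as image source -->
  <MeteorIIGrabberDevice> ... </MeteorIIGrabberDevice> <!-- live HMD camera -->
  <FileTrackerDevice> ... </FileTrackerDevice>         <!-- 3D data from a text file -->
  <ARTTrackerDevice> ... </ARTTrackerDevice>           <!-- 3D data from DTrack via network -->
  <Synchronizer> ... </Synchronizer>                   <!-- timestamp alignment -->
  <SingleCameraTracker> ... </SingleCameraTracker>     <!-- the application device -->
</Configuration>
```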
Graphic:
This device contains all visualization parameters, which should be self-explanatory.
AviForOSDevice:
In case you recorded the images provided by the framegrabber, you may reload that video to test the inside-out algorithm. Configure the parameters according to your needs. The camera model (which was used for recording) must be provided using Tsai/Lenz. There is no need for extrinsic parameters, as the application estimates those in real time. Please be aware that you will need synchronized 3D data (saved to a text file and read by a FileTrackerDevice) in order to use sensor fusion.
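A sketch of what this section could look like; the VideoFile parameter name is purely hypothetical, only the device name and the Tsai/Lenz requirement come from this page.

```xml
<!-- Hypothetical sketch: replay a recorded video as image source. -->
<AviForOSDevice>
  <VideoFile>recording.avi</VideoFile>  <!-- assumed parameter name -->
  <!-- intrinsic Tsai/Lenz camera model used at recording time;
       extrinsics are estimated at runtime and need not be given -->
  <CameraModel type="TsaiLenz"> ... </CameraModel>
</AviForOSDevice>
```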
MeteorIIGrabberDevice:
This is the real thing: the infrared grayscale CCD camera mounted on the HMD. The implementation uses the Matrox Meteor-II framegrabber from the current CAMPAR setup. If you wish to use a different camera, you will have to implement a completely new device. If not, do not change the parameters ResX, ResY, ColorFormat, SystemNumber, MilFormat and GrabContinuous. The camera model must be provided using Tsai/Lenz.
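A plausible shape of this section, using the fixed parameters named above; the values are placeholders and the element syntax is an assumption.

```xml
<!-- Hypothetical sketch; parameter names are from this page, the values
     stand in for the current CAMPAR setup and must not be changed. -->
<MeteorIIGrabberDevice>
  <ResX>640</ResX>                      <!-- image width in pixels -->
  <ResY>480</ResY>                      <!-- image height in pixels -->
  <ColorFormat>Grayscale</ColorFormat>  <!-- infrared grayscale CCD -->
  <SystemNumber>0</SystemNumber>        <!-- framegrabber system index -->
  <MilFormat> ... </MilFormat>          <!-- Matrox Imaging Library digitizer format -->
  <GrabContinuous>1</GrabContinuous>    <!-- grab frames continuously -->
  <CameraModel type="TsaiLenz"> ... </CameraModel> <!-- intrinsics -->
</MeteorIIGrabberDevice>
```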
FileTrackerDevice:
If you use a recorded video to provide the inside-out algorithm with data, you must make the 3D data available through this device. Use a buffer size greater than 5 so that no frames are missed while waiting for the image data from the video. The text file must provide at least two tracked objects, where one ApplicationID must be "Reference" (the reference target) and the other "Camera" (the target mounted on the HMD).
ARTTrackerDevice:
Essentially, this is the same as the FileTrackerDevice, except that you receive your data through a TCP or UDP socket from a remote machine, which uses DTrack to provide the network with 3D data. Configure the section NetConnection according to your local setup. If you wish to use a different tracking system, you will have to implement a completely new device.
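A sketch of the network side; the NetConnection section is named on this page, but its inner parameters are assumptions for a typical DTrack setup.

```xml
<!-- Hypothetical sketch: 3D data received from a remote DTrack machine. -->
<ARTTrackerDevice>
  <NetConnection>
    <Protocol>UDP</Protocol>              <!-- TCP or UDP -->
    <RemoteHost>192.168.0.10</RemoteHost> <!-- machine running DTrack -->
    <Port>5000</Port>                     <!-- placeholder port number -->
  </NetConnection>
  <Target ApplicationID="Reference"/>     <!-- same restrictions as FileTrackerDevice -->
  <Target ApplicationID="Camera"/>
</ARTTrackerDevice>
```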
Synchronizer:
The parameter WaitForThisDevice must point to the slowest device, which usually is the device that feeds the application with image data. If you plan to use the application for a long time without interruption, you should use an external NTP time server to prevent the offset between the timestamps of the executing computer and the tracking system from growing too large. In that case the parameter NTPServer specifies the IP address of your preferred NTP time server. The option NTPPollInterval tells the synchronizer how often (in seconds) to correct the clock deviation.
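Putting the three parameters together, a sketch might read as follows (values are placeholders):

```xml
<!-- Hypothetical sketch; parameter names are from this page. -->
<Synchronizer>
  <WaitForThisDevice>MeteorIIGrabberDevice</WaitForThisDevice> <!-- the slowest (image) device -->
  <NTPServer>192.168.0.1</NTPServer>      <!-- IP of your preferred NTP time server -->
  <NTPPollInterval>60</NTPPollInterval>   <!-- correct the clock deviation every 60 s -->
</Synchronizer>
```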
SingleCameraTracker:
This is the application device. Configure Graphic, CameraDevice and TrackerDevice according to your setup. The fields DebugInputFile and DebugOutputFile are for debugging purposes only and should be left blank. The ReferenceTarget and CameraTarget should not be changed, unless you did not observe the restrictions of the FileTrackerDevice or ARTTrackerDevice given above. Furthermore, each ReferenceTargetPoint provides the application with the (calibrated) 3D model data; the declared points are used in the given order. Using the same order, you must tell the program which of these markers lie on the base plane, using the parameter Planarity (1=planar, 0=not planar). The parameter CameraTransformation represents the rigid transformation between the target mounted on the HMD and the camera center. If you do not provide this parameter, outside-in estimation will not work, but you can calculate that offset as described in the next section. EstimationMode tells the application which mode to use initially (0=Inside-out, 1=Outside-in, 2=Fused). With ShowInformation you control what information is displayed at startup (0=Nothing, 1=Reprojection only, 2=Reprojection and textual information, 3=Everything). The MarkerDiameter (in mm) should be self-explanatory. In case you use a recorded video as the data source, you might have to flip the resulting vertical image coordinates of each detected marker; use the option FlipMarkers to do so. The MarkerThreshold defines the intensity level a pixel has to exceed to be recognized as a reflection. To smooth each detected marker for noise reduction, pixel erosion is applied; the parameter PixelErosion tells the algorithm how strongly to erode each connected component. With OptimizationSteps you specify after how many iterations the optimization is aborted. Likewise, the parameter OptimizationCriteria defines the desired minimal (mean squared) reprojection error. A configuration sketch follows below.
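All parameter names in this sketch are from this page, whereas the values and the exact element syntax (in particular how ReferenceTargetPoint coordinates and the Planarity list are written) are assumptions.

```xml
<!-- Hypothetical sketch of the application device. -->
<SingleCameraTracker>
  <Graphic>Graphic</Graphic>
  <CameraDevice>MeteorIIGrabberDevice</CameraDevice>
  <TrackerDevice>ARTTrackerDevice</TrackerDevice>
  <DebugInputFile></DebugInputFile>     <!-- debugging only, leave blank -->
  <DebugOutputFile></DebugOutputFile>   <!-- debugging only, leave blank -->
  <ReferenceTarget>Reference</ReferenceTarget>
  <CameraTarget>Camera</CameraTarget>
  <!-- calibrated 3D model; the order of the points matters -->
  <ReferenceTargetPoint>10.0 20.0 0.0</ReferenceTargetPoint>
  <ReferenceTargetPoint>-15.0 5.0 0.0</ReferenceTargetPoint>
  <ReferenceTargetPoint>0.0 -10.0 30.0</ReferenceTargetPoint>
  <Planarity>1 1 0</Planarity>          <!-- same order: first two points lie on the base plane -->
  <CameraTransformation> ... </CameraTransformation> <!-- HMD target to camera center -->
  <EstimationMode>2</EstimationMode>    <!-- 0=Inside-out, 1=Outside-in, 2=Fused -->
  <ShowInformation>2</ShowInformation>  <!-- reprojection and textual information -->
  <MarkerDiameter>20</MarkerDiameter>   <!-- in mm -->
  <FlipMarkers>0</FlipMarkers>          <!-- set to 1 for recorded videos if necessary -->
  <MarkerThreshold>128</MarkerThreshold><!-- intensity a pixel must exceed to count as a reflection -->
  <PixelErosion>2</PixelErosion>        <!-- erosion strength per connected component -->
  <OptimizationSteps>50</OptimizationSteps>        <!-- abort optimization after 50 iterations -->
  <OptimizationCriteria>0.5</OptimizationCriteria> <!-- stop below this mean squared reprojection error -->
</SingleCameraTracker>
```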
At runtime, the application can be controlled by a few keys: one key switches the current estimation mode (EstimationMode). The key "i" changes the information detail to be displayed (ShowInformation). And last but not least, you may press the key "r" while residing in the fused mode. This triggers the application to record the 3D data of the HMD target and the estimated camera pose. If you press "r" again, the class HandEyeCalibration is used to calculate the transformation between HMD target and camera center, provided the application has captured more than ten frames.
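The underlying computation is the classical hand-eye calibration problem. Writing $A_i$ for the relative motion of the estimated camera pose between two recorded frames and $B_i$ for the corresponding relative motion of the HMD target, the unknown rigid transformation $X$ (the CameraTransformation above) satisfies

$$A_i X = X B_i$$

for every frame pair $i$; solving this over all recorded pairs is the standard formulation (e.g. via the Tsai/Lenz method). Which solver the class HandEyeCalibration actually implements is not stated on this page.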
| ProjectForm | |
|---|---|
| Title: | Evaluation of inside-out and outside-in pose estimation |
| Abstract: | It has been shown in the past that, for Augmented Reality applications, inside-out tracking is more accurate than outside-in tracking in terms of image overlay error. However, the distance between the pose measured by an inside-out system and the true pose is larger than the distance between the outside-in pose and the true pose. A demo setup at ART Tracking in Weilheim, using a tracked single camera (inside-out) and a rig of multiple cameras (outside-in), provides us with data to perform further optimization in terms of augmentation accuracy. |
| Student: | Timo Gebhardt |
| Director: | Nassir Navab |
| Supervisor: | Joerg Traub and Marc Schneberger |
| Type: | SEP |
| Status: | finished |
| Start: | 2005/08/01 |
| Finish: | 2005/12/31 |