Automatic Marker Field Calibration

Sanni Siltanen (contact author), Mika Hakkarainen, Petri Honkamaa

VTT Technical Research Centre of Finland, Address: P.O. Box 1000, 02044 VTT, Finland, Email: firstname.lastname@vtt.fi

Keywords: Augmented Reality, Marker field, Marker board, Autocalibration, ARToolKit

Abstract

In many augmented reality applications a marker field (a set of markers) is used to define the 3D coordinate transformation between the camera and the virtual objects. In most systems, the physical position and pose of each marker relative to the others need to be measured in advance. This calibration process is often time consuming and inaccurate if done by hand. We present an automatic calibration process for a set of markers forming a marker field. Our automatic calibration can be used without any preparations, and the markers may be placed in any 3D arrangement, including arbitrary angles and slanted planes.

The calibration is a real time process and does not need a separate calibration phase. The user may lay markers randomly in suitable places and start tracking immediately. The accuracy of the system improves as it runs, since the transformation matrices are updated dynamically. The calibration can also be done as a separate calibration stage, and the results can be saved and used later with another application. In our implementation we used ARToolKit and its marker field implementation.

However, the algorithms for creating a marker field can be applied to create any type of feature/marker field mapping, not only ARToolKit-compatible ones.

1 Introduction

Markers are widely used in augmented reality (AR) systems to determine the 3D coordinate transformation between the camera and the real world.

In the simplest case, each virtual object is associated with a specific marker and is shown on top of that marker whenever the marker is detected in the view. As each object is positioned according to a specific marker, there is no need to know the relative positions of the different markers. For example, ARToolKit's simple demo uses this approach [1].

However, a single marker is often not enough for the application. This is the case when, for example, a marker is not visible all the time due to camera movement, or the visualization area is large, as in wide-area indoor applications covering several rooms. In these cases marker fields are widely used. A marker field contains several markers and a predefined global origin. The relative position of each marker with respect to the origin is known, and the same coordinate system is used for all markers. This way, objects can be augmented in the area of the marker field even when some of the markers are not detected or visible. In a simple case all markers are coplanar and near each other; then their relative positions can easily be measured using, for example, a ruler. In this case the marker field (i.e. a marker "board") can be printed on a single poster, which makes moving it to a new place easy.

Yet, for some applications this is not sufficient. When marker poses are arbitrary and/or distances become large, the marker field definition is impossible to determine by hand, or at least the accuracy of the measurements is not acceptable. The calibration of a marker field becomes especially difficult if we want to augment virtual objects in wide-area building environments.

In this paper we introduce an approach for creating marker field definitions automatically. Our approach sets no restrictions on the placement of the markers: they can be rotated to arbitrary angles, located at random distances, and lie on inclined planes. The coordinate origin is defined relative to a so-called base marker. Thus the only requirements are that the markers remain static relative to each other, and that the system is able to detect (during the marker field creation process) at least two markers at the same time, so that each marker is connected through pairs of detected markers to the base marker.

The introduced system allows both pre-creation of the marker field definition and a real-time implementation that lets the augmenting software accept new markers on the fly, improving the accuracy of the field definition dynamically.

2 Related Work

In addition to vision based tracking systems, a variety of sensor based and hybrid tracking methods is available. For example, Intersense uses an inertial tracker together with a vision based system [6]. In their tracking system, the relative position and pose of the inertial tracker and the camera are fixed. Using the information from the inertial tracker, they predict the positions of the markers in the view and thus limit the search window, which speeds up the image analysis part of their system. Magnetic and gyro sensors are also used to stabilize tracking systems [2, 9].

In augmented reality applications the camera is already part of the system, so vision based tracking systems do not require any special device. This partly explains the popularity of vision based tracking systems in AR. Vision based systems can rely on fiducial markers or natural features.

For example, [8, 12] use natural features for simultaneous localization and mapping. Reliable recognition of natural features is difficult and time consuming, and it often causes jittering in the 3D map over time. The initialization of feature based tracking is also complicated. Several systems use a model of the scene, or part of it, for initialization, like [10, 11]. This approach limits use to predefined locations and prevents ad-hoc installation. Furthermore, high-end feature based tracking systems require more processing power, which is a disadvantage especially in mobile applications on lightweight devices.

Altogether, despite recent advances in markerless tracking, marker based systems are still popular and widely used in augmented reality. Marker based systems often need less processing power and are easy to implement. In addition to providing 3D information, markers can carry additional information, for example the ID of the object to be augmented.

Perhaps the most widely used marker based system is ARToolKit [1]. ARToolKit's marker field (called a marker board) does not have an automatic configuration process; instead, it assumes that the user measures the placement of the markers beforehand.

Uematsu and Saito [4, 5] present a purely visual tracking system with multiple planar markers. Their system uses a set of reference images for estimating the extrinsic camera parameters during a separate calibration process, whereas we use all images of the video source to create the marker map and update it dynamically.

3 Creating the Marker Field Definition

We concentrate here on designing a marker based, lightweight, single camera system with no additional sensors.

The automatic marker field creation process can be divided into three or four stages:

1) Marker detection

2) Calculating the relative transformations between marker pairs

3) Calculating the transformations from each marker to the origin

In a real-time system we also have

4) A decision rule for updating the marker field definition.

Each step is described in detail in the following four sections, correspondingly.

3.1 Marker Detection

To initialize the system, we use the real time video stream and browse the marker field area by moving the camera around. Markers are detected, and for each detected marker we save the ID and calculate the transformation matrix T_ID relative to the camera. We also calculate a confidence value c_ID for the calculated transformation matrix. The confidence value indicates how good the detection is, i.e. how much we trust the accuracy of the transformation matrix T_ID.

3.2 Calculating the Relative Transformations between Marker Pairs

When we detect two markers m1 and m2 simultaneously for the first time, we calculate the relative transformation matrix T_{m1 m2} between them, that is

T_{m1 m2} = T_{m2} T_{m1}^{-1}.

The confidence value c_{m1 m2} for T_{m1 m2} is based on the confidence values c_{m1} and c_{m2}. Possible choices include, for instance, their average or minimum.

When the same marker pair is visible again, we update the transformation matrix T_{m1 m2} using a continuous weighted average. When a pair is detected for the Nth time we get

T^{N}_{m1 m2} = (1 / C_N) ∑_{i=1}^{N} c^{i}_{m1 m2} T^{i}_{m1 m2},   where   C_N = C_{N-1} + c^{N}_{m1 m2} = ∑_{i=1}^{N} c^{i}_{m1 m2}.

If the confidence value exceeds a predefined threshold, we no longer update the transformation matrix.
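The weighted-average update above can be sketched as a running average (a minimal numpy sketch; the function and variable names are ours, not the paper's, and element-wise averaging of rigid transforms may require re-orthonormalizing the rotation part in practice):

```python
import numpy as np

def update_pair_transform(T_avg, C, T_new, c_new):
    """Running confidence-weighted average of a pair transform.

    T_avg : current weighted average T^{N-1}_{m1m2} (4x4)
    C     : accumulated confidence sum C_{N-1}
    T_new : newly observed T_{m1m2} (4x4)
    c_new : confidence of the new observation
    Returns the updated (T_avg, C).
    """
    C_next = C + c_new
    # Equivalent to (sum_i c_i T_i) / (sum_i c_i), computed incrementally.
    T_next = (C * T_avg + c_new * T_new) / C_next
    return T_next, C_next
```

The incremental form avoids storing all past observations while yielding the same result as the full sum.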

Our algorithm works as long as the relations between the markers remain static and there are visual connections between pairs of markers to enable the calibration process.

Sometimes markers are detected incorrectly. Therefore, a new marker is added to the marker field only after it has been detected a predefined number of times (e.g. 30) as a pair with the same marker. Otherwise an occasional false detection would produce a "ghost marker" in the marker field.
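The acceptance rule can be sketched with a simple co-detection counter (illustrative Python; the threshold of 30 follows the value mentioned in section 4, the names are ours):

```python
from collections import defaultdict

ACCEPT_AFTER = 30  # number of co-detections before a marker is trusted

pair_counts = defaultdict(int)
accepted = set()

def observe_pair(known_id, new_id):
    """Count co-detections of (known_id, new_id); accept the new marker
    only after it has been seen ACCEPT_AFTER times with the same
    known marker. Returns True once the new marker is accepted."""
    pair_counts[(known_id, new_id)] += 1
    if pair_counts[(known_id, new_id)] >= ACCEPT_AFTER:
        accepted.add(new_id)
    return new_id in accepted
```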

3.3 Calculating the Transformations between each Marker and the Origin

At some point we know enough relative transformations between marker pairs to solve the transformations between each marker and the base marker. For this purpose we organize the marker information as a graph. Each marker represents a node, and if we have the transformation between two markers, we draw a path between them, see figure 1a. We also associate a weight value with each path. This value indicates the cost of the path in question and is used to find the shortest path. Possible weight values for this purpose are considered later.

Figure 1. a) A graph of the installation presented in figure 6, before applying Dijkstra. b) A tree with weights after applying Dijkstra.

The next step is to find the shortest path from each marker to the base marker. We use Dijkstra's algorithm, as presented in [3], to solve the single-source shortest-path problem. After applying Dijkstra's algorithm, we have a tree representation of the marker field, see figure 1b.
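The shortest-path step can be sketched as a standard Dijkstra over the marker-pair graph (plain Python; the edge dictionary and marker IDs in the test are illustrative, not the paper's measured data):

```python
import heapq

def dijkstra(edges, base):
    """Single-source shortest paths from the base marker.

    edges : {(i, j): weight} for undirected marker-pair connections
    base  : ID of the base marker
    Returns (parent, dist): parent links describing the shortest-path
    tree rooted at the base, and the path cost to each marker.
    """
    adj = {}
    for (i, j), w in edges.items():
        adj.setdefault(i, []).append((j, w))
        adj.setdefault(j, []).append((i, w))
    dist = {base: 0.0}
    parent = {base: None}
    heap = [(0.0, base)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    return parent, dist
```

The parent links are exactly the tree of figure 1b: following them from any marker leads back to the base marker along the cheapest chain of pair transforms.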

We get the transformation between the origin and a marker simply by multiplying the transformation matrices along the shortest path connecting them. Thus, the transformation from the base marker b to the marker m is

T_{bm} = T_{b m1} T_{m1 m2} T_{m2 m3} ... T_{m_n m},

where m1 ... m_n belong to the path from the base marker to the marker m. The origin is defined relative to the base marker; thus, if we denote the transformation from the origin to the base marker by T_{ob}, the transformation from the origin to the marker m is

T_{om} = T_{bm} T_{ob}.

Altogether, these transformation matrices define the marker field. Now we can either save the marker field definition for later use or keep updating it dynamically.
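The chaining of pair transforms along the shortest-path tree can be sketched as follows (a numpy sketch under our own naming; it assumes the pair transforms compose left-to-right from the base, and falls back to the inverse when only the reversed pair is stored):

```python
import numpy as np

def transform_to_base(marker, parent, pair_T):
    """Compose T_bm by multiplying stored pair transforms along the
    shortest-path tree from the base marker to `marker`.

    parent : parent links from Dijkstra (base has parent None)
    pair_T : {(a, b): 4x4 T_ab}; inv(T_ba) is used if only (b, a) exists
    """
    # Walk from the marker up to the base to recover the path.
    path = [marker]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    path.reverse()  # now: base, ..., marker
    T = np.eye(4)
    for a, b in zip(path, path[1:]):
        if (a, b) in pair_T:
            T = T @ pair_T[(a, b)]
        else:
            T = T @ np.linalg.inv(pair_T[(b, a)])
    return T
```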

3.4 Decision Rules for Dynamic Updates

For dynamic updates we need to decide when to recalculate the transformation matrices and when to recalculate the tree representation.

In the simplest case we update the graph representation dynamically and apply Dijkstra every time the graph changes. This is, however, unnecessary, as most changes are not significant enough to alter the actual paths used. For this purpose we have decision rules that determine when the shortest paths need to be recalculated. The basic decision rule is to recalculate only when a new path is added or some of the weight values change by more than a certain threshold.

The transformation matrix naturally needs to be recalculated every time the shortest path changes. Otherwise, we can set a threshold on how much the transformation matrices along the path need to change before we recalculate the total transformation.

Another option would be to use the graph representation only to add new markers to the marker field and initialize the transformation matrices, and then update the transformation matrices using, for example, the augmentation error as described in the next section.

3.5 Augmentation Error

The augmentation error is used to assess the accuracy of the creation process, but it can also be used for updating the transformation matrices.

To visualize the error caused by inaccuracy of the marker field, we augment a cube twice: first with the transformation T* calculated using only the detected marker, and then with the marker field definition (see figure 2). The augmentation error for a cube is

e = (1/8) ∑_{i=1}^{8} || T*_{m} x_i - T_{bm} T_{ob} x_i ||,

where the points x_i are the corners of the augmented cube.
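A minimal sketch of the augmentation error computation (numpy; the cube size and function name are illustrative):

```python
import numpy as np

def augmentation_error(T_direct, T_field, half=0.5):
    """Mean distance between the 8 cube corners augmented via the
    directly detected pose T* (T_direct) and via the marker field
    definition (T_field, i.e. T_bm T_ob composed in advance)."""
    # Homogeneous corner coordinates of a cube of side 2*half, as columns.
    corners = np.array([[sx, sy, sz, 1.0]
                        for sx in (-half, half)
                        for sy in (-half, half)
                        for sz in (-half, half)]).T   # shape (4, 8)
    diff = (T_direct @ corners) - (T_field @ corners)
    return np.mean(np.linalg.norm(diff[:3], axis=0))
```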

Figure 2. The green cube is augmented based on direct detection and the red cube is augmented using the marker field; the difference between them is the augmentation error.

The dynamic update of the marker field can also be based on this augmentation error. The transformation from the position given by the marker field to the directly detected position is

T_{m m*} = T_{m*} T_{m}^{-1}.

If the error exceeds a given threshold, we correct the marker field transformation by moving the result by a factor a towards the detected position (0 < a < 1). The new T_{om} is

T_{om} = T_{µ} T_{om},

where T_{µ} is a transformation matrix which rotates around the same axis as T_{m m*}, by the angle aθ, where θ is the rotation angle of T_{m m*}. In addition, the translation part of T_{µ} is a·x, where x is the translation vector of T_{m m*}. There is always some error in the direct detection, as will be discussed in the measurements section. Hence, if the augmentation error is small enough, there is no point in updating the marker field. And when updating is needed, we want to prevent overreacting; thus we use the factor a.
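The correction step can be sketched with an axis-angle scaling of T_{m m*} (a numpy sketch under our own naming, assuming T_{m m*} is rigid; the rotation is rebuilt with Rodrigues' formula):

```python
import numpy as np

def blend_toward_detection(T_om, T_corr, a=0.2):
    """Move the marker-field pose a fraction `a` toward the detected
    pose: build T_mu, which rotates about the axis of T_corr (= T_mm*)
    by a * theta and translates by a * x, then return T_mu @ T_om."""
    R = T_corr[:3, :3]
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    T_mu = np.eye(4)
    if theta > 1e-8:
        # Rotation axis from the skew-symmetric part of R.
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        ang = a * theta
        # Rodrigues' rotation formula with the scaled angle.
        T_mu[:3, :3] = np.eye(3) + np.sin(ang) * K + (1.0 - np.cos(ang)) * (K @ K)
    T_mu[:3, 3] = a * T_corr[:3, 3]
    return T_mu @ T_om
```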

3.6 Temporary Markers

The basic idea of a marker field is that all relations between the markers and the coordinate origin are defined. Thus, every time at least one of the markers is detected, the whole coordinate system is known. When more markers are visible, the coordinate system can be calculated more accurately. Owing to this nature of marker fields, we can also use temporary markers during the calibration process. To the system, the situation is the same as if the marker were occluded by some other object.

There are a few cases where the use of temporary markers is justified. For example, the user may define the desired coordinate origin with a temporary base marker. Note that the user may also position objects with temporary markers in mid-air. Another case where temporary markers are useful is outdoor applications in a living environment like city squares or streets. For example, if we want the marker field to continue on both sides of a street, we can use a temporary marker in the middle of the street and remove it immediately after the calibration process.

4 Implementation

We used ARToolKit and its marker field implementation for detecting the markers and calculating the transformation matrices.

The hardware used in the tests was a Dell Precision M70 laptop and a Logitech QuickCam for Notebooks Pro. In the error measurement scenario we replaced the camera with a Sony DSR-PD100AP video camera, so we were able to use exactly the same video feed in every case. We have also tested the system with a Sony Vaio UX. The ARToolKit version was ARToolKit Professional (commercial) 4.0645, which we slightly modified to support immediate marker field updating instead of reading the marker field information from a file.

We calibrated the camera with ARToolKit's camera calibration program to obtain the intrinsic camera parameters. In our application we accepted a new marker to the marker field after it had been detected 30 times simultaneously with the same marker.

4.1 User Input

The user can indicate which marker ID is used as the base marker; by default we use the marker with the smallest ID. A good choice for the base marker is one in the central part of the marker field. This way the paths from the base marker to any other marker do not become too long.

The default placement for the marker field origin is on the base marker, with axes along the detected marker's axes. Should some other orientation or location be more convenient for some reason, an additional transformation may be applied to move the origin to the desired location and orientation.

4.2 Finding the Shortest Path

We use Dijkstra's algorithm to find the best marker-pair chain for solving the transformation from each marker to the origin. For this purpose we can assign a cost to using each transformation.

We tested a few different cost models. ARToolKit returns an error value for the marker detection; we used its inverse as the confidence value in this example. The confidence value could be composed of all factors affecting detection accuracy, like the distance of the marker, the sharpness or focus of the image, etc. We have also considered using the actual augmentation error as a cost measure.

On the other hand, we have noticed that we get useful results even when all weights are equal to one; this means that we search for the path with the least number of nodes. However, using meaningful cost values in the graph allows more dynamic changes in the path and more optimal results.

Table 3 presents the weights of the test run with the configuration shown in figure 6. If there has been a visual connection between markers i and j, there is a weight value in the table in row i, column j, or vice versa. For example, we could detect markers 1 and 6 at the same time, but markers 1 and 4 were never visible simultaneously.

The connections in table 3 are presented in graph form in figure 1a. The weights in table 3 are used with Dijkstra's algorithm to create the tree representation shown in figure 1b. This tree shows the shortest path from each marker to the base marker.

Table 3. Measured weights for marker pairs. Row i lists the weights measured between marker i and the markers it was paired with (markers 6 and 7 have no further pairs):

marker 0: 7.25, 4.27
marker 1: 3.92, 5.85
marker 2: 0.67, 43.48
marker 3: 333.33, 0.71, 0.75
marker 4: 6.45, 142.86, 26.32
marker 5: 4.78

4.3 Save and Update

The calibrated marker field definition can be saved to a file, which can be used as an input file for any application using ARToolKit's marker field system. In addition, we made a slight update to the ARToolKit interface to make it possible to update the marker field definition dynamically as well.

Both approaches were tested with various applications.

4.4 Measurements

Measuring all relations correctly in difficult configurations, like the one shown in figure 6, is impossible without special devices. Angles in particular are difficult to measure accurately, yet small errors in rotation values greatly affect the position of objects far away from the rotation centre.

To test how measurement error affects the augmentation error, we placed two markers of size 5 cm x 5 cm at a distance of 37 cm, in line on a plane (figure 5).

                 Marker 0   Marker 1
Exact              3.56       6.60
Automatic          4.45       4.64
1 degree error
  X                3.78       5.98
  Y                3.59       7.00
  Z                3.25       6.79
  All axes         3.74       6.34
5 degree error
  X                4.46       6.96
  Y                8.09       8.49
  Z                3.29      10.24
  All axes         9.32       9.34

Table 4. Median augmentation errors in the test example.

The first test was made using the known (exact) marker field definition (370 mm translation and no rotations). In this case, we assume that the error results purely from the marker detection algorithm, due to imperfect thresholding, edge detection, corner extraction, rounding errors in matrix calculations, etc.

In the second test we used the automatic marker field generation. These are the values to be compared with the other tests.

The rest of the test cases include deliberately introduced errors in the marker field definitions, that is, 1 or 5 degree flaws in the rotation values along the x-, y-, z- and all axes. Figure 2 shows the augmentation error of the last case.

Table 4 shows the median error values over about 400 frames. Marker 0 is the base marker, and the origin is also on that marker. With the predefined marker field (with and without flaws), the error on the base marker is smaller than on marker 1. In contrast, the automatic detection process averages the error out. We also noted that even small errors in rotations have a clear effect on the augmentation error on markers other than the base marker, which was to be expected, as the origin was on the base marker. For those markers our automatic calibration produced clearly smaller errors.

Figure 5. Error measurement setup.

5 Examples

We used the configuration shown in figures 6 and 7 to create the example graphs shown in figure 1. Red arrows in figure 6 indicate the direction of the positive y-axis, and the numbers beside the markers are the IDs.

Figure 6. Example of a small scale marker field test configuration.


In our test configuration we had "walls" in the middle of the marker field, so that we could not detect most of the markers simultaneously. We laid the markers in arbitrary poses, such that measuring the relative positions and rotation angles by hand would have been an impossible task.

Figure 7. Examples of test use.

The image shown in figure 6 was taken from the top of the marker field afterwards, but in real use we moved around the table; screenshots from the calibration are shown in figure 7. On top of every detected marker we augmented a green cube. We also augmented a red cube on top of every marker using the marker field positions. Occlusions were not taken into account, thus the red cubes were augmented also when markers were behind some object.

Figure 8. User interface of the ArMarkerFieldGenerator application, showing detected markers and origin.

We tested our system, for example, with the small scale marker field configuration shown in figure 6, and with the wide area installation shown in figure 9. A screenshot from the ArMarkerFieldGenerator application is shown in figure 8.

The separate calibration process was tested, among others, with our ARMobile application presented in [7]. The visual accuracy of the augmentation was clearly better with the automatic calibration process than when the marker field configuration was measured by hand.

Figure 9. Marker field in wide area indoor application.

6 Conclusions

We tested the algorithm using several marker configurations, from a simple planar setup and markers on the sides of a cube, up to markers on different slanted planes at arbitrary angles. We tested different configurations from small scale, with markers of size 5 cm x 5 cm, up to a large scale environment with markers of size 70 cm x 70 cm. With randomly positioned markers, the augmentation results are clearly more accurate if the automatic marker field generation process is used instead of measuring the marker relations by hand.

There are several choices for the cost model, updating rules, and other parameters in marker field creation and updating. In the future, more comparisons between them should be made to find the optimal choices.

The calibration process is easy to use and allows complex ad-hoc marker setups. This makes the use of multi-marker systems convenient in practice.

The algorithms used for creating the marker field are not marker dependent. The same algorithms could be used with any set of features, e.g. marker corners, optical features, image patches, etc. We provide a solution for ad-hoc use of markers without a troublesome set-up or calibration process.

This makes AR applications convenient to use and brings them into real use. Furthermore, our implementation provides automatic calibration for the widely used ARToolKit marker field.


References

[1] ARToolKit: http://www.hitl.washington.edu/artoolkit/

[2] Migel, Lang, Ganster, Brandner, Stock, Pinz. Hybrid tracking for outdoor augmented reality applications. IEEE Computer Graphics and Applications, pages 54-63, Nov-Dec 2002.

[3] Cormen, Leiserson, Rivest. Introduction to Algorithms. Twenty-third printing 1999, The MIT Press, Cambridge, Massachusetts / London, England.

[4] Uematsu and Saito. AR Registration by Merging Multiple Planar Markers at Arbitrary Positions and Poses via Projective Space. ICAT 2005.

[5] Uematsu and Saito. AR Baseball Presentation System with Integrating Multiple Planar Markers. Advances in Computer Entertainment Technology (ACE06), June 14-16, 2006, Hollywood, California, USA.

[6] Naimark and Foxlin. Circular Data Matrix Fiducial System and Robust Image Processing for a Wearable Vision-Inertial Self-Tracker. ISMAR, 2002.

[7] Woodward, Lahti, Rönkkö, Honkamaa, Hakkarainen, Jäppinen, Siltanen, Hyväkkä. Case Digitalo - A Range of VR/AR Technologies in Construction Application. CIB W78, 2007. To be published.

[8] Davison, Gonzales, Kita. Real-time 3D SLAM with wide-angle vision. IAV04, 2004.

[9] Uchiyama, Takemoto, Yamamoto and Tamura. MR Platform: A basic body on which mixed reality applications are built. In Proc. of ISMAR, pages 246-253, 2002.

[10] Comport, Marchand, Chaumette. A real-time tracker for markerless augmented reality. ISMAR 2003, IEEE Computer Society, Washington, DC, USA.

[11] Bleser, Wuest, Stricker. Online camera pose estimation in partially known and dynamic scenes. ISMAR 2006, IEEE/ACM International Symposium on Mixed and Augmented Reality, Oct. 2006.

[12] Davison, Walterio, Murray. Real-Time Localisation and Mapping with Wearable Active Vision. ISMAR, p. 18, 2003.
