View_Integrator exploits the correspondence between fiducial points (markers) in adjacent views. The user interactively selects corresponding markers in the views to be aligned; the program then estimates the 3D position of each marker centre with sub-pixel accuracy and iteratively minimises the sum of the distances between corresponding centres until a preset threshold is reached. The shape of the surface under measurement suggests which type of marker should be used to define the fiducial points.
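The minimisation over corresponding marker centres amounts to a least-squares rigid registration. The paper does not state the exact scheme used, so the sketch below is an illustrative assumption based on the standard closed-form SVD (Kabsch) solution; the function name and NumPy usage are ours:

```python
import numpy as np

def align_markers(src, dst):
    """Least-squares rigid transform (R, t) mapping src centres onto dst.

    src, dst: (N, 3) arrays of corresponding 3D marker centres.
    Returns the rotation R (3x3), translation t (3,), and the residual
    sum of distances after alignment (illustrative sketch, not the
    paper's actual implementation).
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # correction matrix guaranteeing a proper rotation (det(R) = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    residual = np.linalg.norm(dst - (src @ R.T + t), axis=1).sum()
    return R, t, residual
```

In the noise-free case the residual sum of distances drops to numerical precision in one step; with noisy centre estimates the residual is compared against the preset threshold.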

Placement of ‘hard’ markers: in some cases, circular markers are physically placed on the surface. This approach has the advantage that the object can be moved freely with respect to the optical head and all the views needed for complete coverage can be acquired, with the only constraint that the overlapping regions contain the same set of markers. However, the markers remain visible in the range data, introducing additional noise. Fig. 1 illustrates this experimental case; the object under test is a manikin head. Fig. 1.a shows the marker selection, Fig. 1.b the corresponding 3D range images, and Fig. 1.c their alignment.
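Sub-pixel estimation of a circular marker's centre can be done with an intensity-weighted centroid over the image patch containing the marker. The paper does not specify its estimator, so the following is a minimal sketch under that assumption (function name is ours):

```python
import numpy as np

def subpixel_centre(patch):
    """Intensity-weighted centroid of a patch containing one bright
    circular marker; returns (row, col) with sub-pixel accuracy.

    Illustrative sketch: assumes the marker is the dominant bright
    feature in the patch.
    """
    patch = np.asarray(patch, float)
    patch = patch - patch.min()          # suppress the background offset
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total
```

For a well-isolated marker the centroid recovers the centre to a small fraction of a pixel, which is what makes the subsequent distance minimisation accurate.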

Placement of ‘soft’ markers: as shown in Fig. 2.a, the projection of the markers can be turned off during the measurement and turned on for the acquisition of the color/texture information. In this way, the markers do not disturb the surface, and the alignment can be performed more accurately.
Fig. 2.b illustrates the View-Integrator interface during the selection of the markers, and the result of the alignment is presented in Fig. 2.c.

Feature-based selection of the markers: Fig. 3 illustrates how the alignment of the views is performed when neither ‘hard’ nor ‘soft’ markers are used. In this case, the selection of the fiducial points is based on the choice of corresponding features in the images. This task, however, is very time-consuming and critical for the operator, especially when the number of partial views to be aligned is high and when the color information superimposed on the range data does not help, as is the case for the two views shown in Fig. 3.a.

Our approach to this problem is to process the range information with the Canny edge detector. As shown in Fig. 3.b, the 3D images present significant edges that are well enhanced by the filter and dramatically simplify the operator's work. Fig. 3.c shows the effect of the Canny detector and Fig. 3.d the matching between the views.
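The edge-enhancement step can be sketched on a depth map directly. The snippet below is a simplified stand-in for the first stages of the Canny detector (Gaussian smoothing and gradient magnitude, without non-maximum suppression or hysteresis), assuming SciPy's `ndimage` filters; the function name and the relative threshold are ours:

```python
import numpy as np
from scipy import ndimage

def range_edges(depth, sigma=1.5, rel_thresh=0.5):
    """Enhance depth discontinuities in a range image.

    Gaussian smoothing followed by Sobel gradient magnitude and a
    threshold relative to the strongest gradient. A simplified sketch
    of the Canny pipeline's first stages, not a full Canny detector.
    """
    d = ndimage.gaussian_filter(np.asarray(depth, float), sigma)
    gx = ndimage.sobel(d, axis=1)        # horizontal depth gradient
    gy = ndimage.sobel(d, axis=0)        # vertical depth gradient
    mag = np.hypot(gx, gy)
    return mag > rel_thresh * mag.max()  # boolean edge map
```

Applied to a range image, the strong responses fall on the depth discontinuities, which are exactly the features the operator clicks on during marker selection.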