
Francisco Barros et al. / Procedia Structural Integrity 13 (2018) 1993–1998


2.2. Camera recalibration

The two cameras were recalibrated independently with respect to the reference camera system, and their relative calibration was later computed from each camera's position and orientation. The procedure is similar for both cameras, so it is described only for the camera that acquired the image in Fig. 1e; it is easily extrapolated to the camera that acquired Fig. 1f.

Four areas of the printed speckle known to be common to the images in Fig. 1a, 1b and 1e were selected in each of the images before feature detection: the surfaces of the two objects beside the specimen, one to each side, and two regions of the pattern behind the specimen. This step ensures that no features are detected within the region of interest for DIC or on any moving parts, and that the detected areas are well distributed rather than entirely random. It also reduces erroneous matches by enforcing that the same physical regions are matched between images. Then, for each of the areas, SURF features were detected and matched between the images in Fig. 1a and 1b, and their locations were fed to an algorithm for iterative fundamental matrix estimation and elimination of outliers, so as to remove matches that did not obey the epipolar constraint. The same procedure was applied between Fig. 1a and 1e.

The points used for the recalibration, shown in Fig. 2a-c, corresponded to the features from Fig. 1a for which matches were successfully found in the other two images. The world coordinates of the matched points, shown in Fig. 2d, were then obtained from their locations in the images of the reference state and from the calibration of the cameras previously obtained by conventional methods. With these points' world coordinates known, as well as their locations in the image to be calibrated, they were used as calibration targets for the Zhang algorithm.
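The iterative fundamental matrix estimation and epipolar outlier rejection described above can be sketched with a normalized eight-point estimate in plain numpy. This is an illustrative stand-in, not the authors' implementation; function names, the symmetric point-to-line tolerance, and the synthetic correspondences are assumptions.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: move centroid to origin, scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def fundamental_matrix(p1, p2):
    """Normalized eight-point estimate of F with the rank-2 constraint enforced."""
    n1, T1 = normalize(p1)
    n2, T2 = normalize(p2)
    # Each row encodes the epipolar constraint p2^T F p1 = 0 for one match.
    A = np.column_stack([n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
                         n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
                         n1[:, 0], n1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank 2
    return T2.T @ F @ T1                       # undo the normalization

def epipolar_inliers(p1, p2, F, tol=1.0):
    """Flag matches whose symmetric distance to the epipolar lines is below tol (px)."""
    h1 = np.column_stack([p1, np.ones(len(p1))])
    h2 = np.column_stack([p2, np.ones(len(p2))])
    l2 = (F @ h1.T).T     # epipolar lines in image 2
    l1 = (F.T @ h2.T).T   # epipolar lines in image 1
    d2 = np.abs(np.sum(l2 * h2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(l1 * h1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    return np.maximum(d1, d2) < tol
```

The iterative scheme in the paper would alternate these two steps: estimate F from the current match set, discard matches flagged as outliers, and re-estimate until the set stabilizes.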
The calculated parameters were the camera's intrinsic matrix, including skew and aspect ratio, the second-order radial distortion coefficient, and the position and orientation of the camera relative to the world coordinate system.
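A minimal numpy sketch of the camera model whose parameters are listed above (pinhole projection with skew, distinct focal lengths for the aspect ratio, and a single second-order radial term) may clarify what the recalibration estimates. The parameter names and values here are illustrative, not taken from the paper.

```python
import numpy as np

def project(X_world, R, t, fx, fy, skew, cx, cy, k1):
    """Project world points through a pinhole camera with skew,
    aspect ratio (fx != fy) and second-order radial distortion k1."""
    Xc = (R @ X_world.T).T + t                         # world -> camera frame
    x = Xc[:, 0] / Xc[:, 2]                            # normalized image coords
    y = Xc[:, 1] / Xc[:, 2]
    r2 = x**2 + y**2
    xd = x * (1 + k1 * r2)                             # radial distortion
    yd = y * (1 + k1 * r2)
    u = fx * xd + skew * yd + cx                       # intrinsic matrix with skew
    v = fy * yd + cy
    return np.column_stack([u, v])
```

The Zhang algorithm fits exactly these quantities (R, t, fx, fy, skew, cx, cy, k1) so that the projections of the known world points match their detected locations in the image to be calibrated.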

Fig. 2. Detected and matched feature locations: (a,b) reference state in the original calibration; (c) deformed state in the new calibration; (d) world coordinates of the detected points.

2.3. Digital image correlation and coordinate system merging

Subset matching was performed using commercial DIC software with a subset size of 25 and a step size of 10. The image coordinates of the matched subsets, along with the calibration and recalibration results, were used to produce 3D point clouds corresponding to the three image pairs, using the optimal triangulation method outlined in [2]. Using the rotation and translation between camera systems obtained during recalibration, the point clouds from both calibration states can be placed in the same world coordinate system.

3. Results

The detected positions and orientations of the cameras relative to each other and to the specimen are shown in Fig. 3. A view from above is shown, as the camera orientations and alignments were nearly parallel to the ground. The obtained translations and rotations seem consistent with the imposed repositioning and with the differences seen in the images from the different cameras.
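The two operations in Section 2.3 can be sketched in numpy. As a simpler stand-in for the optimal triangulation method of [2], the snippet below uses plain linear (DLT) triangulation, followed by the rigid merge of a point cloud into the reference frame using the rotation and translation recovered during recalibration; all names and camera matrices are illustrative.

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one correspondence from two
    3x4 projection matrices; a stand-in for the optimal method of [2]."""
    A = np.vstack([u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0],
                   u2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector = homogeneous 3D point
    return X[:3] / X[3]

def merge_clouds(cloud_a, cloud_b, R_ab, t_ab):
    """Map cloud_b (expressed in frame B) into frame A using the relative
    rotation/translation obtained during recalibration, then stack both."""
    return np.vstack([cloud_a, (R_ab @ cloud_b.T).T + t_ab])
```

With exact correspondences the DLT solution coincides with the true point; the optimal method of [2] differs mainly in how it handles noisy image coordinates before back-projection.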
