
Stereo-DIC Calibration and Speckle Image Generator Based on FE Formulations


Abstract

Stereo digital image correlation (stereo-DIC) is being accepted by industry as a valid full-field technique for measuring shape, motion and deformation, and it is therefore of utmost importance to provide uncertainties on the obtained measurements. However, the influences on a stereo-DIC measurement are not fully understood; stereo-DIC is a complex optical-numerical process, and it is not always clear how errors propagate through the measurement chain. In order to investigate the magnitude of the different error sources, a simulator for stereo-DIC is proposed. This simulator is able to generate realistic synthetic images as if they were taken during a real set-up, so the error sources can be investigated separately and an optimal set-up can be chosen before any physical test is performed. This paper presents the mathematical approach behind the DIC simulator, including details on how to convert FE displacement-field results to stereo-DIC images. The simulator includes the ability to control the lighting and to create synthetic calibration images. As a verification of the simulator, synthetic images of a bulge test are compared to the underlying FE simulation; synthetic calibration images are validated against experimental calibration studies. Finally, a brief look at how the simulator can be used to assess calibration quality is conducted.

Introduction

Stereo digital image correlation (stereo-DIC) is becoming an accepted technique when it comes to measuring shape, motion and deformation due to its flexibility and ease of use. The accuracy of the technique, however, is not yet fully understood, due to the fact that it is an optical-numerical technique in which many influences, both numerical and experimental, are present. For single-camera, or 2D-DIC, there has been considerable research into the error sources [1–4], including 2D simulators [5, 6] that can be used to investigate error sources by means of simulation and to optimize a test set-up for material identification [7].

This approach is now applied to stereo-DIC in order to improve the knowledge in uncertainty quantification. The first section of this paper covers the principle of the simulator, broken down into consecutive steps that use a finite element mesh, one reference image and data describing the stereo-DIC set-up (extrinsic and intrinsic parameters) to create deformed stereo images. The process includes the following steps: the projection of the mesh in space, the deformation of the image, the addition of lighting effects, the simulation of the depth-of-field (DOF) and the inclusion of camera noise. In “Verification of the Virtual Experiments” the simulator is verified by simulating a bulging experiment in ideal conditions (no camera noise, perfect lighting, perfect focus, perfect calibration data, etc.) and by comparing the imposed deformation field with the one measured from a stereo-DIC measurement using the MatchID platform. This section is followed by an uncertainty study of the calibration process based on a bootstrap Monte-Carlo, which can be found in “Validation of the Calibration Image Creation Against Experimental Results”. For this section the authors assume that a flat calibration plate is used (common in both commercial and open-source calibration packages) and not a “3D” calibration plate in which the fiducials are at different heights, since that would fall outside the scope of the article. Ref. [8] is mainly followed in this section, to verify the simulator on the one hand and to confirm the experimentally found data with simulated data on the other. In [8] a library of images was created by manually taking 1000 to 2000 images for a roll motion (rotating the target around its vertical axis), a twist motion (rotating the target around the horizontal axis) and a plunge motion (moving throughout the field-of-view). Images were picked from this library in order to perform a Monte-Carlo study of the calibration uncertainty. The same approach is followed here, but using simulated images instead. The main advantage of simulating data is that it is far less time- and labour-consuming, but more importantly, the true values are exactly known, making it possible to simulate a wide range of “real-life” DIC set-ups without performing numerous experiments. Experimentally obtained results were used to further validate the behaviour of the simulated data. The influence of the calibration data on the stereo-DIC position and displacement measurements falls outside the scope of this paper and will be handled in subsequent publications.

Virtual Experiments in Stereo-Vision DIC

In this paper virtual stereo-DIC experiments are performed by means of an image generator that is able to produce images as if they were taken during a real experiment. This implies that a myriad of influences are accounted for (e.g. camera properties, lighting, focus, etc.). By simulating the imaging process one has access to ground-truth data and thus to the deformation field encapsulated in the images. This enables one to investigate the effect of different influences (independently of each other) on the calibration quality, the DIC measurement, etc. The developed simulator differs from a previous image generator proposed by Orteu [6] in the way the deformation field is constructed; [6] uses continuous G² test objects, ruling out the use of FE data to simulate an experiment. The focus of the proposed simulator lies in the simulation of mechanical experiments, so FE simulations are a natural input to the DIC simulator. This has the drawback that non-continuous surfaces are present, degrading the quality of the lighting since artificial flat facets are present in the model. A sufficiently small mesh size should thus be used to reduce lighting errors. The following steps are performed during the virtual experiment: first an image is deformed based on a (projected) mesh, which is extracted from an FE package. This is followed by adding the influence of light, de-focus and noise.

Deforming and Projecting a Reference Image

In order to create deformed images, one needs a reference image (taken during a real test or generated by dedicated software, e.g. [9]), a reference camera from which this image was taken (from now on called “the virtual reference camera”) and a mesh representing the object that will be deformed. The mesh is extracted from an FE package and is projected three times based on the pinhole camera model [10], as shown in Fig. 1. This projection transforms points in 3D space to 2D points on the sensor plane of a camera, including the lens distortions. The first projection transforms the mesh onto the reference image, as seen from the reference camera. The other two projections transform the same mesh (if no deformation is present) or a deformed mesh (representing the deformed state) onto an empty virtual camera image, as seen from the two stereo-cameras in the specified set-up. These projections merely change the nodal positions of the mesh because the cameras are at different positions (thus changing the sensor location of the projected mesh); no deformation is imposed so far. This is followed by the generation of the deformed image based on the mesh, the reference image and the principles of finite element mapping. The mapping, based on Lagrange polynomials, can be described by the following form:

$$ d=\sum\limits_{i=1}^{N} \Phi_{i} \delta_{a} \hat u_{a} $$
(1)

Where d is the displacement, Φ_i is the shape function of the used basis, δ_a is the displacement in direction a and \(\hat u_{a}\) is the unit vector in direction a. A common element type is the linear Q4 element: a bilinear quadrilateral element, based on products of two linear Lagrange polynomials. The displacements in this element can be described by two local coordinates \([\xi, \eta]\) in a square master-element, where \([\xi, \eta] \in [-1, 1]\). If one has the global coordinates of the four nodes and the local coordinates of a point, one can determine the global coordinate of that point by mapping it using equation (1), where the shape functions are:

$$\begin{aligned} \Phi_{1} &= \tfrac{1}{4}(1-\xi)(1-\eta), \qquad \Phi_{2} = \tfrac{1}{4}(1+\xi)(1-\eta),\\ \Phi_{3} &= \tfrac{1}{4}(1-\xi)(1+\eta), \qquad \Phi_{4} = \tfrac{1}{4}(1+\xi)(1+\eta) \end{aligned} $$
(2)
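For concreteness, the forward (local-to-global) mapping of equations (1) and (2) can be written in a few lines. The following is a minimal Python sketch; the function names and node ordering are our own assumptions, not the simulator's API:

```python
import numpy as np

def q4_shape_functions(xi, eta):
    """Bilinear Q4 shape functions of equation (2), with [xi, eta] in [-1, 1]."""
    return 0.25 * np.array([
        (1 - xi) * (1 - eta),   # Phi_1, node at (-1, -1)
        (1 + xi) * (1 - eta),   # Phi_2, node at (+1, -1)
        (1 - xi) * (1 + eta),   # Phi_3, node at (-1, +1)
        (1 + xi) * (1 + eta),   # Phi_4, node at (+1, +1)
    ])

def map_local_to_global(nodes, xi, eta):
    """Map a local point to global coordinates, equation (1).

    nodes: (4, 2) array with the global [x, y] of the four element nodes,
    ordered consistently with the shape functions above.
    """
    return q4_shape_functions(xi, eta) @ nodes

# Example: the element centre maps to the mean of the nodal coordinates.
nodes = np.array([[0.0, 0.0], [2.0, 0.1], [0.1, 1.9], [2.1, 2.0]])
print(map_local_to_global(nodes, 0.0, 0.0))  # -> [1.05, 1.0]
```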
Fig. 1 Camera set up

The deforming process (outlined in Fig. 2) calculates the grey value of each pixel of the deformed image by sampling the grey value from the reference image, based on the element deformation. The use of the back- and forth-mapping has the advantage that possible interpolation errors in this stage cancel. The main error source is the interpolation of the reference image; if high accuracy is needed, a high-resolution image can be used in order to minimize this error source, as stated in [11]. This is in contrast to the methods used so far for deforming images for a 2D DIC set-up [12], where three interpolations were performed. In order to deform images based on an FE mesh, a check must be performed to make sure that the location of the considered pixel lies inside an element of the (projected) deformed mesh (denoted as element e). The requirement for this is that the local coordinates \([{\xi^{e}_{g}}, {\eta^{e}_{g}}] \in [-1, 1]\), where e represents the element and g denotes the fact that the element is in its deformed state. In order to check this requirement the local coordinates have to be calculated from a given global coordinate (the pixel location). The general belief is that this inverse mapping, from global to local, is not directly possible [13]. It is, however, possible for a standard Q4 element (as described in [14]); the authors use a different, generic approach that approximates the inverse mapping with an iterative, updating Taylor expansion, so the inverse mapping can be determined efficiently. To do this the local coordinates are described as in [15]:

$$ (\xi, \eta)=(\xi_{0}, \eta_{0})+(\Delta\xi, \Delta\eta) $$
(3)

Where \((\xi_{0}, \eta_{0})=(0,0)\), so that:

$$\begin{aligned} x_{g} &= X^{e}(\xi_{0}+\Delta\xi,\ \eta_{0}+\Delta\eta) \\ y_{g} &= Y^{e}(\xi_{0}+\Delta\xi,\ \eta_{0}+\Delta\eta) \end{aligned} $$
(4)

A first order Taylor expansion can be performed on this equation:

$$\begin{aligned} x_{g} &= X^{e}(\xi_{0},\eta_{0})+\Delta\xi\, \frac{\partial X^{e}(\xi_{0},\eta_{0})}{\partial\xi} +\Delta\eta\,\frac{\partial X^{e}(\xi_{0},\eta_{0})}{\partial\eta}\\ y_{g} &= Y^{e}(\xi_{0},\eta_{0})+\Delta\xi\, \frac{\partial Y^{e}(\xi_{0},\eta_{0})}{\partial\xi} +\Delta\eta\,\frac{\partial Y^{e}(\xi_{0},\eta_{0})}{\partial\eta} \end{aligned} $$
(5)

If the matrices A, D and X are defined as:

$$ A= \begin{bmatrix} \frac{\partial X^{e}(\xi_{0},\eta_{0})}{\partial\xi} & \frac{\partial X^{e}(\xi_{0},\eta_{0})}{\partial\eta} \\ \frac{\partial Y^{e}(\xi_{0},\eta_{0})}{\partial\xi} & \frac{\partial Y^{e}(\xi_{0},\eta_{0})}{\partial\eta} \end{bmatrix} $$
(6)
$$ D= \begin{bmatrix} \Delta\xi \\ \Delta\eta \end{bmatrix} $$
(7)
$$ X= \begin{bmatrix} x_{g}-X^{e}(\xi_{0},\eta_{0}) \\ y_{g}-Y^{e}(\xi_{0},\eta_{0}) \end{bmatrix} $$
(8)

The equation can be solved in the following way:

$$ [D]=[A]^{-1}[X] $$
(9)
Fig. 2 Deforming process

Since a Taylor expansion is used, only estimated local coordinates are obtained. In order to improve the accuracy of the obtained local coordinates, a Gauss-Newton algorithm is used to iteratively optimize ξ and η: ξ_0 is updated by ξ_i and η_0 by η_i in the above equations when calculating Δξ and Δη, until the convergence criterion \((\Delta\xi, \Delta\eta) \leq 0.01\) is reached. These mapping functions (from global to local and from local to global) are used to deform the images; once the local coordinates and the matching element (in its deformed state) are obtained, the global coordinates of this element in its undeformed state can be determined. The authors refer to [15] for more information regarding the mapping. The grey value of the global coordinate in the (interpolated) reference image is then assigned to the current pixel in the deformed image. A deformed image can thus be created by performing this procedure for all pixels. The process is outlined in Fig. 2 (in which G(X, Y) is the pixel at location (X, Y) in the deformed image), and the mapping between coordinate systems can be seen in Fig. 3, in which M(x_f, y_f) is the mapping function from global to local for coordinates (x_f, y_f), and in which ξ^e(x_g, y_g) and η^e(x_g, y_g) are the local coordinates in element e in the deformed configuration, while ξ^e(x_f, y_f) and η^e(x_f, y_f) are the ones for the undeformed configuration.
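To make the inverse mapping tangible, the sketch below implements the iterative update of equations (3) to (9) in Python. The node ordering and function names are our own assumptions, not the simulator's implementation:

```python
import numpy as np

def q4_inverse_map(nodes, point, tol=0.01, max_iter=20):
    """Iteratively invert the Q4 mapping: global (x_g, y_g) -> local (xi, eta).

    nodes: (4, 2) array of nodal [x, y], ordered as Phi_1..Phi_4;
    point: the global pixel location [x_g, y_g].
    Follows equations (3)-(9): linearize around the current estimate,
    solve D = A^{-1} X, and repeat until (dxi, deta) <= tol.
    """
    xi, eta = 0.0, 0.0  # start at (xi_0, eta_0) = (0, 0)
    for _ in range(max_iter):
        N = 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                             (1 - xi) * (1 + eta), (1 + xi) * (1 + eta)])
        # Shape-function derivatives w.r.t. xi and eta at the current estimate
        dN_dxi  = 0.25 * np.array([-(1 - eta),  (1 - eta), -(1 + eta), (1 + eta)])
        dN_deta = 0.25 * np.array([-(1 - xi),  -(1 + xi),   (1 - xi),  (1 + xi)])
        A = np.column_stack((dN_dxi @ nodes, dN_deta @ nodes))  # equation (6)
        X = point - N @ nodes                                   # equation (8)
        D = np.linalg.solve(A, X)                               # equation (9)
        xi, eta = xi + D[0], eta + D[1]
        if np.all(np.abs(D) <= tol):
            break
    return xi, eta
```

In the deforming loop, each pixel of the deformed image would call such a routine to locate itself inside the deformed mesh, after which the corresponding grey value is sampled from the (interpolated) reference image.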

Fig. 3 Transformation between coordinate systems

Influence of Light, Focus and Noise

Once a deformed image is computed, light influences, de-focus and noise can be added. These items are addressed in this section.

Adding lighting-effects

As lighting is often difficult to control in DIC experiments, it is of utmost importance to include its effects in the virtual experiment: if an object deforms, for example, unwanted reflections can occur and the quality of the DIC measurement will drop. Many algorithms are available in the literature to simulate the influence of light on a surface, the so-called shading algorithms (e.g. flat shading, Gouraud shading, Phong shading, etc. [16]). Each shading algorithm consists of an illumination model and an interpolation technique. The illumination model describes how the object reflects incident light, while the interpolation technique defines which points are used to calculate the light influence (e.g. using the normal at each vertex and then interpolating to a certain point, or interpolating the normal at each point, etc.). The Phong reflection model [17] is chosen in combination with an adapted form of the Gouraud interpolation technique. This reflection model consists of three components: an ambient, a diffuse and a specular component. The ambient component models light that has been reflected by multiple objects before it hits the object of interest; this component is consequently constant over the entire object of interest. The intensity of this ambient component is denoted I_a.

The diffuse component is the result of direct reflection in all directions by the object itself, and it depends heavily on the angle between the light source and the surface of the object. This type of reflectance is known as Lambertian reflectance, and its intensity is proportional to the cosine of the angle between the normal of the surface (N in Fig. 4) and the vector going from the light source to the surface. The intensity of the diffuse light (I_d) can be modelled as:

$$ I_{d} = I_{i}k_{r}\cos\theta $$
(10)

Where I_i is the brightness of the point light source, k_r is the reflection coefficient of the surface, and θ is the angle between the normal of the surface and the vector going from the light source to the surface, as illustrated in Fig. 4.

Fig. 4 Phong lighting

The last component is the specular component, modelling the light directly reflected from the object to the camera, possibly causing saturation. This component is proportional to a power of the cosine of the angle between the camera direction and the direction of the reflected light, and can be defined as:

$$ I_{s} = I_{i}k_{s}(L_{s}\cdot C)^{r} $$
(11)

L_s in equation (11) is the reflected light vector and C is the vector between the camera location and the point where the reflection is being considered, as shown in Fig. 4; r is a shininess constant (which tends to infinity for a perfect mirror).

If all components are added together, and if the cosine of the angle between the normal (N) and the incident light vector (going from the light source to the point being evaluated, denoted L_d in Fig. 4) is written as a dot product, this yields:

$$ I_{total} = I_{a} + I_{i}(k_{r} (L_{d}\cdot N)+ k_{s}(L_{s}\cdot C)^{r}) $$
(12)

This model initially assumes a point light source, shining in every direction with the same intensity and without any distance dependency. An attenuation factor is implemented to overcome the latter:

$$ I = I_{a} + (k_{r} (L_{d}\cdot N)+ k_{s}(L_{s}\cdot C)^{r}) \frac{I_{i}}{1+kd^{2}} $$
(13)

Where k is an attenuation factor and d is the distance between the light source and the point being considered.
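As an illustration, the attenuated Phong model of equation (13) can be evaluated per surface point as in the short Python sketch below. The names, vector-orientation conventions and clamping of negative dot products are our own choices, not the simulator's API:

```python
import numpy as np

def phong_intensity(p, n, light_pos, cam_pos, I_a, I_i, k_r, k_s, r, k):
    """Evaluate the attenuated Phong model of equation (13) at point p
    with unit surface normal n."""
    l = light_pos - p
    d = np.linalg.norm(l)            # distance for the attenuation term
    l = l / d                        # unit vector from the point to the light
    C = (cam_pos - p) / np.linalg.norm(cam_pos - p)   # point -> camera
    L_s = 2.0 * np.dot(l, n) * n - l                  # reflected light vector
    diffuse = k_r * max(np.dot(l, n), 0.0)            # Lambertian cos(theta) term
    specular = k_s * max(np.dot(L_s, C), 0.0) ** r    # shininess exponent r
    return I_a + (diffuse + specular) * I_i / (1.0 + k * d ** 2)
```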

A second type of light source, the spotlight, is introduced to have a directional light source. Aside from its position, the spotlight also has a direction and a spot-cutoff angle (defining the opening angle of the light cone). Note that more advanced lighting techniques exist, for example BRDF-based models [18, 19], but these are computationally more expensive. Example images with the different light components (using the values given in Table 1) can be found in Fig. 5(a) to (e) for the set-up defined in Fig. 6. Similar values for the lighting components were imposed in “Validation of the Calibration Image Creation Against Experimental Results”.

Fig. 5 Example images with different lighting components added, with blue being a higher grey level value and red being a lower grey level value compared to the original image. Please note that the color scales are not the same for each image

Fig. 6 Set up

Table 1 Lighting values inserted in simulator for images Fig. 5(a) to (e)

To prevent pixel saturation, the ambient light intensity I_a is set to a negative value. This is acceptable because the model is phenomenological and uses arbitrary values.

Adding depth of field

A camera always has an aperture (either a physical diaphragm or simply the diameter of the lens) between the lens and the camera sensor. The aperture reduces the amount of light falling onto the sensor, limits the angle of the light rays falling onto the sensor, and thus determines the depth-of-field (defined as the zone of acceptable sharpness in front of and behind the subject on which the lens is focused, further denoted DOF). A lens focuses on only one plane in space; all other points in space produce a blur spot. However, if the blur spot is smaller than the pixel size, the image will appear in focus over a certain range, which defines the DOF. Every point in the DOF will have a blur spot smaller than the acceptable circle of confusion (defined here as the pixel size). The DOF is therefore controlled by the aperture: smaller apertures provide a larger DOF and larger apertures a smaller DOF, with the concurrent change in light at the detector. As can be found in [20], the near limit of the DOF is:

$$ D^{-}=\frac{D \cdot H}{H+D} $$
(14)

Where D is the distance at which the camera is focussed and where H is the hyperfocal distance (the nearest location to the camera where the DOF is equal to infinity). The far limit of the depth of field can be written as:

$$ D^{+}=\frac{D \cdot H}{H-D} $$
(15)

This formula is only valid if D is smaller than H; otherwise the far limit is set at infinity. The hyperfocal distance is defined as:

$$ H=\frac{f^{2}}{N \cdot c}+f $$
(16)

Where f is the focal length, N is the f-number (relating the aperture size to the focal length; e.g. f/3.5 means that the maximum aperture diameter is equal to the focal length divided by 3.5) and c is the acceptable circle of confusion. All points outside the DOF create a blur spot larger than the acceptable circle of confusion and appear de-focussed. The actual size of the blur spot can be calculated from similar triangles, as can be seen in Fig. 7 [20]:

$$ \frac{\vert LR \vert}{V_{D}}=\frac{\vert EB \vert}{V_{D}-V_{P}} $$
(17)

Where:

$$\begin{aligned} V_{D} &= \frac{f\cdot D}{D-f} \\ V_{P} &= \frac{f \cdot P}{P-f} \end{aligned} $$
(18)

This equation can be solved for the circle of confusion (line EB), using the fact that \(LR=\frac{f}{N}\), as follows:

$$ Coc = \vert V_{D} - V_{P}\vert \frac{f}{N \cdot V_{D}} $$
(19)
Fig. 7 Depth of field

Note that the diameter of the blur spot is not symmetrical about the focus plane [20]. In order to implement the DOF, each pixel is convolved with a Gaussian kernel, where the kernel size is equal to the size of the circle of confusion (see equation (19)), converted to pixel units. The use of a Gaussian kernel to implement the DOF is a simplification of the Airy-disk model, which will be implemented in an updated version of the simulator.
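Under the stated assumptions (the thin-lens formulas of equations (14) to (19), and a Gaussian blur as a stand-in for the Airy disk), the DOF bookkeeping can be sketched as below. The mapping from circle of confusion to Gaussian sigma in `defocus` is our own choice, not the simulator's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hyperfocal(f, N, c):
    """Hyperfocal distance, equation (16); all lengths in the same unit."""
    return f ** 2 / (N * c) + f

def dof_limits(D, f, N, c):
    """Near and far DOF limits, equations (14) and (15)."""
    H = hyperfocal(f, N, c)
    near = D * H / (H + D)
    far = D * H / (H - D) if D < H else np.inf  # far limit infinite if D >= H
    return near, far

def circle_of_confusion(D, P, f, N):
    """Blur-spot diameter for a point at distance P with focus at D, equation (19)."""
    V_D = f * D / (D - f)
    V_P = f * P / (P - f)
    return abs(V_D - V_P) * f / (N * V_D)

def defocus(image, D, P, f, N, pixel_size):
    """Blur an image region at distance P with a Gaussian kernel sized by
    the circle of confusion converted to pixels (sigma ~ CoC/2, a choice)."""
    coc_px = circle_of_confusion(D, P, f, N) / pixel_size
    return gaussian_filter(image, sigma=coc_px / 2.0)
```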

Adding noise

The last step in the image generation is the addition of noise to the image. Two noise models are implemented. The first is a simplification, in which the standard deviation of the noise is the same at every grey-level. The second is more realistic in the sense that it takes the grey-level heteroscedasticity into account: for most digital cameras, the standard deviation of the noise depends on the grey-level value of the pixel [21, 22]. Both models add a random grey-level offset to each pixel, drawn from a Gaussian distribution with the desired intensity as mean and either a fixed variance, given as a percentage of the dynamic range of the camera (the simplified model), or a grey-level-dependent variance supplied as an input file containing the variance per grey-level value (the heteroscedasticity model).
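A minimal sketch of both noise models follows, assuming an 8-bit image; the array-based look-up table stands in for the input file mentioned above, and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng()

def add_noise_homoscedastic(image, pct, dynamic_range=256):
    """Simplified model: one noise sigma for all grey-levels, given as a
    percentage of the dynamic range (e.g. pct=2 -> sigma=5.12 for 8 bit)."""
    sigma = pct / 100.0 * dynamic_range
    noisy = image + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, dynamic_range - 1)

def add_noise_heteroscedastic(image, sigma_per_level):
    """Heteroscedastic model: sigma looked up per grey-level value
    (sigma_per_level is a length-256 array, e.g. read from an input file)."""
    sigma = sigma_per_level[image.astype(int)]
    noisy = image + rng.normal(0.0, 1.0, image.shape) * sigma
    return np.clip(noisy, 0, 255)
```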

Verification of the Virtual Experiments

As a first benchmark a bulging experiment is simulated and the imposed deformation field (exported by the simulator in the reference frame of camera 1) is compared to the measured deformation field. The measured displacements are obtained by performing a stereo-DIC measurement on the generated (noise-free) images with perfect calibration data. The MatchID platform was chosen since this software is written by the same research group and the authors thus have access to the same libraries, reducing the programming work through already-validated subroutines (e.g. interpolators). For this verification no noise was imposed on the images in order to reduce errors; the noise-added case was verified separately. Figure 8 represents the different steps needed to create the images and to verify the simulator. The verification was done by comparing the measured displacement of each evaluated pixel with its imposed displacement.
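The comparison itself reduces to per-pixel statistics on the difference of two displacement fields; a sketch of the kind of error summary reported in Tables 4 and 5 (the array shapes are our assumption):

```python
import numpy as np

def displacement_errors(measured, imposed):
    """Per-component error statistics between measured and imposed
    displacement fields (arrays of shape (H, W, 3), in the frame of camera 1)."""
    diff = measured - imposed
    return {"mean": diff.mean(axis=(0, 1)),
            "std":  diff.std(axis=(0, 1)),
            "rms":  np.sqrt((diff ** 2).mean(axis=(0, 1)))}
```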

Fig. 8 Flowchart for comparison of imposed and measured deformation fields

Verification: Results

The generated images represent an ideal bulge experiment: no camera noise, perfectly even lighting, an optimized speckle pattern (speckles that are at least three by three pixels in width and height, smooth gradients across the image, etc.; the authors refer to [10, 23] for more information concerning speckle patterns) and the entire sample in perfect focus with infinite depth-of-field. The bulging experiment itself consists of a flat round plate, clamped at its edges, with a pressure applied on the backside of the plate, resulting in a 5 mm bulge (the mesh is extracted from the Abaqus FE package). The properties of this experiment can be found in Table 2, the DIC settings in Table 3, and the ROI and used subset size in Fig. 9. The discrepancies between the imposed and measured coordinates can be found in Tables 4 and 5, respectively for the undeformed and deformed states of the plate.

Fig. 9 Used ROI (in blue) and subset size (indicated as a yellow square)

Table 2 Properties of bulge test
Table 3 DIC-settings
Table 4 Stereo-DIC versus imposed displacements for the bulge-experiment-undeformed state
Table 5 Stereo-DIC versus imposed displacements for the bulging experiment-deformed state

These data verify the correct working of the image generation, since the obtained errors are much smaller than what one typically obtains in a real experiment. This self-consistent generation of DIC images that, when analysed with the same stereo calibration, reproduce the correct shape in an ideal situation verifies the simulator. Please note that the images are generated as realistically as possible in order to investigate the impact of the various error sources; it is not the purpose to generate absolute ground-truth images.

Validation of the Calibration Image Creation Against Experimental Results

In this section the different influences on the calibration of a stereo set-up are investigated and compared with the experimental data previously obtained in [8]. Several influences are covered, including the number of images, the target size and quality, and the usage of the FOV. Data previously found experimentally in [8] is matched in the following sections, thus validating the calibration image generation. All data was acquired with a Monte-Carlo approach in which images were randomly picked from a generated library.

Influence of the Number of Images on the Acquired Calibration Parameters

When a stereo-DIC set-up is calibrated, multiple images must be used to accurately determine the different parameters; lens distortions in particular need a vast number of images to be modelled correctly. The general belief is “the more, the better”, but how many is enough? In order to investigate this, a bootstrap Monte-Carlo approach was followed according to a previous paper [8], thus duplicating the experimentally found trends (e.g. in Fig. 10) and validating the calibration stage of the simulator. A set of 1000 image-pairs was generated with the simulator; from this set, different groups, each representing a set of calibration images, were randomly picked (with roll/pitch/plunge and all motions together represented by equal numbers of images), and each group was calibrated. Table 6 shows the number of calibration images per group and the number of Monte-Carlo runs. From 75 images per collection onwards, the number of collections is lowered to compensate for the increased calibration time.
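The bootstrap itself is straightforward; the sketch below shows the resampling logic, with `calibrate` standing in for the actual calibration routine (not shown, and not the MatchID API):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def bootstrap_calibrations(library, group_size, n_runs, calibrate):
    """Bootstrap Monte-Carlo over a library of calibration image-pairs.

    library: list of image-pair identifiers (e.g. the 1000 simulated pairs);
    calibrate: user-supplied function mapping a list of pairs to a vector
    of calibration parameters. Returns the per-parameter standard deviation
    over the n_runs resampled calibrations.
    """
    results = []
    for _ in range(n_runs):
        picks = rng.choice(len(library), size=group_size, replace=False)
        results.append(calibrate([library[i] for i in picks]))
    return np.std(np.asarray(results), axis=0)

# Example: study how the scatter drops as the group size grows.
# for n in (10, 25, 50, 75, 100, 150):
#     print(n, bootstrap_calibrations(pairs, n, n_runs=50, calibrate=my_calibration))
```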

Fig. 10 Influence of number of images on selected calibration parameters from a real experiment [8]. Results are similar for the other parameters

Table 6 Number of images and number of groups

There is no bias in the calibration parameters when in-focus images are used, which is consistent with the literature. Our work does show that if there is a large amount of de-focus, some bias in the calibration parameters will be present. Since we limited the images to be in focus at all times, only the variation of the parameters is reported rather than the mean. As can be inferred from Fig. 11(a) to (f), the standard deviation of the calibration parameters drops to a constant value when 100 or more image-pairs are used in a calibration procedure (including bad image-pairs). If, however, bad calibration pairs (in which the minimum required number of detected fiducials is not reached) are omitted from the image set, 50–75 image-pairs yield good results, confirming the conclusion made earlier in [8] that one has to take at least 50 image-pairs for a good calibration. No improvement in accuracy was noticed when more than 100 image-pairs were used. Please note that these results are obtained with a perfect target and an infinite DOF, so the errors will be higher in a real experiment. Another comparison with experimental data was the covariance of the calibration parameters. Figure 12 shows the covariance obtained from the simulated images, which can be compared with the experimentally obtained one in [8], Fig. 6. This plot can be interpreted as follows: two parameters co-vary more strongly when the scatter plot shows a more linear behaviour; they do not co-vary when the scatter plot is oval and the data points are spread all over the plot. For instance, Fx and Fy are clearly linked, since a nearly perfect linear curve is obtained, as indicated in Fig. 12. The data proved to be consistent with the experimentally found data, thus further validating the calibration image generation.

Fig. 11 Influence of number of images on selected calibration parameters. Results are similar for the other parameters

Fig. 12 Covariance in the calibration parameters

Influence of Using the FOV

Another common calibration rule is that the images have to cover the entire FOV in order to obtain correct calibration data. This intuitively feels correct for modelling the lens distortions: the distortion mainly increases near the edge of the FOV, since lens distortions are mainly radial in practice, and it is thus easier to measure and model them correctly when more data points are available in that region (i.e. when the target reaches the edge of the image). Only the influence of the usage of the width of the FOV is investigated; the usage of the depth of the FOV is not included in this section.

In order to validate this against the data available in the literature, a test was performed in which three sets of 50 calibrations were made (with 10, 25 and 50 images per calibration, respectively), and this for different calibration situations. These encompassed the influence of using the entire FOV or only taking images at the centre, not having a good focus (denoted as “blur”), and high or low contrast (denoted as “much light” or “low light”; i.e. high contrast uses more of the available grey-levels, while low contrast uses only a limited amount of them). The consequence of these influences on the accuracy of the radial distortion factor κ1 can be seen in Fig. 13. A better accuracy is obtained if the entire FOV is used, as intuited above. One can also see that a slight de-focus does not cause problems for the calibration. Please note that not using the entire FOV can be identified by checking the epipolar distance (the distance between the epipolar line and the matched location from the cross-correlation), which depends on the quality of the calibration data. Even though the epipolar line is calculated without the lens-distortion data (by using the fundamental matrix), the deformed images will contain lens distortions. These distortions are corrected before the triangulation is performed. However, with poor calibration data this correction will poorly compensate the (usually mainly radial) lens distortions, manifesting itself as a higher epipolar distance near the edges, as can be seen in Fig. 14 (depicting the epipolar distance of the calibration data set with the highest error). This effect can be spotted even if a good calibration score is obtained (see Table 7): the obtained lens-distortion factors differ considerably from the imposed ones in the case of the centre-only images, while the full-FOV ones are quite good, even though the reported calibration scores are very similar. The attentive reader will notice that mainly κ1 improves when calibrating over the entire FOV, while κ3 actually gets worse. This can be explained by the fact that all calibration parameters are solved as one set and that the influence of κ3 is much smaller, since all radial lens distortions are normalized when calculating their influence. It is also clear that the epipolar-distance error indicator is not symmetrical over the x-y plane, since there are also errors in the c_x and c_y parameters, which change the origin of the radial distortions. The reported calibration score is an indicator of the global error in the calibration, allowing local deviations, since the score is based on the reprojection of the dots with the obtained calibration parameters and a least-squares difference between the original world coordinates and the calculated ones.
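As an illustration of this error indicator, the point-to-epipolar-line distance can be computed as follows (the fundamental matrix F and the matched pixel pair are assumed to be given; this is a generic sketch, not the MatchID implementation):

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance (in pixels) from the matched point x2 in image 2 to the
    epipolar line F @ x1 induced by the point x1 in image 1.

    x1, x2: pixel coordinates [u, v]; F: 3x3 fundamental matrix,
    defined such that p2^T F p1 = 0 for a perfect match."""
    p1 = np.array([x1[0], x1[1], 1.0])
    p2 = np.array([x2[0], x2[1], 1.0])
    line = F @ p1                              # epipolar line [a, b, c] in image 2
    return abs(p2 @ line) / np.hypot(line[0], line[1])
```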

Fig. 13 Influences on the accuracy of radial distortion factor κ1

Fig. 14 Not calibrating the entire FOV can result in changing epipolar distances near the edges of the FOV. The reported epipolar distances are in pixels

Table 7 Reported calibration score and calculated lens-distortions

Influence of the Target Quality on the Acquired Calibration Parameters

It is nearly impossible to investigate the influence of the target quality in real life: no target will ever be perfect, and the calibration-target errors are hard to measure. With the simulation software proposed in this paper, however, predefined errors can be added to the target, and one can easily investigate the consequences of a non-perfect target without the large costs of producing high-quality targets.

In order to investigate the importance of the target quality different sets of images were made with the following properties:

  • Each set contains 250 image-pairs.

  • The target is a 9×9 target with 10 mm spacing.

  • Each set represents a specific target quality, going from a target on which each dot can randomly shift by 0.025 mm to a target on which each dot can randomly shift by 0.150 mm. Each set adds 0.025 mm of random offset, so 6 sets are generated. The simulator generates such a faulty calibration target by using a mesh in which the nodes that coincide with the fiducial centres move between the reference state (with a perfect target as reference image) and the deformed state (moving over the FOV); a minimal sketch of this perturbation is given after this list.

  • The dots of the reference target are kept as perfect circles; the authors presume that non-roundness has no influence on the calibration result.

  • The target has a size of about one-quarter of the FOV.

  • The target is moved randomly over the entire FOV and is being rolled/plunged/twisted so lens distortions can be modelled correctly.

  • The cameras have a noise level of 2 % (i.e. the standard deviation of the camera noise, divided by the number of possible grey-level values, is two percent; for an 8-bit camera (2^8 = 256 possible grey-level values) this comes down to a standard deviation of the image noise of 5.12 grey-level counts, since 0.02 × 256 = 5.12).

  • Even lighting conditions are present, i.e. there are no highlights and the image contrast is good.

  • The parameters of the set up can be found in Table 8.

    Table 8 Imposed calibration parameters
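The dot-shift perturbation described in the list above can be sketched as follows. Reading the stated shift as a uniform offset within ± the given bound is our assumption, and the function is a stand-in for the simulator's mesh-based implementation:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def perturbed_target(n=9, spacing=10.0, max_shift=0.025):
    """Nominal n x n dot grid (spacing in mm) with each dot shifted by a
    random offset of at most max_shift mm in x and y, as in the study
    (max_shift = 0.025, 0.050, ..., 0.150 for the six sets)."""
    xs, ys = np.meshgrid(np.arange(n) * spacing, np.arange(n) * spacing)
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (n*n, 2) dot centres
    shifts = rng.uniform(-max_shift, max_shift, grid.shape)
    return grid + shifts
```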

30 calibrations were performed for each target quality in order to investigate its influence on the uncertainty of the obtained calibration parameters. Each calibration used 50 image-pairs, randomly picked from the 250 available image-pairs per target quality, in such a way that the images are balanced over the entire FOV and that rolling/pitching/plunging is present in each collection. All images were calibrated using MatchID-calibration with default settings (i.e. a full-bundle approach as described in [24]). Figure 15 shows the reported calibration score of one single set for each target quality. The reported calibration scores increase with decreasing target quality, as can be expected. The consequence of the decreasing target quality on the measured parameters can be seen in Fig. 16(a) to (f). As expected, the standard deviation increases with poorer grid quality; however, no bias in the obtained calibration parameters is observed. Please note that in this case the centre of each fiducial is randomly shifted and there is no bias in the target; if there were a bias (e.g. a biased distance between different rows of fiducials), the calibration parameters would probably be biased as well. This is, however, not investigated in this paper.

Fig. 15 Reported calibration score at different target qualities

Fig. 16 Influence of target quality on obtained calibration parameters

Based on the results obtained both in this manuscript and in [8], the authors advise to use the lower boundary (1/10th of a pixel) in real experimental circumstances. For more information about the uncertainty in the calibration parameters the authors refer to [8].

Conclusion

The simulator was verified using the self-consistent bulge test and validated against the experiments in [8], thus confirming its usefulness as a tool to validate the DIC process more quickly and consistently. Using the simulator, experience-based rules of thumb can now be checked and updated (if needed). For the calibration of a stereo-DIC set-up, the following guidelines, confirmed both experimentally [8] and with the simulator, can be followed to increase the quality of the obtained parameters:

  • Use a target that covers the entire FOV, so that the lens distortions can be modelled correctly over the whole image (see “Influence of Using the FOV”).

  • Make sure that the spacing between the dots is consistent and well known; if there is a systematic bias in the dot spacing, a systematic error will be present in the obtained calibration parameters when a bundle-approach algorithm is used for the calibration, since such algorithms can only deal with random effects. Additionally, the uncertainty in the dot spacing should be less than 1/10th of a pixel (see “Influence of the Target Quality on the Acquired Calibration Parameters”).

  • Always have images with the following motions in them: roll, twist, plunge, and everything together. The authors refer to [8] for more information regarding this.

  • Move throughout the entire FOV if your target is smaller than the FOV and fill the entire calibration volume (see “Influence of Using the FOV”).

  • Use at least 50 image-pairs for your calibration (see “Influence of the Number of Images on the Acquired Calibration Parameters”).

  • Take more than 50 image-pairs, so that bad pairs (in which the minimum required number of detected fiducials is not reached) can be excluded from the calibration while at least 50 pairs remain available for a good calibration (see “Influence of the Number of Images on the Acquired Calibration Parameters”).

  • Check the epipolar distance errors with a static image-pair before starting your test and recalibrate if needed (see “Influence of Using the FOV”).

  • If a series of tests is performed over a long period of time, re-calibrate in between tests; changes in the set-up due to camera motion, vibrations, heating, etc. can thereby be detected in time. Camera-parameter drift was previously demonstrated in [25].

References

  1. Bornert M et al (2009) Assessment of digital image correlation measurement errors: methodology and results. Exp Mech 49(3):353–370

  2. Wang YQ, Sutton MA, Bruck HA, Schreier HW (2009) Quantitative error assessment in pattern matching: effects of intensity pattern noise, interpolation, strain and image contrast on motion measurements. Strain 45(2):160–178

  3. Lava P, Cooreman S, Coppieters S, De Strycker M, Debruyne D (2009) Assessment of measuring errors in DIC using deformation fields generated by plastic FEA. Opt Lasers Eng 47(7–8):747–753

  4. Pan B et al (2009) Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review. Meas Sci Technol 20(6):062001

  5. Rossi M, Badaloni M, Lava P, Debruyne D, Chiappini G, Sasso M (2014) Advanced test simulator to reproduce experiments at small and large deformations. In: Advancement of optical methods in experimental mechanics, vol 3

  6. Garcia D, Orteu J, Robert L, Wattrisse B, Bugarin F (2013) A generic synthetic image generator package for the evaluation of 3D digital image correlation and other computer vision-based measurement techniques. In: PhotoMechanics

  7. Rossi M, Pierron F (2012) On the use of simulated experiments in designing tests for material characterization from full-field measurements. Int J Solids Struct 49(3–4):420–435

  8. Reu PL (2013) A study of the influence of calibration uncertainty on the global uncertainty for digital image correlation using a Monte Carlo approach. Exp Mech 53(9):1661–1680

  9. Orteu J, Garcia D, Robert L, Bugarin F (2006) A speckle texture image generator. In: Proceedings of SPIE 6341, Speckle06: Speckles, from grains to flowers, 63410H

  10. Sutton MA, Orteu J, Schreier HW (2009) Image correlation for shape, motion and deformation measurements. Springer

  11. Wang Y, Lava P, Debruyne D (2015) Using super-resolution images to improve the measurement accuracy of DIC. In: Optical measurement techniques for systems and structures III, pp 353–361

  12. Wang Y, Lava P, Coppieters S, De Strycker M, Van Houtte P, Debruyne D (2012) Investigation of the uncertainty of DIC under heterogeneous strain states with numerical tests. Strain 48(6):453–462

  13. Weaver W, Johnston PR (1987) Structural dynamics by finite elements. Prentice Hall, p 320

  14. Hua C (1990) An inverse transformation for quadrilateral isoparametric elements: analysis and application. Finite Elem Anal Des 7(2):159–166

  15. Wittevrongel L (2015) A self adaptive algorithm for accurate strain measurements using global digital image correlation. PhD thesis, KU Leuven. https://lirias.kuleuven.be/handle/123456789/506239

  16. Ferguson RS (2013) Practical algorithms for 3D computer graphics. CRC Press, Boca Raton

  17. Phong BT (1975) Illumination for computer generated pictures. Commun ACM 18(6):311–317

  18. Rusinkiewicz SM (1998) A new change of variables for efficient BRDF representation. In: Max N, Drettakis G (eds) Proceedings of the Eurographics Workshop, Vienna. Springer, Vienna

  19. Ashikhmin M, Shirley P (2000) An anisotropic Phong BRDF model. J Graph Tools 5(2):25–32

  20. Potmesil M et al (1982) Synthetic image generation with a lens and aperture camera model. ACM Trans Graph 1(2):85–108

  21. Grediac M, Sur F (2014) 50th anniversary article: effect of sensor noise on the resolution and spatial resolution of displacement and strain maps estimated with the grid method. Strain 50(1):1–27

  22. Reu PL (2015) A realistic error budget for two dimension digital image correlation. Adv Opt Methods Exp Mech 3:189–193

  23. Lecompte D, Smits A, Bossuyt S, Sol H, Vantomme J, Van Hemelrijck D, Habraken AM (2005) Quality assessment of speckle patterns for digital image correlation. Opt Lasers Eng 44(11):1132–1145

  24. Hartley RI, Zisserman A (2004) Multiple view geometry in computer vision. Cambridge University Press, Cambridge

  25. Lava P, Pierron F, Reu PL (2014) DIC course: metrology beyond colors


Cite this article

Balcaen, R., Wittevrongel, L., Reu, P.L. et al. Stereo-DIC Calibration and Speckle Image Generator Based on FE Formulations. Exp Mech 57, 703–718 (2017). https://doi.org/10.1007/s11340-017-0259-1


Keywords

  • Digital image correlation
  • Uncertainty quantification
  • Calibration
  • Optical techniques