SN Applied Sciences, 1:588

Bisection searching based reference frame update strategy for digital image correlation

  • Yunlu Zhang
  • Lei Yan
  • Sreekar Karnati
  • Frank Liou
Research Article
Part of the following topical collections:
  1. Engineering: Digital Image Processing

Abstract

The strategy for updating the reference frame in digital image correlation analysis is an essential but often overlooked problem. A good reference frame update strategy should adjust the frame step to varying practical circumstances, including differences in loading rate, speckle pattern, plastic deformation, imaging system, lighting condition, etc. In this work, a simple but effective bisection searching (BS) strategy is presented to solve this problem. The frame step is halved for the unconverged locations, and the intermediate frame is utilized to assist the correlating process. This process is conducted iteratively to adjust the frame step in different regions automatically. The performance of the BS strategy is evaluated against the constant step (CS) update strategy on simulated experiments. The results indicate that the BS strategy can automatically adjust the frame step for changing speckle pattern and loading rate. The accuracy and robustness of the BS strategy are better than those of the CS strategy with the same pixel level and subpixel level searching algorithms. The BS strategy also successfully tracked all POIs and adjusted the frame step in a real-world experiment with changing loading rate and large plastic deformation (over 70% engineering strain).

Keywords

Digital image correlation · Template update · Reference frame update · Bisection method

1 Introduction

The objective of digital image correlation (DIC) analysis is to track the positions of predefined points, i.e., points of interest (POIs), on the surface of the specimen from the initial state to the last state. From the displacement information of these POIs, 2D or 3D surface deformation, strain field, elongation, crack tip opening displacement, the coefficient of thermal expansion, and other useful quantities could be derived [7].

DIC may be considered one of the visual object tracking algorithms. However, unlike most visual tracking algorithms proposed in the computer vision community, which only output the positions of a bounding box with an expected accuracy of several pixels, experimental metrology applications require that the results from DIC be accurate at the subpixel level. Much research effort has been devoted to improving the accuracy of subpixel level searching algorithms. The inverse compositional Gauss–Newton algorithm (IC-GN) [12] is the most widely adopted subpixel level searching algorithm due to its high accuracy and efficiency. Recently, Zhou et al. [18] proposed an offset subset modification to reduce the systematic error and eliminate subpixel interpolation in the reference image. Su et al. [13] aimed to solve the same problem by introducing position randomness to the subset. For pixel level searching, Zhang et al. [17] suggested a robust feature matching based point set registration approach to generate accurate initial guesses for all POIs in cases with small or large translation, deformation, or rotation.

The surface patterns that carry the deformation information may change considerably due to many unavoidable factors. The first common situation is that new surface features appear under large plastic deformation, such as the Portevin–Le Chatelier effect. Another case is that the thermal radiation of a specimen at high temperature changes the surface features. The varied surface pattern makes it extremely difficult, if not impossible, to directly find the corresponding locations in the last frame from the initial frame, i.e., the decorrelation effect occurs [11].

However, only limited research has been devoted to solving this problem. One trivial solution would be changing the reference frame at a constant frame step, i.e., the constant step (CS) strategy. Nevertheless, the selection of an appropriate step is itself not trivial. If the step is too large, decorrelation can still happen. If the step is too small, the small error generated at each reference frame calculation accumulates, and the results will not be accurate. Besides, the CS strategy cannot account for varying conditions, including loading rate, illumination, noise level, etc.

Pan et al. [11] proposed an incremental reliability guided (IRG) technique regarding this problem. The reliability guided method was introduced in [6]: the initial guess for every POI comes from the converged deformation parameter of the neighbor with the highest zero normalized cross-correlation coefficient (ZNCC). The initial guess of the first POI, called the seed point, is from integer pixel displacement searching. The IRG strategy determines that a new reference frame has to be used only when the ZNCC of the seed point is smaller than a predefined threshold. Tang et al. [15] suggested another approach. They utilized the deformation parameter of the seed point in the \((n-1)\hbox {th}\) frame to initialize the shape of the subset in the reference frame and then matched it in the \(n\hbox {th}\) frame. This method is quite similar to the template update strategy in the visual tracking area [5]. It is equivalent to using the deformation parameter in the \((n-1)\hbox {th}\) frame as the initial guess for the \(n\hbox {th}\) frame. The reference frame update criterion in Tang et al. [15] is a standard deviation of the grayscale residual larger than a predefined value. Another approach, called single step DIC, was proposed by Goh et al. [4]. The loading force and displacement are used to estimate the deformation parameter and initiate subpixel level searching. They claimed that it does not need to update the reference frame.

Although these methods are more flexible than the CS strategy, each one has its limitations. One obvious drawback of the IRG strategy is that each time only the ZNCC of one or several seed points is examined to determine whether the reference frame needs to be updated. It cannot guarantee that all other POIs can be successfully matched with this frame step, due to possible large local deformation. Also, the IRG strategy uses the \((n-1)\hbox {th}\) frame as the current frame if the calculated ZNCC of the seed point in the \(n\hbox {th}\) frame is below the threshold. This strategy of reducing the frame step may present a performance issue in that several consecutive current frames may all fail due to their similar states. In [15], the physical meaning of the criterion on the standard deviation of the grayscale residual is not clear. Single step DIC [4] only works for elastic deformation situations, like the rubber used in the experiment, and relies on additional loading data.

Currently, one trend in DIC experiments is that the analyzed data is only a small part of the captured data. For instance, most industrial cameras can output full resolution images at a rate of 10 to 30 frames per second. In the current work, a bisection searching (BS) strategy is proposed to tackle the reference frame update problem in DIC by utilizing the abundant recorded images. The intermediate frame is automatically utilized to assist the correlation calculation. In this way, the frame step is reduced only for those POIs with large deformation or varying surface pattern, and the frame step is kept at a high value for other POIs to reduce the accumulated error. The idea of utilizing the abundant images coincides with the recently proposed spatial–temporal DIC [3], which exploits the information in neighboring frames to improve the accuracy of the DIC algorithm.

The rest of the paper is organized as follows. The BS strategy is detailed in Sect. 2. The effectiveness of this strategy is studied in simulated and real-world experiments in Sect. 3. The results are discussed in Sect. 4, and conclusions are drawn in Sect. 5.

2 Bisection searching based reference frame update strategy

2.1 Basic principle of digital image correlation

Only the necessary concepts of DIC analysis are presented in this subsection. For a detailed discussion, the readers are referred to [1, 12] and the reviews [9, 10]. Here, subset based IC-GN DIC is presented. However, it should be noted that the BS strategy can also be used with other subset-based or full-field-based DIC algorithms.

Subset-based DIC algorithms can be considered area-based image registration/matching methods. The DIC algorithm tracks the locations of POIs at different stages of the deformation, with known locations in the initial stage. Each POI actually represents a small square image around it, and this small image is designated as the subset. The image frame that contains POIs with known locations is the reference frame, and the image frame in which the deformed locations of the POIs are to be found is the current frame. To describe the deformation of a subset, the first-order approximation is often adopted. In other words, the displacement and the gradient of the displacement are assumed to be constant within the subset. This can be expressed as
$$\begin{aligned} x^*&=x+u+u_x(x-x_0)+u_y(y-y_0)\nonumber \\ y^*&=y+v+v_x(x-x_0)+v_y(y-y_0) \end{aligned}$$
(1)
where \((x,y)\) is the coordinate of a point of the subset in the reference frame, \((x^*,y^*)\) is the coordinate of the corresponding point in the current frame, \((x_0,y_0)\) is the coordinate of the center of the subset, i.e., the POI, in the reference frame, \(u, v\) and \(u_x, u_y, v_x, v_y\) are the displacements and gradients of displacement within the subset, and \(p=(u, v, u_x, u_y, v_x, v_y)\) is the set of deformation parameters that needs to be estimated.
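The first-order shape function of Eq. (1) can be sketched as a short Python function; the function name and argument order are illustrative, not part of the original implementation.

```python
def warp_first_order(x, y, x0, y0, p):
    """Map a reference-frame subset point (x, y) to the current frame
    using the first-order shape function of Eq. (1).

    p = (u, v, u_x, u_y, v_x, v_y) is the deformation parameter set;
    (x0, y0) is the subset centre, i.e., the POI."""
    u, v, ux, uy, vx, vy = p
    dx, dy = x - x0, y - y0
    x_star = x + u + ux * dx + uy * dy
    y_star = y + v + vx * dx + vy * dy
    return x_star, y_star
```

For a pure translation p = (2, 3, 0, 0, 0, 0), every subset point is simply shifted by (2, 3); the gradient terms only contribute away from the subset centre.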
To quantify the similarity between two subsets, the criterion of zero-mean normalized sum of squared difference (ZNSSD) is widely adopted. The ZNSSD coefficient of two subsets is
$$\begin{aligned} C_{\mathrm{ZNSSD}}(\varvec{p})=\sum _{\varOmega }\left( \frac{(F(x,y) -\bar{F})}{\varDelta F}-\frac{(G(x^*,y^*)-\bar{G})}{\varDelta G}\right) ^2 \end{aligned}$$
(2)
where \(\varOmega\) is the set of points in the subset, and \(F(x,y)\) and \(G(x^*,y^*)\) are the intensity values of the points in the reference and current frame. It should be noted that both \((x,y)\) and \((x^*,y^*)\) can be at subpixel positions, so interpolation is required. \(\varDelta F=\sqrt{\sum _{\varOmega } (F(x,y)-\bar{F})^2}\) and \(\varDelta G=\sqrt{\sum _{\varOmega } (G(x^*,y^*)-\bar{G})^2}\) are the fluctuations in the subsets, and \(\bar{F}\) and \(\bar{G}\) are the mean intensity values of the subsets. The possible range of \(C_{\mathrm{ZNSSD}}\) is [0, 4], and a lower \(C_{\mathrm{ZNSSD}}\) represents a higher similarity between the two subsets.
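Equation (2) translates directly into a few lines of numpy; this is a minimal sketch that assumes the two subsets have already been sampled at corresponding (possibly interpolated) positions.

```python
import numpy as np

def znssd(F, G):
    """ZNSSD coefficient of Eq. (2) between two equally sized subsets.

    F and G are arrays of intensity values at corresponding points;
    subpixel interpolation is assumed to have happened upstream."""
    F = np.asarray(F, dtype=float).ravel()
    G = np.asarray(G, dtype=float).ravel()
    dF = F - F.mean()                      # zero-mean
    dG = G - G.mean()
    # normalize by the fluctuations Delta F, Delta G and sum the squares
    return np.sum((dF / np.linalg.norm(dF) - dG / np.linalg.norm(dG)) ** 2)
```

The zero-mean normalization makes the criterion invariant to affine intensity changes (G = aF + b gives 0), and a fully anti-correlated subset reaches the upper bound of 4.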
The objective of DIC, finding the corresponding point in the current frame, is accomplished by finding the deformation parameter p that minimizes the ZNSSD criterion. The optimal deformation parameter p can be calculated by gradient-based iterative algorithms, and IC-GN is one of them. The estimated update of the deformation parameter is
$$\begin{aligned} \varDelta \varvec{p} = -\left[ \nabla \nabla C_{\mathrm{ZNSSD}}(\varvec{p}_0)\right] ^{-1} \nabla C_{\mathrm{ZNSSD}}(\varvec{p}_0) \end{aligned}$$
(3)
where \(\varvec{p}_0\) is the initial value of the deformation parameter or the result from the previous iteration.

Owing to the nature of gradient-based methods, IC-GN can only converge to a local optimum. Therefore, to get the correct subpixel level deformation parameter, an initial guess close to the global optimum should be provided to the subpixel level searching algorithm. This step is called pixel level searching. The integer pixel searching method is commonly adopted. This algorithm calculates the cross-correlation coefficient of a subset at candidate positions located around the POI. The integer displacement to the location with the highest cross-correlation coefficient becomes the initial guess. However, it suffers from the drawback that failure may happen in cases of rotation or non-rigid deformation. Recently, a feature matching based point set registration approach was proposed to generate subpixel level accurate initial guesses for all POIs even in cases with rotation and non-rigid deformation [17].
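The integer pixel search described above can be sketched as a brute-force ZNCC scan; the subset half-width and search radius below are illustrative defaults, not values from the paper.

```python
import numpy as np

def integer_pixel_search(ref, cur, poi, half=10, search=5):
    """Brute-force integer-pixel initial guess around a POI.

    Scans integer displacements within +/-search px and returns the one
    whose subset in `cur` has the highest ZNCC with the reference subset."""
    r, c = poi
    f = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    df = f - f.mean()
    best, best_zncc = (0, 0), -np.inf
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            g = cur[r + du - half:r + du + half + 1,
                    c + dv - half:c + dv + half + 1].astype(float)
            dg = g - g.mean()
            zncc = np.sum(df * dg) / (np.linalg.norm(df) * np.linalg.norm(dg))
            if zncc > best_zncc:
                best, best_zncc = (du, dv), zncc
    return best, best_zncc
```

For a speckle image shifted by a pure integer translation, the scan recovers the shift exactly with ZNCC near 1; under rotation or non-rigid deformation the subset no longer matches any integer shift well, which is exactly the failure mode noted above.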

2.2 Bisection searching

The previous subsection only discussed the tracking problem between two frames. However, in practice, multiple image frames or a continuous video are often used to record the changing states of the specimen. The first benefit of this approach is that the deformation information at these intermediate states can be determined, which may provide useful insights. As will be demonstrated, the result in the intermediate state can assist the tracking process to the last frame. In considering the tracking problem of multiple frames, let
$$\begin{aligned} \varvec{X}^i=\begin{pmatrix} x^i_0 &{} y^i_0 \\ x^i_1 &{} y^i_1 \\ \cdots &{} \cdots \\ x^i_{n-1} &{} y^i_{n-1} \end{pmatrix} \end{aligned}$$
(4)
where \(\varvec{X}^i\) stores the coordinates of all POIs at the frame of index i, \(i \in \{0, 1, \ldots , m-1\}\), m is the total number of frames, and n is the total number of POIs.

As discussed previously, always using the initial frame as the reference frame only works for small deformation cases and is not suitable for many practical situations. In contrast, always updating the reference frame at a constant step generates a large accumulated error if the frame step is too small, and the calculation may fail if the frame step is too large. Moreover, a constant frame step cannot adapt to varying situations.

In principle, a feedback control strategy could be used to solve this problem, similar to the time stepping algorithms in finite element analysis. However, in general, the dynamic model of the DIC error with respect to the frame step is not clear, and it is also dynamic and nonlinear under changing situations. Therefore, this approach is not considered here.

A simple yet effective reference frame update strategy is introduced to solve these issues. This strategy is similar to the bisection method in root finding; therefore, it is called the bisection searching strategy. It tries to directly track the POIs from the initial frame to the last frame. If this fails, the frame step is reduced by half, and this process continues recursively. As illustrated in Fig. 1, with known POIs \(\varvec{X}^0\), the first step is to find the corresponding locations \(\varvec{X}^{100}\); if this fails, it tries to track the POIs from frame 0 to frame 50 and from frame 50 to frame 100. During these two sub-steps, the same strategy is applied if either step does not converge. The criterion to determine whether a tracking step fails is discussed later. In this way, the frame step is reduced automatically in the hard-to-converge steps and kept at a high value in periods with small differences.
Fig. 1

Schematic illustration of the BS strategy in finding appropriate reference frames. The circled numbers are the order of calculation. Dotted lines and solid lines represent failed attempts and successful tracking respectively

Once the reference frames are determined, they can be utilized as landmarks to derive additional displacements at the required frame indexes. These additional frames, together with the reference frames, are called output frames. Figure 2 shows the calculation of additional output frames at a required frame step of 10. Since the tracking calculation succeeded between the reference frames, it is expected to converge with the smaller frame step at the output frames. A two-step procedure is proposed to reduce the need for pixel level searching during output frame calculation by utilizing the calculated displacement information. First, a similar bisection searching step is conducted to generate denser frame indexes with a maximum frame step less than or equal to the desired output step. As shown in Fig. 2, additional results at frames 57, 63, and 69 are generated. Here, dotted lines indicate that the initial guesses of the deformation parameters are estimated by linear interpolation. It should be noted that the reference frame in this case is still frame 50. The second step is to calculate the frames at the required step if they are not available yet. The procedure is similar to the previous step, using linear interpolation to get the initial guess. A full pixel and subpixel level search will be carried out if the number of unconverged POIs is larger than the number calculated from the original reference frames (in this case, from frame 50 to frame 75). The last subfigure shows the overall reference frame calculation relationship. Every additional output frame is calculated directly from its preceding reference frame. In this way, dense output information is derived, and the accumulated error is minimized.
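The linear interpolation used for the initial guesses at output frames can be sketched as follows; the function name and per-POI parameter layout are assumptions for illustration.

```python
import numpy as np

def interpolate_guess(p_ref, p_cur, i_ref, i_cur, i_out):
    """Linearly interpolate deformation parameters between two solved
    frames to seed the subpixel search at an intermediate output frame.

    p_ref and p_cur are parameter vectors (u, v, u_x, u_y, v_x, v_y)
    at frame indexes i_ref and i_cur; i_ref < i_out < i_cur."""
    t = (i_out - i_ref) / (i_cur - i_ref)
    return (1.0 - t) * np.asarray(p_ref, float) + t * np.asarray(p_cur, float)
```

In the Fig. 2 example, the guess at frame 57 blends the solved parameters at frames 50 and 75 with weight t = 7/25; the reference frame for the subsequent IC-GN refinement remains frame 50.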
Fig. 2

Schematic illustration of the strategy to derive additional deformation information at output frames. Solid lines represent the calculation from the reference frame to current frame. Dotted lines represent the linear interpolation to provide an initial guess of deformation parameters

Regarding the criterion of successful tracking, Fig. 3 illustrates the detailed procedure for a typical combination of pixel and subpixel level searching algorithms. With known POIs \(\varvec{X}^i\), the first step is to determine whether the pixel level searching failed. For the feature based point set registration algorithm [17], the convergence criterion is that the change of the estimated deformation field is smaller than a threshold within a certain number of iterations. If it fails, the same BS strategy is utilized on the reduced frame step and recursively solves for all POIs. The middle index \((i+j)/2\) is rounded to the nearest integer in the calculation.
Fig. 3

Schematic illustration of the complete steps of the BS strategy. It consists of the failure criteria of pixel and subpixel level searching algorithm and the corresponding procedure under different cases

If the pixel level searching is successful, then subpixel level searching follows with the determined initial guess. The convergence criterion of the IC-GN algorithm is that the norm of the change of the displacement is less than a threshold within a certain number of iterations [8]. Also, a POI is considered unconverged if its \(C_{\mathrm{ZNSSD}}\) is larger than a threshold \(C_{\mathrm{th}}\). The combined criteria of convergence and \(C_{\mathrm{th}}\) can effectively eliminate false matching. The subpixel level calculation of each POI is independent, so it is possible that some POIs converge while others do not. For the unconverged POIs, the possibility of moving out of the frame should be considered. Here, a simple test is adopted. If the distance of any point in the estimated deformed subset to the boundary of the image is less than 5 px in the current frame, or any point is actually outside the field of the image, the corresponding POI is marked as out of the field and no longer counted as unconverged. The 5 px margin is set for the support domain of interpolation. If there are no unconverged POIs, then the locations of all POIs \(\varvec{X}^j\) at frame j are obtained. Otherwise, a test of the current frame step is conducted. If the current frame step \(j-i\) is only 1, then the not fully converged result has to be returned to terminate the recursive function. If the frame step is larger than 1, the ratio of the number of currently unconverged POIs to the number of previously valid POIs determines whether a bisection searching needs to be conducted on all POIs or only on the unconverged POIs. If the ratio is larger than a threshold \(\eta\), it indicates that a large number of initial guesses are not accurate or the tracked patterns changed considerably, so the subpixel level searching algorithm could not converge. Then a bisection searching is conducted on all POIs. On the other hand, if only a small number of POIs could not converge, then, to avoid introducing accumulated error for most POIs, the bisection searching is conducted only on these POIs.
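The scope decision driven by the ratio threshold \(\eta\) can be condensed into a tiny helper; the function and its return labels are illustrative, but the branch logic follows the Fig. 3 description and the \(\eta = 0.1\) of the implementation notes.

```python
def bisect_scope(n_unconverged, n_valid, eta=0.1):
    """Decide which POIs enter the next bisection step.

    If the unconverged fraction exceeds eta, the initial guesses are
    deemed globally poor and all POIs are re-solved with a halved frame
    step; otherwise only the unconverged POIs are, which limits the
    accumulated error for the POIs that already converged."""
    if n_valid == 0:
        return "all"
    return "all" if n_unconverged / n_valid > eta else "unconverged_only"
```

For example, 5 unconverged POIs out of 100 valid ones (5%) trigger only a local bisection, while 20 out of 100 (20%) trigger a full re-solve.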

2.3 Implementation notes

In total, aside from the parameters of the pixel and subpixel level searching algorithms, only two hyper-parameters need to be set in the BS strategy: the ZNSSD threshold \(C_{\mathrm{th}}\) and the unconverged ratio threshold \(\eta\). They were set to 0.2 and 0.1 respectively in the implementation. A \(C_{\mathrm{th}}\) of 0.2 corresponds to requiring the ZNCC coefficient to be larger than 0.9, which is stricter than the \(C_{\mathrm{ZNCC}}\) threshold of 0.8 in [11]. The unconverged ratio threshold \(\eta\) is another empirical parameter that determines how often the complete bisection searching is conducted. If this threshold is too small, unnecessary bisection searching will be conducted on POIs that already converged with the larger frame step, resulting in a higher accumulated error. If it is too large, it cannot effectively exclude an obviously faulty initial guess that fails to converge for a large number of POIs.
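The equivalence between the two thresholds follows from expanding Eq. (2): for zero-mean, unit-norm subsets, \(C_{\mathrm{ZNSSD}} = 2(1 - C_{\mathrm{ZNCC}})\). A one-line conversion (illustrative helper name) makes the mapping explicit:

```python
def znssd_to_zncc(c_znssd):
    """For zero-mean, unit-norm subsets, expanding the squared
    difference in Eq. (2) gives C_ZNSSD = 2 * (1 - C_ZNCC),
    so a ZNSSD threshold maps directly to a ZNCC threshold."""
    return 1.0 - c_znssd / 2.0
```

A \(C_{\mathrm{th}}\) of 0.2 thus maps to ZNCC 0.9, and the 0.8 ZNCC threshold of [11] corresponds to a looser ZNSSD threshold of 0.4.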

The original point set registration algorithm used in Zhang et al. [17] was the feature guided Gaussian mixture model. It has been switched to the adaptive vector field consensus point set registration method [16]. The latter method automatically determines the suitable regularization parameter and is more robust and versatile.

The adaptive subset algorithm was used instead of the original IC-GN algorithm [18]. The advantage is that it eliminates the interpolation step for the POIs at the subpixel locations, increases the efficiency, and reduces the systematic error.

3 Experiments and results

The performance of the BS strategy was compared with the CS strategy on two simulated experiments. The BS strategy was also applied to a real-world experiment. The IRG strategy was not compared here due to its limitations discussed in the introduction and the incompatibility between the reliability guided approach and the feature matching based pixel level searching algorithm.

3.1 Simulated experiments

Fig. 4

Simulated speckle images under the rotation deformation. The number on the left is the frame index of the images

The deformation in the two simulated experiments was a large pure rotation, which is hard for DIC algorithms to track. The rotation processes were both divided into two stages with different characteristics. In the first experiment, the varying factor was the speed of rotation: the time duration ratio and speed ratio of the two stages were 2:1 and 1:6 respectively. In this way, the adaptability of the reference frame update strategy under a varying loading rate was evaluated. In the second experiment, the actual speckles were replaced gradually to simulate the varying speckle pattern in a plastic deformation situation. By the last frame, \(6\%\) of the speckles had been moved to new random locations following a uniform distribution. The replacing process was also divided into two stages, and the time duration ratio and replacement rate ratio of the two stages were also 2:1 and 1:6 respectively. The rate of rotation in this experiment was kept constant. The specific values of these parameters were set by preliminary experiments to ensure that a moderate amount of bisection searching is required and the total computation time is not too long. The change of the varying factor with the frame index is shown in Fig. 5. The size of the generated images was \(500\times 500\ \hbox {px}\), and the maximum rotation angles in the two experiments were \(\pi /2\) and \(\pi /4\) respectively. The number of simulated images was 300. Some representative frames of the generated speckle images are shown in Fig. 4.
Fig. 5

The relationship between the varying factor and the frame index in the simulated experiments. The vertical dashed lines represent the locations of reference frames determined by the BS strategy. The y axis represents a degree of rotation, b replace rate

The algorithm used to generate the speckle images was the Boolean model [14]. This algorithm simulates the image acquisition chain and eliminates the need for interpolation. The bit depth of the simulated speckle images was 8. The standard deviation of the Gaussian point spread function \(\sigma\) was 0.5. The quantization error probability \(\alpha\) was set to 0.1. The contrast of the image \(\gamma\) was set to 0.9. These parameters were adopted from the original paper and can generate realistic speckle patterns within a practical computation time. The original paper used a Poisson process to get the locations of the speckles. In this work, it was replaced by a newly improved artificial pattern [2]. The randomness parameter and the radius of the speckles were set to 0.3 and 2 px respectively.

In experiment 1, the BS strategy was compared with the CS strategy with frame steps from 10 to 100 in increments of 10. The pixel and subpixel level searching algorithms used by these strategies were the same. 784 POIs were evenly distributed in the square with a grid step of 10 px. The subset size was \(21\times 21\ \hbox {px}\). The maximum number of IC-GN iterations was 25.

All strategies except the CS strategy with frame step 100 were able to generate converged results \(\varvec{X}^{299}\) for all POIs at the last frame. The CS strategy with step 100 generated 775 failed POIs when calculating from frame 200 to 299. The root mean squared error (RMSE) was utilized to measure the difference between the determined displacement and the commanded displacement at the last stage. The results are summarized in Table 1. The BS strategy performed the best, with the lowest RMSE among all results. The determined frame steps were 150, 74, 38, and 37 respectively, as shown in Fig. 5. The BS strategy automatically reduced the frame step in the period of the higher rotation rate. The CS strategy with a frame step of 80 produced the minimum RMSE among the CS strategies. If the frame step is small, the accumulated error increases the RMSE; if the frame step is too large, the DIC algorithm cannot successfully track all POIs. This result confirms the previous analysis of the CS strategy. As shown in Table 1, the computation time of the CS strategies decreased with increasing frame step. The computation time of the BS strategy was 11.4 s, close to that of the CS strategy with a frame step of 30. Although the computation time of the BS strategy was higher than that of some CS strategies, it avoids the time-consuming process of manually selecting the frame step in the CS strategy.
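The RMSE metric used above can be stated precisely as a sketch; the pooling of both displacement components over all POIs is an assumption about the paper's exact definition.

```python
import numpy as np

def displacement_rmse(measured, commanded):
    """RMSE between measured and commanded POI displacements.

    Both inputs are (n, 2) arrays of per-POI displacement vectors; the
    error is pooled over both components of all POIs."""
    d = np.asarray(measured, float) - np.asarray(commanded, float)
    return np.sqrt(np.mean(d ** 2))
```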
Table 1

RMSE and computation time of the BS strategy and the CS strategy with different frame steps in simulated experiment 1

                     | BS     | CS 10  | CS 20  | CS 30  | CS 40  | CS 50  | CS 60  | CS 70  | CS 80  | CS 90  | CS 100
RMSE (px)            | 0.0045 | 0.0564 | 0.0096 | 0.0068 | 0.0061 | 0.0052 | 0.0053 | 0.0050 | 0.0046 | 0.0057 | Failed
Computation time (s) | 11.4   | 29.6   | 15.1   | 10.5   | 8.7    | 6.8    | 5.9    | 6.0    | 5.0    | 4.9    | –

In experiment 2, the BS strategy was compared with the CS strategy with frame steps from 10 to 40 in increments of 10. The locations and number of POIs were the same as in experiment 1. The DIC analysis was harder to converge, and the resulting RMSEs were higher than in experiment 1 due to the replacement of speckles. The CS strategy with frame step 40 failed to produce a result. The automatically determined locations of the reference frames are shown in Fig. 5b. The BS strategy again automatically decreased the frame step in the region with the larger speckle replacement rate, and the average frame step was 60, larger than those of the CS strategies. None of the strategies converged for all POIs. Among them, the BS strategy solved 782 out of 784 POIs, and the number of POIs solved by the CS strategy decreased as the frame step increased. At first glance, the RMSE of the BS strategy was larger than the RMSEs of the CS strategy with steps 20 and 30. In fact, this is not a fair comparison, because the BS strategy converged at POIs where the CS strategy could not, and the subsets of these POIs contained removed or added speckles that interfere with the tracking process. Therefore, a new intersection RMSE was computed over the intersection of all converged POIs from all strategies. The result shows that the BS strategy was the best on all converged POIs. The intersection RMSE of the CS strategy increases as the frame step decreases, due to the accumulated error.

The computation time of the BS strategy was less than that of the CS strategy with a frame step of 10 and larger than those of the remaining CS strategies. This is because the total number of BS computation iterations was 29, including all failed trials and partial computations. The higher computation time of the BS strategy was reasonable and necessary to automatically adjust the frame step and increase the number of solved POIs (Table 2).
Table 2

RMSE, number of converged POIs, and computation time of the BS strategy and the CS strategy with different frame steps in simulated experiment 2

                       | BS     | CS 10  | CS 20  | CS 30  | CS 40
RMSE (px)              | 0.0431 | 0.0576 | 0.0415 | 0.0408 | Failed
Num. of converged POIs | 782    | 737    | 693    | 665    | –
Intersection RMSE (px) | 0.0400 | 0.0559 | 0.0406 | 0.0406 | –
Computation time (s)   | 25.1   | 29.4   | 15.0   | 10.4   | –

Intersection RMSE means the RMSE was computed over the intersection of all converged POIs from all strategies

3.2 Real-world experiment

The effectiveness of the BS strategy was also validated on a tensile test. Because no true deformation information is available in a realistic experiment to test the accuracy, only the adaptability of the BS strategy was tested here. Similar to simulated experiment 1, the loading rate was increased dramatically during the test: from \(1.67\times 10^{-3}\) to \(1.67\times 10^{-2}\,\hbox {mm}/\hbox {s}\). The material of the specimen was highly ductile pure copper. The length, width, and thickness of the gauge section were 3 mm, 1 mm, and 1 mm respectively. The total testing time was 6 min. A continuous video with a frame rate of 30 frames per second was used to record the progress of the deformation. The natural scratches on the surface of the specimen were used as the pattern to track the deformation. The grid step of the POIs and the subset size were 15 px and \(21\times 21\ \hbox {px}\) respectively. The total number of POIs was 690.
Fig. 6

Tracked POIs on the tensile specimen at four representative stages

The BS strategy successfully tracked all POIs from the initial frame to the last frame before fracture. Four representative frames and the corresponding POIs are shown in Fig. 6. Additional output frames were calculated at a frame step of 30. The derived elongation is plotted in Fig. 7 along with the locations of the determined reference frames. The elongation can be divided into three regions. The first flat region corresponds to the stage before the tensile test starts. The second and third regions are the slow and fast loading periods respectively. It shows clearly that the BS strategy automatically reduced the frame step in the high loading rate period to ensure convergence and used a large frame step in the low loading rate region to reduce the accumulated error.

In the large loading rate region, the frame step was not constant, and it can be divided into three sub-regions. The first sub-region may be attributed to the response to the rapid loading rate change, and the last sub-region may relate to the necking process. To validate this hypothesis, the volume consistency factor (VCF) is plotted with respect to the frame index in Fig. 7. VCF is defined as
$$\begin{aligned} {\mathrm {VCF}}=(1+\epsilon )(w/w_0)^2 \end{aligned}$$
(5)
where \(\epsilon\) is the elongation, w is the width of the specimen, and \(w_0\) is the initial width. The width was calculated from the minimum distance between the upper and lower boundaries of the POIs. When the specimen is under uniform deformation, the volume consistency constraint should hold, and VCF should be close to 1. A larger deviation of VCF from 1 represents a higher degree of local deformation, i.e., the necking process. Figure 7 shows clearly that the third sub-region coincides with the rapid decrease of VCF, which indicates that the BS strategy automatically reduced the frame step for the large local deformation. The corresponding deformation state at frame 10136 is shown in Fig. 6c.
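Equation (5) is a one-liner; the function name is illustrative. For a square-section specimen deforming uniformly at constant volume, the width contracts as \(w/w_0 = 1/\sqrt{1+\epsilon}\), so the factor stays at 1 and only localized necking pulls it below 1.

```python
def volume_consistency_factor(strain, width, width0):
    """VCF of Eq. (5): (1 + eps) * (w / w0)^2.

    Under uniform constant-volume deformation of a square-section
    specimen this stays close to 1; a drop below 1 signals that the
    width is shrinking faster than uniform deformation predicts,
    i.e., necking."""
    return (1.0 + strain) * (width / width0) ** 2
```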
Fig. 7

The relationship between the elongation, the volume consistency factor, and the determined reference frames. The solid line, dotted line, and vertical dashed lines represent the elongation, the volume consistency factor, and the locations of the generated reference frames, respectively

4 Discussion

Previous results indicate that the BS strategy is capable of adjusting the frame step for a changing loading rate and speckle pattern. Although the BS strategy may not generate the optimal reference frames among all possible routes, it shares the simplicity and robustness of its counterpart in root finding. Its distinguishing features are the minimal number of hyper-parameters and the adaptability to changing situations. It should be noted that the BS strategy does not aim to replace any existing pixel-level or subpixel-level DIC searching algorithm. Rather, it will work with most DIC algorithms with minor modifications and improve the robustness and accuracy of DIC analysis. For instance, the frame-step-reduction strategy in the IRG algorithm can be replaced with the BS strategy to avoid the possible performance issue discussed in the introduction.
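The core recursion of the BS strategy can be summarized in a short sketch. This is not the authors' implementation: `correlate` stands in for an arbitrary DIC solver and is assumed to report whether all POIs converge from one frame to another; the function and variable names are hypothetical.

```python
def bs_track(correlate, ref, target):
    """Bisection-searching reference-frame update (minimal sketch).

    `correlate(ref, target)` is assumed to return True when all POIs
    converge from frame `ref` to frame `target`.  On failure the frame
    step is halved: the midpoint frame becomes an intermediate
    reference, and both halves are processed recursively.
    Returns the list of reference frames actually used."""
    if correlate(ref, target):
        return [ref]
    mid = (ref + target) // 2
    if mid == ref:  # adjacent frames that still fail to correlate
        raise RuntimeError(f"cannot correlate frames {ref} -> {target}")
    return bs_track(correlate, ref, mid) + bs_track(correlate, mid, target)

# Toy correlator: succeeds only when the frame step is at most 8.
refs = bs_track(lambda r, t: t - r <= 8, 0, 30)
print(refs)  # → [0, 7, 15, 22]
```

As in bisection root finding, the recursion inserts intermediate references only where the deformation between frames is too large, so regions of slow deformation keep a large frame step automatically.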

It should be noted that all the preceding analysis of the BS strategy was based on monotonic loading, which is the most common situation in experimental metrology of large deformation. The BS strategy can also be used for non-monotonic loading experiments with minor modification. For instance, in combination with the loading data, the loading process could be divided into several monotonic subsections, to each of which the BS strategy could then be applied. Another limitation of the BS strategy is that it inherently requires all the data to be available at the start of the analysis, i.e., the calculation must be performed offline. This should not be a significant limitation in practice, since most DIC analysis is already performed offline due to the computational burden.

5 Conclusion

In this research, the BS strategy is proposed to solve the problem of selecting proper reference frames under varying conditions. The key idea of this strategy is to halve the frame step whenever the correlation fails.

The effectiveness of the BS strategy was compared with the CS strategy on two simulated experiments. The BS strategy successfully tracked the POIs to the last frame, whereas the CS strategy failed in the cases with the largest frame step. The BS strategy also automatically reduced the frame step during the period of high rotation rate in the first experiment and of a large speckle replacement rate in the second experiment. The RMSE obtained with the BS strategy was also lower than the results of all CS strategies. The computational efficiency of the BS strategy was on the same level as most CS strategies and lower than that of the CS strategy with the largest frame step. The moderate computation time of the BS strategy is a worthwhile trade-off for its higher convergence rate and accuracy, and it avoids the time-consuming manual selection of the frame step required by the CS strategy.

The robustness of the BS strategy was also demonstrated in a real-world experiment. It successfully tracked all POIs, including those in the necking region with large plastic deformation above 70% engineering strain. The determined frame step was automatically reduced during the periods of higher loading rate and necking.

Acknowledgements

This study was funded by NASA EPSCoR (NNX13AM99A), National Science Foundation (CMMI-1625736), DOE STTR (DESC0018879), and the Intelligent Systems Center (ISC) and Material Research Center at Missouri S&T.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

References

  1. Blaber J, Adair B, Antoniou A (2015) Ncorr: open-source 2D digital image correlation MATLAB software. Exp Mech 55(6):1105–1122
  2. Chen Z, Shao X, Xu X, He X (2018) Optimized digital speckle patterns for digital image correlation by consideration of both accuracy and efficiency. Appl Opt 57(4):884–893
  3. Chi Y, Pan B (2018) Spatial–temporal subset-based digital image correlation: a general framework. CoRR arXiv:1812.04826
  4. Goh C, Ismail H, Yen K, Ratnam M (2017) Single-step scanner-based digital image correlation (SB-DIC) method for large deformation mapping in rubber. Opt Lasers Eng 88:167–177
  5. Matthews L, Ishikawa T, Baker S (2004) The template update problem. IEEE Trans Pattern Anal Mach Intell 26(6):810–815
  6. Pan B (2009) Reliability-guided digital image correlation for image deformation measurement. Appl Opt 48(8):1535–1542
  7. Pan B (2011) Recent progress in digital image correlation. Exp Mech 51(7):1223–1235
  8. Pan B (2014) An evaluation of convergence criteria for digital image correlation using inverse compositional Gauss–Newton algorithm. Strain 50(1):48–56
  9. Pan B (2018) Digital image correlation for surface deformation measurement: historical developments, recent advances and future goals. Meas Sci Technol 29(8):082001
  10. Pan B, Qian K, Xie H, Asundi A (2009) Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review. Meas Sci Technol 20(6):062001
  11. Pan B, Wu D, Xia Y (2012) Incremental calculation for large deformation measurement using reliability-guided digital image correlation. Opt Lasers Eng 50(4):586–592
  12. Pan B, Li K, Tong W (2013) Fast, robust and accurate digital image correlation calculation without redundant computations. Exp Mech 53(7):1277–1289
  13. Su Y, Zhang Q, Fang Z, Wang Y, Liu Y, Wu S (2019) Elimination of systematic error in digital image correlation caused by intensity interpolation by introducing position randomness to subset points. Opt Lasers Eng 114:60–75
  14. Sur F, Blaysat B, Grediac M (2017) Rendering deformed speckle images with a Boolean model. J Math Imaging Vis 60:1–17
  15. Tang Z, Liang J, Xiao Z, Guo C (2012) Large deformation measurement scheme for 3D digital image correlation method. Opt Lasers Eng 50(2):122–130
  16. Zhang Y, Xie X, Wang X, Li Y, Ling X (2018) Adaptive image mismatch removal with vector field interpolation based on improved regularization and Gaussian kernel function. IEEE Access 6:55599–55613. https://doi.org/10.1109/ACCESS.2018.2871743
  17. Zhang Y, Yan L, Liou F (2018) Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method. Opt Lasers Eng 104:149–158
  18. Zhou Y, Sun C, Chen J (2014) Adaptive subset offset for systematic error reduction in incremental digital image correlation. Opt Lasers Eng 55:5–11

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Mechanical Engineering, Missouri University of Science and Technology, Rolla, USA
