
Establishment of cellular automata image model and its application in image dimension measurement

  • Fei Peng
  • Shuqiang Wang
  • Shuo Liang
Open Access
Research
Part of the following topical collections:
  1. Visual Information Learning and Analytics on Cross-Media Big Data

Abstract

Aiming at improving the efficiency of image edge detection, an image edge detection method based on the least squares support vector machine (LSSVM) and cellular automata is proposed. First, a new kernel function is constructed from the Gaussian radial basis kernel and the polynomial kernel, which enables the LSSVM to fit the gray values of image pixels accurately. The gradient operator of the image is then derived, and the gradient map is obtained by convolving this operator with the image gray values. Finally, a cellular automaton evolves the gradient values according to designed local rules to locate and detect image edges. Simulation results show that the proposed edge detection algorithm is effective and achieves higher detection performance than the Sobel and Canny algorithms.

Keywords

Image analysis · Cellular automata · Image dimension measurement

Abbreviations

LSSVM

Least squares support vector machine

SDCC

Self-driving cooperating car

SVM

Support vector machine

1 Introduction

Cellular automata [1] are dynamic systems that are discrete in space, time, and state. Mappings are similar to cellular automata in some respects: a mapping is discrete in its time evolution, but the values of its state variables are continuous. If mappings correspond to ordinary differential equations among continuous dynamical systems, then cellular automata correspond to partial differential equations, which are continuous in time, space, and state. Cellular automata have therefore become a representative model in the field of discrete dynamical systems. The space of a cellular automaton is composed of a series of cells arranged on a grid. The grid space can be one-dimensional, two-dimensional, or higher-dimensional, and it can be finite or infinite.

To analyze airport surface traffic, Xing et al. [2] proposed a surface traffic simulation method based on agent theory and the cellular automaton model. They analyzed the traffic characteristics of the airport surface and combined agent theory with the cellular automaton model: different parts of the surface traffic are modeled as one-dimensional cellular automata, and an aircraft agent is designed, yielding an agent-cellular automaton simulation model of airport surface traffic. Simulation results show that the method is simple, efficient, and accurate, reflects the autonomy and individual differences of taxiing aircraft, and has high application value for analyzing and assessing airport surface traffic.

To handle the complex structure of subways and evacuations affected by pedestrians' subjective conditions, several studies use the ant colony algorithm at the macro level to find optimal evacuation paths for large passenger flows in complex building structures, and at the micro level use an intelligent cellular automaton decision model. The resulting underground pedestrian evacuation model fuses the ant colony algorithm with a cellular automaton, and the evacuation efficiency and individual pedestrian states were examined for a station of the Guangzhou Metro [3, 4, 5]. The simulation results provide a reference for preparing contingency plans, training staff, evacuating passengers, and conducting emergency drills.

Another major research area of cellular automata is autonomous driving [5, 6, 7, 8, 9, 10]. By establishing a single-lane mixed traffic flow model of autonomous and manually driven vehicles and refining the unit cell length, these studies fully account for the difference in reaction time between the two types of vehicle: the shorter reaction time of the autonomous driving system greatly improves road capacity.

Image size measurement comprises image acquisition [11, 12, 13], image preprocessing [14, 15, 16], image segmentation [17, 18, 19], feature extraction [20, 21, 22], and parameter measurement [23, 24, 25]. Image acquisition focuses on obtaining a high-quality image. Preprocessing suppresses noise and enhances contrast, improving the quality of the source image. Segmentation separates the object of interest from the background, and parameters such as dimensions are then derived by feature extraction and parameter measurement. Hardware such as the camera, together with the image processing algorithms, makes up the image size measurement system, and the measurement is carried out on this platform. Every component of the system is indispensable, but the measurement algorithm is the most important.

Over the last 20 years, image measurement technology has developed rapidly worldwide and has been widely applied to measuring the geometric parameters of parts [26, 27, 28, 29, 30], micro-size measurement and appearance inspection of precision parts, aerial remote sensing images, light wave interferograms, and the analysis of stress and strain field distribution maps. Traditional measurement methods can hardly match the high resolution, high sensitivity, wide spectral response, and large dynamic range of image-based measurement systems. Image measurement generally imposes no special requirements on the environment and is well suited to tasks that traditional methods find difficult, especially online measurement in automated production lines. As manufacturing requirements continue to rise, image measurement technology is bound to advance to a higher level.

Existing results on part measurement show that current measurement accuracy does not meet the requirements of technological development: the measurement process is easily disturbed by external noise, and shape recognition is difficult to realize. Cellular automata have been applied to image processing with rich results [31, 32, 33, 34]; in these studies, introducing cellular automata improved both the accuracy and the anti-interference ability of image processing. We therefore believe that introducing cellular automata into image size measurement can address this problem. Based on an edge detection method combining least squares and cellular automata, this paper establishes an efficient image edge detection model. First, a Gaussian radial basis-based LSSVM is established; it is then combined with a cellular automaton model that evolves the gradient values according to local rules to locate and detect image edges. Simulation experiments show that the detected image edges are accurate.

2 Proposed method

2.1 Image edge detection algorithm based on LSSVM-CA

2.1.1 Construction of least squares support vector machine and its kernel function

The basic idea of SVM is to select a subset of the training set (the support vectors) that can separate the classes and provide favorable conditions for generating the classifier, reducing computational complexity while maintaining classification accuracy. Compared with classical classification algorithms, SVM has clear advantages in many respects, such as resistance to overfitting and computational speed, but it also has limitations. Researchers have therefore proposed many variants of the support vector machine, mainly obtained by adding function terms, variables, or coefficients to the original formulation; this section describes one such variant.

2.1.2 Least squares support vector machine (LSSVM)

Let the samples be n-dimensional vectors, with l samples and their values in a region represented as (x1, y1), ⋯, (xl, yl) ∈ Rn × R. First, a nonlinear map φ(⋅) maps the samples from the original space Rn to the feature space, φ(x) = (φ(x1), φ(x2), ⋯, φ(xl)). The optimal decision function y(x) = w ⋅ φ(x) + b is constructed in this high-dimensional feature space, so that the nonlinear estimation function becomes a linear estimation function there. By the principle of structural risk minimization, the weight vector w and offset b are found by minimizing:
$$ R=\frac{1}{2}\cdot {\left\Vert w\right\Vert}^2+c\cdot {R}_{\mathrm{emp}} $$
(1)

Among them, ‖w‖2 controls the model complexity, c is the regularization parameter, and Remp is the error control function, that is, the insensitive loss function.

Commonly used loss functions are the linear ε loss function, the quadratic ε loss function, and the Huber loss function; selecting different loss functions yields different types of support vector machines. The loss function of the least squares support vector machine is the quadratic term of the error ζi. Therefore, the optimization problem is:
$$ \min J\left(w,\zeta \right)=\frac{1}{2}w\cdot w+c\sum \limits_{i=1}^l{\zeta}_i^2 $$
(2)

s.t.: yi = φ(xi) ⋅ w + b + ζi, i = 1, ⋯, l.

The Lagrangian method is used to solve this optimization problem.
$$ L\left(w,b,a,\zeta \right)=\frac{1}{2}w\cdot w+c\sum \limits_{i=1}^l{\zeta}_i^2-\sum \limits_{i=1}^l{a}_i\left(w\cdot \varphi \left({x}_i\right)+b+{\zeta}_i-{y}_i\right) $$
(3)

among which ai, i = 1, ⋯, l, are the Lagrange multipliers.

According to the optimization conditions,
$$ \frac{\partial L}{\partial w}=0,\frac{\partial L}{\partial b}=0,\frac{\partial L}{\partial \zeta }=0,\frac{\partial L}{\partial a}=0 $$
(4)
we obtain \( w=\sum \limits_{i=1}^l{a}_i\phi \left({x}_i\right) \), \( \sum \limits_{i=1}^l{a}_i=0 \), ai = cζi, and
$$ w\cdot \phi \left({x}_i\right)+b+{\zeta}_i-{y}_i=0. $$
(5)
Defining the kernel function K(xi, xj) = ϕ(xi) ⋅ ϕ(xj), where K(xi, xj) is a symmetric function satisfying the Mercer condition, and using (5), the optimization problem is transformed into solving the linear equations:
$$ \left[\begin{array}{cccc}0& 1& \cdots & 1\\ {}1& K\left({x}_1,{x}_1\right)+1/c& \cdots & K\left({x}_1,{x}_l\right)\\ {}\vdots & \vdots & \ddots & \vdots \\ {}1& K\left({x}_l,{x}_1\right)& \cdots & K\left({x}_l,{x}_l\right)+1/c\end{array}\right]\;\left[\begin{array}{c}b\\ {}{a}_1\\ {}\vdots \\ {}{a}_l\end{array}\right]=\left[\begin{array}{c}0\\ {}{y}_1\\ {}\vdots \\ {}{y}_l\end{array}\right] $$
(6)
Finally, the nonlinear model is obtained as follows:
$$ f(x)=\sum \limits_{i=1}^l{a}_iK\left(x,{x}_i\right)+b $$
(7)
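As an illustrative sketch (not the authors' code), the linear system of Eq. (6) and the resulting model of Eq. (7) can be implemented directly with NumPy. The kernel below combines a Gaussian RBF term and a polynomial term, as proposed in this paper; the parameter values `sigma`, `d`, and `c` are illustrative assumptions.

```python
import numpy as np

def rbf_poly_kernel(X1, X2, sigma=1.0, d=2):
    # Mixed kernel: Gaussian RBF term plus polynomial term,
    # mirroring the combined kernel used later in Eq. (11).
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / sigma ** 2) + (X1 @ X2.T + 1.0) ** d

def lssvm_fit(X, y, c=10.0, sigma=1.0, d=2):
    """Solve the bordered LSSVM linear system of Eq. (6) for b and a."""
    l = len(y)
    K = rbf_poly_kernel(X, X, sigma, d)
    M = np.zeros((l + 1, l + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(l) / c
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]          # b, a

def lssvm_predict(X_train, a, b, X_new, sigma=1.0, d=2):
    """Nonlinear model of Eq. (7): f(x) = sum_i a_i K(x, x_i) + b."""
    return rbf_poly_kernel(X_new, X_train, sigma, d) @ a + b
```

With a large regularization parameter c, the fitted model reproduces the training gray values almost exactly, which is the behavior the image fitting below relies on.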

The image fitting takes the M × N neighborhood of a pixel as the processing unit, typically with 3 ≤ M = N ≤ 9. The offsets of the pixels in the neighborhood are expressed as Δr and Δc in the horizontal and vertical directions, respectively, with |Δr| ≤ \( \left\lfloor \frac{M}{2}\right\rfloor \) and \( \left|\Delta c\right|\le \left\lfloor \frac{N}{2}\right\rfloor \).

The coordinates of all pixels in the neighborhood can be expressed as (r + Δr, c + Δc) and used as the input of the LSSVM. Alternatively, the coordinate of the center pixel can be subtracted, so that (r + Δr, c + Δc) is replaced by (Δr, Δc) as the input.

If M and N are known, the constant vector space used as input can be obtained: {(Δr, Δc) : |Δr| ≤ \( \left\lfloor \frac{M}{2}\right\rfloor \), |Δc| ≤ \( \left\lfloor \frac{N}{2}\right\rfloor \)}. The nonlinear relationship between the input vectors and the pixel gray values can then be constructed by the LSSVM. Equation (6) is regarded as a set of linear equations with a and b as the unknowns.
$$ \left\{\begin{array}{l}a={\left(K\left({x}_i,{x}_j\right)+{\gamma}^{-1}I\right)}^{-1}\left(y-\Theta b\right)\\ {}b=\frac{\Theta^T{\left(K\left({x}_i,{x}_j\right)+{\gamma}^{-1}I\right)}^{-1}y}{\Theta^T{\left(K\left({x}_i,{x}_j\right)+{\gamma}^{-1}I\right)}^{-1}\Theta}\end{array}\right. $$
(8)
The matrices A and B are defined as follows:
$$ \left\{\begin{array}{l}A={\left(K\left({x}_i,{x}_j\right)+{\gamma}^{-1}I\right)}^{-1}\\ {}B=\frac{\Theta^T{\left(K\left({x}_i,{x}_j\right)+{\gamma}^{-1}I\right)}^{-1}}{\Theta^T{\left(K\left({x}_i,{x}_j\right)+{\gamma}^{-1}I\right)}^{-1}\Theta}\end{array}\right. $$
(9)
Then, the expression can be expressed as follows:
$$ \left\{\begin{array}{l}a=A\left(y-\Theta b\right)\\ {}b= By\end{array}\right. $$
(10)
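A possible NumPy sketch of precomputing the constant matrices A and B of Eqs. (8)-(10) for a 3 × 3 neighborhood follows; the kernel parameters σ, d, and γ are illustrative assumptions.

```python
import numpy as np

# Constant vector space for a 3x3 neighborhood: Row = Col = {-1, 0, 1}.
coords = np.array([(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)], float)

def kernel(X1, X2, sigma=1.0, d=2):
    # Mixed Gaussian-RBF + polynomial kernel over offset coordinates.
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / sigma ** 2) + (X1 @ X2.T + 1.0) ** d

def precompute_AB(X, gamma=100.0, sigma=1.0, d=2):
    """Constant matrices of Eqs. (8)-(10); they depend only on the
    neighborhood geometry and kernel, never on the gray values y."""
    K = kernel(X, X, sigma, d)
    A = np.linalg.inv(K + np.eye(len(X)) / gamma)
    theta = np.ones(len(X))
    B = theta @ A / (theta @ A @ theta)   # row vector, so b = B y
    return A, B

A, B = precompute_AB(coords)
# For any flattened 3x3 gray patch y: b = B y, a = A (y - theta * b), Eq. (10).
```

Because A and B are fixed once the kernel is chosen, each image patch only costs two matrix-vector products at run time.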
It can be seen from the above derivation that the calculation of the matrices A and B is independent of the output y, that is, independent of the pixel gray values; it depends only on the input quantities, the type of kernel function, and γ. Since the number of samples in the constant vector space is known and fixed and γ is constant, once the kernel function is determined, the matrices A and B can be obtained in advance. They only need to be solved once and are globally applicable; they are essentially constant matrices. Next, the selected kernel function can be used for image fitting. Let Row × Col be the constant vector space of a symmetric pixel neighborhood of the image to be processed, e.g., Row = {−1, 0, 1}, Col = {−1, 0, 1}. With the constructed kernel function, the image gray fitting function in the constant vector space is:
$$ f\left(r,c\right)=\sum \limits_{i=1}^n{a}_i\left(\exp \left(-\left({\left|r-{r}_i\right|}^2+{\left|c-{c}_i\right|}^2\right)/{\sigma}^2\right)+{\left(r\cdot {r}_i+c\cdot {c}_i+1\right)}^d\right)+b $$
(11)
In the formula, f(r, c) is the gray level estimate at the point (r, c), and (ri, ci) are the pixel coordinates used as input. For the point (r, c), the first-order partial derivatives of the image gray fitting function in the horizontal and vertical directions are:
$$ \frac{\partial f}{\partial r}=\sum \limits_{i=1}^n\left(-\frac{2}{\sigma^2}\left(r-{r}_i\right)\exp \left(-\left({\left|r-{r}_i\right|}^2+{\left|c-{c}_i\right|}^2\right)/{\sigma}^2\right)+{dr}_i{\left(r\cdot {r}_i+c\cdot {c}_i+1\right)}^{d-1}\right)A\left(I-\Theta B\right)y $$
(12)
$$ \frac{\partial f}{\partial c}=\sum \limits_{i=1}^n\left(-\frac{2}{\sigma^2}\left(c-{c}_i\right)\exp \left(-\left({\left|r-{r}_i\right|}^2+{\left|c-{c}_i\right|}^2\right)/{\sigma}^2\right)+{dc}_i{\left(r\cdot {r}_i+c\cdot {c}_i+1\right)}^{d-1}\right)A\left(I-\Theta B\right)y $$
(13)
The matrices Wr and Wc are introduced as follows:
$$ {W}_r=\sum \limits_{i=1}^n\left(-\frac{2}{\sigma^2}\left(r-{r}_i\right)\exp \left(-\left({\left|r-{r}_i\right|}^2+{\left|c-{c}_i\right|}^2\right)/{\sigma}^2\right)+{dr}_i{\left(r\cdot {r}_i+c\cdot {c}_i+1\right)}^{d-1}\right)A\left(I-\Theta B\right) $$
(14)
$$ {W}_c=\sum \limits_{i=1}^n\left(-\frac{2}{\sigma^2}\left(c-{c}_i\right)\exp \left(-\left({\left|r-{r}_i\right|}^2+{\left|c-{c}_i\right|}^2\right)/{\sigma}^2\right)+{dc}_i{\left(r\cdot {r}_i+c\cdot {c}_i+1\right)}^{d-1}\right)A\left(I-\Theta B\right) $$
(15)
It can be seen from the above derivation that, like the matrices A and B, Wr and Wc are independent of the pixel gray values y and depend only on the input quantities and the type of kernel function, so they can be obtained in advance as constant matrices. They are reshaped into square arrays of the same size as the pixel's Row × Col neighborhood, and Wr and Wc become the gradient operators of the image. Convolving the gradient operators Wr and Wc with the image matrix I(r, c) yields the gradient value matrices GH(r, c) and GV(r, c) in the horizontal and vertical directions:
$$ {G}_H={W}_r\ast I $$
$$ {G}_V={W}_c\ast I $$
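The convolution step can be sketched as follows. Since the numerical values of the derived operators Wr and Wc depend on the chosen kernel parameters, the usage below substitutes Sobel masks as illustrative stand-ins; the convolution routine itself is generic.

```python
import numpy as np

def conv2_same(image, kern):
    # Minimal 'same'-size 2-D convolution (kernel flipped), zero-padded.
    k = np.flipud(np.fliplr(kern))
    H, W = image.shape
    pad = np.pad(image, 1)
    out = np.zeros_like(image, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = (pad[i:i + 3, j:j + 3] * k).sum()
    return out

def gradient_maps(image, Wr, Wc):
    """G_H = Wr * I and G_V = Wc * I (convolution), as in the text."""
    return conv2_same(image, Wr), conv2_same(image, Wc)
```

For a step-edge image, the response of the corresponding operator peaks at the edge columns, which is what the CA stage below exploits.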

2.1.3 An algorithm of image edge detection based on CA

Recall that ∇f(x, y) = \( \frac{\partial f}{\partial x}i \) + \( \frac{\partial f}{\partial y}j \) is the gradient of the image and contains its grayscale change information. Its magnitude e(x, y) = |∇f(x, y)| can be used as an edge detection operator. To simplify the calculation, e(x, y) can also be defined as the sum of the absolute values of the partial derivatives fx and fy:
$$ e\left(x,y\right)=\mid {f}_x\left(x,y\right)\mid +\mid {f}_y\left(x,y\right)\mid $$
(16)

Many algorithms have been proposed on this basis; commonly used edge detection methods include the Roberts, Sobel, Prewitt, Canny, and Laplace operators. However, traditional edge detection methods have drawbacks in time and complexity, so this paper uses a CA to detect the image edge.

According to the characteristics of an edge, the gradient value of an edge pixel is distinctive (it should be the largest in its corresponding neighborhood), so it is important information for edge detection. When a 2D CA is used to detect edges in a gray image, the gradient value matrix obtained from the formulas above is the processing object: the gradient values are mapped into the cellular space as initial state values, so the finite state set is S = {0, …, 255}. The Von Neumann neighborhood is chosen, so the size of the cell space vector N is 4. The evolution rule R is the key to the algorithm; the rules are applied repeatedly, and when evolution stops, the final edge detection result is obtained. First, according to the gradient value, the cells are divided into four classes, expressed as GCi, i ∈ {1, 2, 3, 4}.

The rules of evolution are as follows:
  1. Classify all the cells in the central cell's neighborhood into their corresponding classes.
  2. Count the number of cells belonging to class GCi in the neighborhood, expressed as Num(GCi).
  3. Call the maximum value algorithm to find the class containing the maximum number of cells in the neighborhood and assign it to GCmajority.
  4. Define st(GCmajority) to record the states at time t of the neighborhood cells belonging to the GCmajority class.
  5. Compute st(GCmajority) and Sum(st(GCmajority)).
  6. Evaluate the Boolean expression Bool = (max(Num(GCi)) = Num(GCmajority)) and (Sum(st(GCmajority)) > 254).
  7. According to the result of Bool, assign the gray value of the center cell (Ccx, Ccy) at time t + 1:
$$ {\displaystyle \begin{array}{l}{N}^{t+1}\left({C}_{\mathrm{cx}},{C}_{\mathrm{cy}}\right)=R\left({N}^t\left({C}_{\mathrm{cx}},{C}_{\mathrm{cy}}\right)\right)\\ {}=R\left({S}_{{C}_{\mathrm{cx}},{C}_{\mathrm{cy}}}^t,{S}_{{C}_{\mathrm{cx}},{C}_{\mathrm{cy}}+1}^t,{S}_{{C}_{\mathrm{cx}}+1,{C}_{\mathrm{cy}}}^t,{S}_{{C}_{\mathrm{cx}},{C}_{\mathrm{cy}}-1}^t,{S}_{{C}_{\mathrm{cx}}-1,{C}_{\mathrm{cy}}}^t\right)\\ {}={S}_{{C}_{\mathrm{cx}},{C}_{\mathrm{cy}}}^t\end{array}} $$
(17)
  8. Evolution ends when Nt + 1(Ccx, Ccy) = Nt(Ccx, Ccy), that is, when a stable state is reached, and the edge pixel set of the image is obtained.
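One synchronous step of the rules above can be sketched as follows. The four gradient classes GC1-GC4 are taken here as equal-width bins over [0, 255] (the paper does not fix the split), and the Bool test of steps 6)-7) is reduced to the summed-gradient threshold, so this is a simplified stand-in rather than the exact published rule.

```python
import numpy as np

def classify(g, thresholds=(64, 128, 192)):
    # Map a gradient value to one of four classes GC1..GC4
    # (equal-width bins are an assumption; the paper leaves the split open).
    return np.digitize(g, thresholds)

def evolve_step(state):
    """One synchronous CA step over the Von Neumann neighborhood,
    following the majority-class rule of steps 1)-7)."""
    H, W = state.shape
    new = state.copy()
    for x in range(1, H - 1):
        for y in range(1, W - 1):
            nbhd = [state[x, y], state[x, y + 1], state[x + 1, y],
                    state[x, y - 1], state[x - 1, y]]
            cls = [classify(v) for v in nbhd]
            counts = np.bincount(cls, minlength=4)
            major = counts.argmax()                        # GC_majority
            major_sum = sum(v for v, c in zip(nbhd, cls) if c == major)
            # Edge decision: the summed gradient of the majority class
            # must exceed the threshold of step 6).
            new[x, y] = 255 if major_sum > 254 else 0
    return new
```

Iterating `evolve_step` until the state no longer changes implements the stopping condition of step 8).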

2.2 Application of LSSVM-CA model in image dimension measurement

In recent years, image measurement technology has developed rapidly worldwide and has been widely used in measuring geometric parameters of parts, micro-size measurement and appearance inspection of precision parts, aerial remote sensing images, light wave interferograms, stress and strain field distribution map analysis, and many other areas. Traditional measurement methods can hardly match the high resolution, high sensitivity, wide spectral response, and large dynamic range of image-based measurement systems. Different image processing and analysis methods, as well as different detection methods and calculation formulas, introduce different errors. In this paper, the LSSVM-CA model is applied to measure the image size of precision parts to verify whether their dimensions meet the standard. On the basis of LSSVM-CA edge detection, Gaussian curve fitting interpolation along the gradient direction can be used for sub-pixel localization, and the accuracy improves significantly.

Surface fitting is a method with high accuracy, low computation, and strong noise resistance, which has been widely used in scenarios such as rock deformation measurement, electronic deformation measurement, and repeat-location techniques based on palmprint image correlation matching. The premise for successfully calculating sub-pixel displacement by surface fitting is that the whole-pixel matching point of the template is correctly found in the whole-pixel search stage; once the whole-pixel matching point is wrong, the displacement obtained in the sub-pixel measurement phase is meaningless. Different fitting functions applied to the correlation coefficient matrix influence the results. Commonly used fitting functions include quadratic, cubic, and Gaussian functions; this paper uses the Gaussian function for surface fitting.

The bivariate Gaussian function can be represented as follows:
$$ f\left(x,y\right)=A\bullet \exp \left[-\frac{{\left(x-{c}_0\right)}^2}{2{\sigma_0}^2}-\frac{{\left(y-{c}_1\right)}^2}{2{\sigma_1}^2}\right] $$
(18)
where A is the amplitude, and c0, c1 and σ0, σ1 are the means and standard deviations along the x-axis and y-axis, respectively. To solve for the coefficients of f(x, y), take logarithms on both sides of the equation above:
$$ \ln f=\ln A-\frac{{\left(x-{c}_0\right)}^2}{2{\sigma_0}^2}-\frac{{\left(y-{c}_1\right)}^2}{2{\sigma_1}^2} $$
(19)
$$ \ln f=\ln A-\frac{{c_0}^2}{2{\sigma_0}^2}-\frac{{c_1}^2}{2{\sigma_1}^2}+\frac{c_0}{{\sigma_0}^2}x+\frac{c_1}{{\sigma_1}^2}y-\frac{1}{2{\sigma_0}^2}{x}^2-\frac{1}{2{\sigma_1}^2}{y}^2 $$
(20)
Substituting
$$ {\lambda}_0=\ln A-\frac{{c_0}^2}{2{\sigma_0}^2}-\frac{{c_1}^2}{2{\sigma_1}^2},\kern0.5em {\lambda}_1=\frac{c_0}{{\sigma_0}^2},\kern0.5em {\lambda}_2=\frac{c_1}{{\sigma_1}^2},\kern0.5em {\lambda}_3=-\frac{1}{2{\sigma_0}^2},\kern0.5em {\lambda}_4=-\frac{1}{2{\sigma_1}^2} $$
(21)
The formula can be transformed into:
$$ \ln f={\lambda}_0+{\lambda}_1x+{\lambda}_2y+{\lambda}_3{x}^2+{\lambda}_4{y}^2 $$
(22)

Using the positions in the fitting window and the corresponding correlation coefficients, n × n equations can be obtained. Using the least squares method, the coefficients λ0, λ1, λ2, λ3, λ4 of the equation can be calculated, and the coefficients of the Gaussian function are then recovered.
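A minimal sketch of this sub-pixel peak localization follows: Eq. (22) is fit by least squares over an n × n correlation window, and the peak location is recovered from the λ coefficients as (c0, c1) = (−λ1/(2λ3), −λ2/(2λ4)); window coordinates and names are illustrative.

```python
import numpy as np

def gaussian_peak_subpixel(C):
    """Fit ln C = l0 + l1 x + l2 y + l3 x^2 + l4 y^2 (Eq. (22)) by least
    squares over an n x n correlation window and return the sub-pixel
    peak (c0, c1)."""
    n = C.shape[0]
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    # Design matrix for the five lambda coefficients of Eq. (22).
    A = np.column_stack([np.ones(n * n), xs.ravel(), ys.ravel(),
                         xs.ravel() ** 2, ys.ravel() ** 2])
    lam, *_ = np.linalg.lstsq(A, np.log(C.ravel()), rcond=None)
    c0 = -lam[1] / (2 * lam[3])   # from lambda_1 = c0/sigma0^2, lambda_3 = -1/(2 sigma0^2)
    c1 = -lam[2] / (2 * lam[4])
    return c0, c1
```

For a correlation surface that is exactly Gaussian, the fit recovers the continuous peak location regardless of where it falls between pixels.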

3 Experimental results

3.1 Image acquisition and preprocessing

The camera used in this experimental system is a Nikon D7500, shown in Fig. 1. Its main parameters are a resolution of 5568 × 3712 and an effective pixel count of 20.88 million.
Fig. 1

Precision rivets of different sizes for experiments and photo acquisition equipment

The method is verified with precision rivets of different sizes, as shown in Fig. 1.

The purpose of image smoothing is to suppress noise and improve image quality; it can be carried out in the spatial or frequency domain. Commonly used methods include neighborhood averaging, spatial filtering, and median filtering. Neighborhood averaging is a local spatial processing method that replaces each pixel's gray value with the average gray value of its neighborhood, smoothing the image. Because image noise is a high-frequency component, spatial filtering removes it with a low-pass filter to achieve smoothing.

Median filtering is a nonlinear processing technique that can suppress noise in an image. It relies on the characteristics of images: noise often appears as isolated points covering very few pixels, while the image itself consists of regions with more pixels and larger area [12].

Whether in a grayscale image acquired directly or one converted from a color image, noise is present and strongly affects image quality. The median filter can not only remove isolated noise but also preserve the edge characteristics of the image without producing significant blur, which makes it well suited to the images in this experiment. The steps of median filtering are as follows:
  1. Roam the template over the image so that its center coincides with a pixel position.
  2. Read the gray values of the pixels covered by the template.
  3. Sort these gray values from small to large.
  4. Find the middle value.
  5. Assign this median to the pixel at the center of the template.

It can be seen from the above steps that the main function of the median filter is to move a pixel's value close to those of its surrounding pixels when their gray values differ greatly, so its ability to eliminate isolated noise pixels is very strong. Because it does not simply take the mean, it produces less blurring; in other words, median filtering can eliminate noise while preserving the details of the image [13].
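The steps above can be sketched directly in NumPy (window size and padding mode are illustrative choices):

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filtering following steps 1-5: slide the template,
    sort the covered gray values, and take the middle one."""
    r = k // 2
    pad = np.pad(img, r, mode='edge')   # replicate borders
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.median(pad[i:i + k, j:j + k])
    return out
```

A single bright pixel in a flat region is removed entirely, while a clean step edge passes through unchanged, which is exactly the noise-versus-edge behavior described above.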

To enable a better comparison test, the original acquired image is converted to grayscale and noise is added, as shown in Fig. 2: the left is the original part image, the middle the grayscale image, and the right the noisy image.
Fig. 2

Original image, grayscale image, and noisy image

4 Discussion

4.1 Image segmentation

As shown in Fig. 2, many of the acquired images contain multiple parts, so the image must be segmented before further processing; Fig. 3 shows the segmentation results for the grayscale and noise-added part images.
Fig. 3

Segmentation results of the parts

As can be seen from Fig. 3, since the contrast between the part and the background is large, the parts are well segmented whether or not noise is added, and the part dimensions are therefore visible. For this measurement task, the choice of image segmentation technique is not a major factor affecting accuracy.

4.2 Edge detection algorithm

The essence of edge detection is to extract the boundary between object and background. An edge is defined as a location in the image where the gray level changes dramatically. This change is reflected in the gradient of the gray-level distribution, so edge detection operators can be obtained using local image differentiation. The classical approach constructs an edge detection operator over a small neighborhood of each pixel in the original image. Below, several classical edge detection operators are analyzed theoretically, and their performance characteristics are compared and evaluated.

The basic steps of edge detection are as follows. First, smooth the original image; then sharpen the filtered image; then determine the edges of the sharpened image to obtain a binary image; and finally obtain the image edge by edge linking. Gradient calculation is easily affected by noise, so the image is smoothed first to reduce the influence of noise on the subsequent steps. An appropriate filter must be selected, because the stronger the filter's noise reduction, the greater its impact on boundary strength. Sharpening strengthens pixels at meaningful local changes so as to capture the gray-level variation around a point and make the detected boundary more complete; as with smoothing, sharpening must be applied in moderation, so that meaningful points are kept while meaningless points and noise points, which could interfere with the final result, are not strengthened. Edge determination is an important step that decides the final result: non-zero gradient points are selected or removed according to the application, which requires case-by-case judgment, since not all points in the image are meaningful and the selection criteria vary from case to case. Finally, edge linking connects the discontinuous edges produced by the decision step into continuous edges, while pseudo-edges are identified and removed if necessary; this step affects the completeness of the edge and the presence of breakpoints.
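The smooth-sharpen-decide sequence can be sketched end to end as follows. This is a simplified stand-in, not the paper's LSSVM-CA pipeline: the mean filter, central differences, and threshold value are illustrative, and the edge-linking step is omitted.

```python
import numpy as np

def detect_edges(img, thresh=50.0):
    """Minimal smooth -> gradient -> threshold pipeline."""
    # 1) Smooth with a 3x3 mean filter (replicated borders).
    pad = np.pad(img, 1, mode='edge')
    smooth = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    # 2) Differentiate with central differences in each direction.
    gx = np.zeros_like(smooth); gy = np.zeros_like(smooth)
    gx[1:-1, :] = smooth[2:, :] - smooth[:-2, :]
    gy[:, 1:-1] = smooth[:, 2:] - smooth[:, :-2]
    # 3) Edge decision: threshold |f_x| + |f_y|, as in Eq. (16).
    return (np.abs(gx) + np.abs(gy)) > thresh
```

On a step-edge image, only the columns near the intensity jump exceed the threshold, illustrating the decision step the paragraph describes.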
In order to compare the edge detection effect of images, two common edge detection methods are selected in this paper. Figure 4 shows the edge extraction results.
Fig. 4

Edge extraction results

The first column in Fig. 4 is the proposed method, and the second and third columns are the comparison methods. It can be seen from Fig. 4 that the proposed method is superior to the other two in extracting the details of image contours. To verify the noise immunity of the proposed method, Fig. 5 shows the edge extraction results for the parts after noise is added.
Fig. 5

Edge extraction after adding noise

As shown in Fig. 5, after noise is added, edge detection performance drops noticeably compared with the noise-free image; however, comparing the results, those extracted by the proposed method are clearly the most similar to the noise-free results, while the edges in the second and third columns are greatly disturbed by noise.

4.3 Parameter measurement and dimension identification

After preprocessing the images obtained by the image acquisition equipment, a high-quality rivet image is obtained. In order to complete the measurement of the rivets, rivets must be recognized from the obtained images. Then, the exact coordinate set of rivet contour is obtained by region contour tracking, which lays a good foundation for rivet measurement in the next step. In this chapter, the acquisition process of rivet contour coordinate set is described in detail, which is divided into three parts: rivet image segmentation, rivet image analysis, and rivet image contour tracking.

The standard 2D rivet image is an axisymmetric figure. When a rivet is placed horizontally, the length of its bounding rectangle is the full length of the rivet; however, the environment of an industrial automation production site is very complex, and the rivet may be slanted. Therefore, this paper locates the spindle by finding the minimum enclosing rectangle of the rivet. This method directly yields the full length of the rivet and eliminates the influence of rivet orientation on the detection process.

Method for locating the rivet spindle:
  1. Obtain the axis-aligned enclosing rectangle of the rivet contour, that is, approximate the contour with a horizontally placed rectangle around the rivet.
  2. Determine the center of rotation of the image. With the center of the initial enclosing rectangle of the rivet contour as the center of rotation, rotate the rivet contour about this point n times, by 90/n degrees each time.
  3. From the enclosing rectangle areas obtained at each rotation, take the rotation angle corresponding to the minimum area and calculate the coordinates of the upper-left and lower-right corners of that rectangle. Rotating these two corner coordinates back determines the minimum enclosing rectangle of the rivet contour.

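The rotation search described in the steps above can be sketched as follows. This is a minimal illustration under the stated assumptions (rotation about the contour centroid, a 90/n-degree angular grid); the function name and the synthetic test contour are not from the paper:

```python
import numpy as np

def min_area_rect(points, n=90):
    """Approximate the minimum-area bounding rectangle of a 2-D contour.

    Rotates the contour about its centroid in n steps of 90/n degrees
    (an axis-aligned bounding box repeats every 90 degrees) and keeps
    the angle whose bounding box has the smallest area.
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    best_area, best_angle, best_box = np.inf, 0.0, None
    for k in range(n):
        theta = np.deg2rad(90.0 * k / n)
        c, s = np.cos(theta), np.sin(theta)
        # de-rotate the contour by theta, then take the axis-aligned box
        rot = (pts - center) @ np.array([[c, -s], [s, c]])
        lo, hi = rot.min(axis=0), rot.max(axis=0)
        area = (hi[0] - lo[0]) * (hi[1] - lo[1])
        if area < best_area:
            best_area, best_angle, best_box = area, 90.0 * k / n, (lo, hi)
    return best_area, best_angle, best_box

# A 4x2 rectangle tilted by 30 degrees: the search recovers area 8
# and the 30-degree tilt (n=90 gives a 1-degree angular grid).
corners = np.array([[2, 1], [-2, 1], [-2, -1], [2, -1]], dtype=float)
a = np.deg2rad(30)
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
tilted = corners @ R.T + np.array([5.0, 7.0])
area, angle, box = min_area_rect(tilted)
```

The width of the best box along its long side then gives the full length of the rivet, independent of how the part happens to lie.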
Image measurement technology generally imposes no special requirements on the environment, and it is well suited to tasks that are difficult for traditional measurement methods, especially online measurement in automated production lines. As manufacturing requirements continue to rise, image measurement technology is bound to advance to a higher level. Given its development and the current state of the measurement field, the following trends can be expected: measurement accuracy advancing from the micron scale to the nanometer scale; the measurement range extending from length and area measurement to shape recognition; measurement methods moving from off-line measurement to real-time online measurement; and measurement systems evolving from a single measurement function toward intelligent, automated systems that integrate measurement and control. In short, image measurement technology must achieve high accuracy, high speed, and high efficiency, and intelligent measurement systems with fast measuring speed, high precision, and high efficiency will be an important direction of its future development.

1) The cellular automata model is applied to image enhancement, and classical image processing rules are transformed into state evolution rules of the cellular automata. Using these rules, the image is smoothed and sharpened, and simulated images are obtained. The analysis of the simulation results shows that image sharpening highlights the edges of the image and increases its brightness, while image smoothing highlights the main structure of the original image, reduces noise, and softens the brightness of the image. In this paper, the results of the cellular automata (CA) model are compared with those of the traditional Canny and Sobel algorithms. The results show that the edge detection performance of the cellular automata model is superior to that of the other models.
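The smoothing and sharpening rules themselves are not listed in this excerpt. As a minimal illustration of the idea, the sketch below uses a simple synchronous CA update in which each cell's next state depends on its 3×3 Moore neighborhood: averaging for smoothing, and amplifying the deviation from the neighborhood mean for sharpening. Both rules are assumptions for illustration, not the paper's exact rules:

```python
import numpy as np

def ca_smooth_step(grid):
    """One synchronous CA update: each cell becomes the mean of its
    3x3 Moore neighborhood (borders handled by edge replication)."""
    padded = np.pad(np.asarray(grid, dtype=float), 1, mode="edge")
    out = np.zeros(grid.shape, dtype=float)
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

def ca_sharpen_step(grid, k=1.0):
    """CA sharpening rule: amplify each cell's deviation from its
    neighborhood mean, which emphasises edges."""
    return grid + k * (grid - ca_smooth_step(grid))

# A vertical step edge: smoothing reduces contrast, sharpening boosts it
img = np.zeros((8, 8))
img[:, 4:] = 100.0
smoothed = ca_smooth_step(img)
sharpened = ca_sharpen_step(img)
```

Repeating the smoothing step drives the grid toward a uniform state, while the sharpening step overshoots on both sides of an edge, which is what makes edges stand out.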

SNR/dB    LSSVM-CA    Canny    Sobel
30        0.993       0.912    0.900
20        0.941       0.988    0.934
10        0.997       0.989    0.999

1) The measurement results differ for rivets of different sizes. The experimental results show that the larger the rivet, the more accurate the measurement: the error is about 0.03–0.97 mm, and the relative error for small rivets is 0.107, which degrades the overall measurement accuracy more than for large rivets. This may be affected by the focus or placement of the image acquisition device.

4) The placement mode is an important factor affecting the measurement results.

Measurement results of rivet parameters in different placement modes:

Measuring parameter    Normal placement    Inclined placement    Absolute error
Overall length         660.0157            660.1074              0.0917
Riveting length        400.0116            399.1569              0.8547
Long diameter          170.0395            171.0325              0.993
Short diameter         120.0659            120.1796              0.1137

The results show that, regardless of rivet size, the average accuracy for normal (front) placement is higher than for inclined placement. Therefore, to improve the measurement results, the parts can be handled in advance, for example by cleaning the product and clamping the workpiece, so as to avoid errors caused by dirty parts and incorrect placement.
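The absolute-error column in the table above is simply the difference between the two placement modes, and can be reproduced directly from the tabulated values:

```python
# Reproduce the absolute-error column of the placement table:
# |normal - inclined| for each measured parameter.
measurements = {
    "Overall length":  (660.0157, 660.1074),
    "Riveting length": (400.0116, 399.1569),
    "Long diameter":   (170.0395, 171.0325),
    "Short diameter":  (120.0659, 120.1796),
}
for name, (normal, inclined) in measurements.items():
    print(f"{name}: {abs(normal - inclined):.4f}")
# Overall length: 0.0917
# Riveting length: 0.8547
# Long diameter: 0.9930
# Short diameter: 0.9930 is not printed twice; the last line is
# Short diameter: 0.1137
```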

5 Conclusions

Part size measurement is a very common and important task in industrial production. Using a camera for dimension measurement has many advantages, such as non-contact and nondestructive operation, high accuracy, high speed, and easy automatic processing and control. In this paper, an image edge detection method based on the least squares support vector machine (LSSVM) and cellular automata is proposed, which is more efficient and more effective than traditional algorithms (Fig. 6). Limited by the experimental conditions, the measurement algorithm still needs improvement:
  1. In applying the measurement algorithm, a better balance between computation speed and computation precision still needs to be found. From a practical standpoint, the parts of the algorithm with high time complexity could be implemented in hardware.

  2. Implementing the system on an image processor would increase the processing speed and better meet the requirements of real-time online measurement.

  3. The measurement system can currently measure only the geometric dimensions of two-dimensional images. Collecting image information with two or more cameras could extend the system to the measurement of 3D images and broaden the application field of image dimension measurement.
Fig. 6

Quantitative analysis of edge detection performance of different SNR images (E value)

To sum up, this paper has analyzed and studied the measurement of the shape and size of precision industrial parts, which has certain theoretical and practical significance. Although much work has been done, due to time constraints the system needs further improvement. Camera image correction, the measurement of parts of different shapes in geometric dimension measurement, and the realization of an online detection function will be the focus of future research.

Notes

Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

Not applicable.

Availability of data and materials

Please contact author for data requests.

Authors’ contributions

All authors take part in the discussion of the work described in this paper. The author FP wrote the first version of the paper. The authors SW and SL did the part of experiments of the paper and revised it in different versions, respectively. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. S. Wolfram, Statistical mechanics of cellular automata. Rev. Mod. Phys. 55(3), 601–644 (1983)
  2. X. Zhiwei, L. Shizhen, T. Yunqi, et al., Airport scene traffic simulation based on agent-cell automata. J. Syst. Simul. 30(3), 857–865 (2018)
  3. D. Qidong, C. Moxiang, A. Xu, Pedestrian evacuation model of subway based on ant colony cellular automata. J. Comput. (2), 18–21 (2018)
  4. G. Yan, Research on Several Problems of Urban Rail Transit Transfer and Emergency Based on ACP Method (Beijing Jiaotong University, Beijing, 2013)
  5. L. Kun, Research on Rail Transit Path Selection Model Based on Ant Colony Algorithm and its Application (Beijing Jiaotong University, Beijing, 2011)
  6. B. Straatman, R. White, G. Engelen, Towards an automatic calibration procedure for constrained cellular automata. Comput. Environ. Urban Syst. 28(1), 149–170 (2004)
  7. J.D. Lohn, J.A. Reggia, Automatic discovery of self-replicating structures in cellular automata. IEEE Trans. Evol. Comput. 1(3), 165–178 (2002)
  8. A. Wuensche, Classifying cellular automata automatically: finding gliders, filtering, and relating space-time patterns, attractor basins, and the Z parameter. Complexity 4(3), 47–66 (2015)
  9. R. Ravichandran, N. Ladiwala, J. Nguyen, M. Niemier, S.K. Lim, in Proceedings of the 14th ACM Great Lakes Symposium on VLSI. Automatic cell placement for quantum-dot cellular automata (ACM, 2004), pp. 332–337
  10. A. Adamatzky, Automatic programming of cellular automata: identification approach. Kybernetes 26(2), 126–135 (1997)
  11. S.E. Reichenbach, Characterizing digital image acquisition devices. Opt. Eng. 30(2), 170 (1991)
  12. P. Vuylsteke, A. Oosterlinck, Range image acquisition with a single binary-encoded light pattern. IEEE Trans. Pattern Anal. Mach. Intell. 12(2), 148–164 (1990)
  13. R. Boellaard, Standards for PET image acquisition and quantitative data analysis. J. Nucl. Med. 50(Suppl 1), 11S–20S (2009)
  14. K.S. Fu, A. Rosenfeld, Pattern recognition and image processing. IEEE Trans. Comput. C-25(12), 1336–1346 (1976)
  15. R. Gross, V. Brajovic, in International Conference on Audio- and Video-Based Biometric Person Authentication. An image preprocessing algorithm for illumination invariant face recognition (2003)
  16. R. Gross, An image preprocessing algorithm for illumination invariant face recognition. AVBPA, 10–18 (2001)
  17. L. Grady, Random walks for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 28(11), 1768–1783 (2006)
  18. J. Shi, J. Malik, Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 22(8), 888–905 (2000)
  19. P.F. Felzenszwalb, D.P. Huttenlocher, Efficient graph-based image segmentation. Int. J. Comput. Vis. 59(2), 167–181 (2004)
  20. M.S. Nixon, A.S. Aguado, Feature extraction and image processing. Feature Extraction Image Process. Comput. Vis., 37–82 (2002)
  21. A.L. Yuille, P.W. Hallinan, D.S. Cohen, Feature extraction from faces using deformable templates. Int. J. Comput. Vis. 8(2), 99–111 (1992)
  22. O.D. Trier, A.K. Jain, T. Taxt, Feature extraction methods for character recognition - a survey. Pattern Recognit. 29(4), 641–662 (1996)
  23. T. Senjyu, K. Kinjyou, K. Uezato, Parameter measurement of PMSM using adaptive identification. Bull. Fac. Eng. Univ. Ryukyus (62), 33–39 (2001)
  24. Y.N. Lin, C.L. Chen, Automatic IM parameter measurement under sensorless field-oriented control. IEEE Trans. Ind. Electron. 46(1), 111–118 (1999)
  25. Y.N. Lin, C.L. Chen, Automatic IM parameter measurement under sensorless field-oriented control. IEEE Trans. Ind. Electron. 46(1), 111–118 (1999)
  26. M. Zapp, H. Janocha, in Videometrics IV, vol. 2598. Geometry measurement as integrated part of the manufacturing process using a moved CCD camera (International Society for Optics and Photonics, 1995), pp. 350–362
  27. C.D. Montemagno, L.J. Pyrak-Nolte, Fracture network versus single fractures: measurement of fracture geometry with X-ray tomography. Phys. Chem. Earth Part A Solid Earth Geodesy 24(7), 575–579 (1999)
  28. X. Wang, X.D. Zhang, Laser and vision measurement research on parameters of miniature quartz plate-sensitive glass part. Spectrosc. Spectral Anal. 34(6), 1450–1455 (2014)
  29. Y. Tsai, J. Wu, Y. Wu, et al., in Image Analysis and Processing - ICIAP 2005. Automatic roadway geometry measurement algorithm using video images (Springer, Berlin Heidelberg, 2005)
  30. M.R. Arthington, C. Cleaver, J. Allwood, S. Duncan, in 2014 UKACC International Conference on Control. Real-time measurement of ring-rolling geometry using low-cost hardware (2014), pp. 603–608
  31. P.L. Rosin, Training cellular automata for image processing. IEEE Trans. Image Process. 15(7), 2076–2087 (2006)
  32. C.R. Dyer, A. Rosenfeld, Parallel image processing by memory-augmented cellular automata. IEEE Trans. Pattern Anal. Mach. Intell. 3(1), 29–41 (1981)
  33. P.L. Rosin, Image processing using 3-state cellular automata. Comput. Vis. Image Underst. 114(7), 790–802 (2010)
  34. T. Ogura, T. Ikenaga, Real-time morphology processing using highly parallel 2-D cellular automata CAM². IEEE Trans. Image Process. 9(12), 2018–2026 (2000)

Copyright information

© The Author(s). 2019

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Traffic and Transportation, Beijing Jiaotong University, Beijing, China
  2. School of Information and Electricity Engineering, Hebei University of Engineering, Handan, China
  3. School of Information Engineering, Handan College, Handan, China
