# Establishment of cellular automata image model and its application in image dimension measurement


## Abstract

Aiming at improving the efficiency of image edge detection, an image edge detection method based on the least squares support vector machine (LSSVM) and cellular automata is proposed. First, a new kernel function is constructed from the Gaussian radial basis kernel and the polynomial kernel, which enables the LSSVM to fit the gray values of the image pixels accurately. The gradient operator of the image is then deduced, and the gradient values of the image are obtained by convolution with the image gray values. Finally, a cellular automaton evolves the gradient values according to designed local rules to locate and detect the image edges. Simulation results show that the proposed edge detection algorithm is effective and has higher detection performance than the Sobel and Canny algorithms.

## Keywords

Image analysis · Cellular automata · Image dimension measurement

## Abbreviations

- LSSVM
Least squares support vector machine

- SDCC
Self-driving cooperating car

- SVM
Support vector machine

## 1 Introduction

Cellular automata [1] are dynamic systems that are discrete in space, time, and state. Iterated mappings are similar to cellular automata in some respects: a mapping is discrete in its time evolution, but the values of its state variables are continuous, whereas cellular automata are discrete in state as well. If mappings correspond to ordinary differential equations among continuous dynamical systems, then cellular automata correspond to partial differential equations, whose time, space, and state variables are all continuous. Cellular automata have therefore become an extreme representative in the field of discrete dynamical systems. The space of a cellular automaton is composed of a series of cells arranged on a grid; the grid space can be one-dimensional, two-dimensional, or higher-dimensional, and it can be finite or infinite.
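To make the definition concrete, a minimal sketch of a one-dimensional cellular automaton follows (the elementary rule 90 with periodic boundaries; the rule choice is illustrative and not from this paper):

```python
import numpy as np

def evolve_rule90(state, steps):
    """Evolve a 1-D binary cellular automaton under rule 90:
    each cell becomes the XOR of its two neighbours (periodic boundary)."""
    history = [state.copy()]
    for _ in range(steps):
        left = np.roll(state, 1)    # left neighbour of every cell
        right = np.roll(state, -1)  # right neighbour of every cell
        state = left ^ right        # local rule applied to all cells at once
        history.append(state.copy())
    return np.array(history)

# A single seed cell in the middle unfolds into the Sierpinski triangle pattern.
initial = np.zeros(9, dtype=int)
initial[4] = 1
print(evolve_rule90(initial, 3))
```

Each row of the returned history is one time step; the synchronous, purely local update is the defining trait shared by the two-dimensional automata used later for edge detection.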

In order to analyze airport surface traffic, Xing et al. [2] proposed an airport surface traffic simulation method based on agent theory and the cellular automaton model. The traffic characteristics of the airport surface are analyzed, and the combination of agent theory with the cellular automaton model is discussed. Different parts of the airport surface traffic are each defined as one-dimensional cellular automata, and an aircraft agent is designed, yielding a traffic simulation model based on agent-cellular automata. Simulation results show that the method is simple, efficient, and accurate, and that it can reflect the autonomy and individual differences of aircraft taxiing in the airport traffic system. The analysis and assessment of the traffic situation has high application value on the airport surface.

Aiming at the complex structure of subways and the influence of pedestrians' subjective conditions on evacuation, several papers use the ant colony algorithm at the macro level to find the optimal evacuation path for large passenger flows under complex building structures, and at the micro level use the intelligent decision model of cellular automata. They thereby construct an underground pedestrian evacuation model fusing the ant colony algorithm with cellular automata and discuss the evacuation efficiency and individual states during pedestrian evacuation at a station of the Guangzhou Metro [3, 4, 5]. The simulation results can provide a reference for the preparation of contingency plans, staff training, passenger evacuation, and emergency drills.

Another major research area of cellular automata is autonomous driving [5, 6, 7, 8, 9, 10]. In these studies, a single-lane mixed traffic flow model of autonomous and manually driven vehicles is established, the cell unit length is refined, and the difference in reaction time between autonomous and manually driven vehicles is fully considered; because the reaction time of the automatic driving system is shorter, road capacity is greatly improved.

Image size measurement includes image acquisition [11, 12, 13], image preprocessing [14, 15, 16], image segmentation [17, 18, 19], feature extraction [20, 21, 22], and parameter measurement [23, 24, 25]. The focus of image acquisition is to obtain a high-quality image. Image preprocessing suppresses noise and enhances contrast, thus improving the quality of the source image. Image segmentation separates the object of interest from the background, and feature extraction and parameter measurement then yield parameters such as dimensions. Hardware such as the camera, together with the image processing algorithms, makes up the image size measurement system, on which the measurement work is carried out. Every component of the image size measurement system is indispensable, and the measurement algorithm is the most important.

In the last 20 years, image measurement technology has developed rapidly at home and abroad and has been widely applied to the measurement of geometric parameters of parts [26, 27, 28, 29, 30], the micro-size measurement and appearance detection of precision parts, aerial remote sensing images, light wave interferograms, the analysis of stress and strain field distribution maps, and so on. The high resolution, high sensitivity, wide spectral response, and large dynamic range of image size measurement systems are difficult for traditional measurement methods to match. Image measurement technology generally has no special requirements on the environment, and it is very suitable for tasks that traditional measurement methods find difficult to realize, especially as the online measurement link of an automatic production line. With the continuously rising requirements of manufacturing technology, image measurement technology is bound to advance to a higher level.

From the existing research results on part measurement, we find that the accuracy of today's measurements does not meet the requirements of current technological development: the measurement process is greatly disturbed by external noise, and shape recognition is difficult to realize. Cellular automata have been applied to image processing and have achieved rich results [31, 32, 33, 34]; in these studies, the introduction of cellular automata improved the accuracy and anti-interference ability of image processing. For this reason, we believe that introducing cellular automata into image size measurement can solve this problem well. Based on an edge detection method combining least squares and cellular automata, this paper establishes an efficient image edge detection model. First, a Gaussian radial basis-based LSSVM is established. The method is then combined with the cellular automaton model to evolve the gradient values according to local rules, achieving image edge location and detection. In simulation experiments, the detected image edges are accurate.

## 2 Proposed method

### 2.1 Image edge detection algorithm based on LSSVM-CA

#### 2.1.1 Construction of least squares support vector machine and its kernel function

The basic idea of SVM is to select a subset of the training set (the support vectors) that can separate the classes and provide favorable conditions for classifier generation, reducing computational complexity while preserving classification accuracy. Compared with classical classification algorithms, SVM has obvious advantages in many respects, such as avoiding overfitting and computing speed, but it also has some limitations. Researchers have therefore put forward many variant algorithms of the support vector machine, which mainly deform the formulation by adding function terms, variables, or coefficients; this section mainly describes one such variant.

#### 2.1.2 Least squares support vector machine (LSSVM)

Let the sample inputs be *n*-dimensional vectors, with *l* samples whose values in a region are represented as (*x*_{1}, *y*_{1}), ⋯, (*x*_{l}, *y*_{l}) ∈ *R*^{n} × *R*. First, a nonlinear map *ψ*(⋅) is used to map the samples from the original space *R*^{n} to the feature space *ψ*(*x*) = (*φ*(*x*_{1}), *φ*(*x*_{2}), ⋯, *φ*(*x*_{l})). The optimal decision function *y*(*x*) = *w* ⋅ *φ*(*x*) + *b* is constructed in this high-dimensional feature space. In this way, the nonlinear estimation function is transformed into a linear estimation function in the high-dimensional feature space. Using the principle of structural risk minimization, the weight vector *w* and the offset *b* are found by minimizing

\( \min \frac{1}{2}{\left\Vert w\right\Vert}^2+c{R}_{\mathrm{emp}} \)

Among them, ‖*w*‖^{2} is the complexity of the control model, *c* is the regularization parameter, and *R*_{emp} is the error control function, that is, insensitive loss function.

The commonly used loss functions include the linear *ε* loss function, the quadratic *ε* loss function, and the Huber loss function; different types of support vector machines can be constructed by selecting different loss functions. The loss function of the least squares support vector machine in the optimization target is the quadratic term of the error *ζ*_{i}. Therefore, the optimization problem is

\( \min J\left(w,\zeta \right)=\frac{1}{2}{\left\Vert w\right\Vert}^2+\frac{c}{2}\sum_{i=1}^l{\zeta}_i^2 \)

s.t: *y*_{i} = *φ*(*x*_{i}) ⋅ *w* + *b* + *ζ*_{i}, *i* = 1, ⋯⋯, *l*.

The Lagrangian is constructed to solve this constrained problem, among which *a*_{i}, *i* = 1, ⋯, *l*, are the Lagrange multipliers.

From the optimality conditions, *a*_{i} = *cζ*_{i}, and *K*(*x*_{i}, *x*_{j}) = *ϕ*(*x*_{i}) ⋅ *ϕ*(*x*_{j}), where *K*(*x*_{i}, *x*_{j}) is a symmetric function satisfying the Mercer condition. According to (5), the optimization problem is transformed into solving the linear equations

\( \left[\begin{array}{cc}0& {\mathbf{1}}^{\mathrm{T}}\\ {}\mathbf{1}& K+{c}^{-1}I\end{array}\right]\left[\begin{array}{c}b\\ {}a\end{array}\right]=\left[\begin{array}{c}0\\ {}y\end{array}\right] \)
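A minimal numeric sketch of this closed-form solve follows. A plain RBF kernel and illustrative parameters `sigma` and `c` are assumed; the paper's combined Gaussian-polynomial kernel is not reproduced here:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian radial basis kernel K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, c=10.0, sigma=1.0):
    """Solve the LSSVM linear system
        [ 0      1^T       ] [b]   [0]
        [ 1   K + c^{-1} I ] [a] = [y]
    for the bias b and the multipliers a."""
    l = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((l + 1, l + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(l) / c
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # b, a

def lssvm_predict(X_train, b, a, X_new, sigma=1.0):
    """Decision function y(x) = sum_i a_i K(x_i, x) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ a + b

# Fit the gray values of a tiny hypothetical 1-D neighbourhood as a sanity check.
X = np.array([[-1.0], [0.0], [1.0]])
y = np.array([10.0, 20.0, 12.0])
b, a = lssvm_fit(X, y, c=1e6, sigma=1.0)
print(lssvm_predict(X, b, a, X))  # close to the training values for large c
```

For large *c* the ridge term *c*^{−1}*I* vanishes and the fit interpolates the samples, which is the behavior the image fitting step relies on.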

The image fitting takes the *M* × *N* neighborhood of the pixel as the processing unit, typically with 3 ≤ *M* = *N* ≤ 9. The offsets between the pixels in the neighborhood and its center are expressed as Δ*r* and Δ*c* in the horizontal and vertical directions, respectively, with |Δ*r*| ≤ \( \left\lfloor \frac{M}{2}\right\rfloor \) and \( \left|\Delta c\right|\le \left\lfloor \frac{M}{2}\right\rfloor \).

The coordinates of all pixels in the neighborhood can be expressed as (*r* + Δ*r*, *c* + Δ*c*) and used as the input of the LSSVM. Equivalently, the center coordinate (*r*, *c*) can be subtracted, so that the input (*r* + Δ*r*, *c* + Δ*c*) is replaced by the offset (Δ*r*, Δ*c*).

Once *M* and *N* are known, the constant vector space used as input can be obtained: {(Δ*r*, Δ*c*) : |Δ*r*| ≤ \( \left\lfloor \frac{M}{2}\right\rfloor \), |Δ*c*| ≤ \( \left\lfloor \frac{M}{2}\right\rfloor \)}. The nonlinear relationship between the input vectors and the pixel gray values can then be constructed by the LSSVM. Formula (3) is considered as a set of linear equations with *a* and *b* as the unknowns, and the matrices *A* and *B* are defined accordingly.

The structure of *A* and *B* is independent of the output *y*, that is, independent of the gray values of the pixels; it is related only to the input quantities, the type of kernel function, and *γ*. From the above, the number of samples in the constant vector space is known and fixed, and *γ* is constant, so once the kernel function is determined, the matrices *A* and *B* can be obtained in advance. The matrices *A* and *B* only need to be solved once and are globally universal, i.e., essentially constant matrices. Next, the selected kernel function can be used for image fitting. Let Row × Col be the constant vector space of a symmetric pixel neighborhood of the image to be processed, e.g., Row = {−1, 0, 1} and Col = {−1, 0, 1}. The kernel function constructed above yields the image gray fitting function on this constant vector space as follows:

*f*(*r*, *c*) is the gray level estimate at the point (*r*, *c*), and (*r*_{i}, *c*_{i}) are the pixel coordinates used as input. For the point (*r*, *c*), the first-order partial derivatives of the image gray fitting function in the horizontal and vertical directions can be obtained:

The operators *W*_{r} and *W*_{c} are introduced for these calculations. Like the matrices *A* and *B*, *W*_{r} and *W*_{c} are independent of the gray values *y* of the pixels and are related only to the input quantities and the type of kernel function, so they can be obtained in advance as constant matrices. They are reshaped into square arrays with the same size as the Row × Col neighborhood of the pixel (*r*, *c*), and *W*_{r} and *W*_{c} thereby become gradient operators of the image. Convolving the gradient operators *W*_{r} and *W*_{c} with the image matrix *I*(*r*, *c*) yields the gradient value matrices GH(*r*, *c*) and GV(*r*, *c*) in the horizontal and vertical directions of the image.
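A sketch of this convolution step, assuming illustrative 3 × 3 operators in place of the kernel-derived *W*_{r} and *W*_{c} (whose actual entries depend on the chosen kernel function and *γ*):

```python
import numpy as np

def convolve2d_same(image, kernel):
    """Minimal 'same'-size 2-D convolution with zero padding at the border."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = (padded[r:r + kh, c:c + kw] * flipped).sum()
    return out

# Illustrative 3x3 operators standing in for the precomputed W_r and W_c.
W_r = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
W_c = W_r.T

image = np.zeros((6, 6))
image[:, 3:] = 255.0  # a vertical step edge
GH = convolve2d_same(image, W_r)  # horizontal-direction gradient matrix
GV = convolve2d_same(image, W_c)  # vertical-direction gradient matrix
print(np.abs(GV).max())  # the step edge shows up strongly in GV
```

Because the operators are constant matrices, this convolution is the only per-image work; the expensive kernel computations happen once, offline.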

#### 2.1.3 An algorithm of image edge detection based on CA

▽*f*(*x*, *y*) = \( \frac{\partial f}{\partial x}i+\frac{\partial f}{\partial y}j \) is the gradient of the image, and ▽*f*(*x*, *y*) contains the grayscale change information. The magnitude *e*(*x*, *y*) of the gradient ▽*f*(*x*, *y*) can be used as an edge detection operator. To simplify the calculation, *e*(*x*, *y*) can also be defined as the sum of the absolute values of the partial derivatives *f*_{x} and *f*_{y}:

\( e\left(x,y\right)=\left|{f}_x\left(x,y\right)\right|+\left|{f}_y\left(x,y\right)\right| \)

On the basis of the above theory, many algorithms have been proposed. The commonly used edge detection methods include the Roberts, Sobel, Prewitt, Canny, and Laplace edge detection operators. However, the traditional edge detection methods have some drawbacks in time and complexity, so this paper uses a CA to detect the image edge.

According to the characteristics of an edge, the gradient value of a pixel on the edge has its particularity (it should be the largest in its corresponding neighborhood), so it is important information for edge detection. When using a 2D CA to detect the edges of a gray image, the matrix of image gradient values obtained from the above formula is regarded as the processing object, and the gradient values are mapped into the cellular space as the initial state values, so the finite state set is *S* = {0, …, 255}. The neighborhood is of Von Neumann type, so the size *N* of the cell neighborhood is 4. The evolution rule *R* is the key to the algorithm: the rule is applied repeatedly, and when the evolution stops, the final edge detection result is obtained. First, according to their gradient values, the cells are divided into four categories, expressed as GC_{i}, *i* ∈ {1, 2, 3, 4}.

- 1)
Classify all the cells in the central cell's neighborhood into the corresponding categories.

- 2)
Calculate the number of cells belonging to class GC_{i} in the neighborhood, expressed as Num(GC_{i}).

- 3)
Call the maximum value algorithm to get the class that contains the maximum number of cells in the neighborhood and assign it to GC_{majority}.

- 4)
Define S_{t}(GC_{majority}) to record the states at time *t* of the neighborhood cells belonging to the GC_{majority} class.

- 5)
Obtain S_{t}(GC_{majority}) and Sum(S_{t}(GC_{majority})).

- 6)
Evaluate the Boolean expression Bool = (max(Num(GC_{i})) = GC_{majority}) and (Sum(S_{t}(GC_{majority})) > 254).

- 7)
According to the result of Bool, assign the gray value of the center cell (*C*_{cx}, *C*_{cy}) at time *t* + 1.

- 8)
The evolution reaches a stable state and ends when N_{t + 1}(*C*_{cx}, *C*_{cy}) = N_{t}(*C*_{cx}, *C*_{cy}); the edge pixel set of the image is then obtained.
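A simplified sketch of one evolution step under the rule above. The class boundaries GC_{1}–GC_{4} are not specified in the paper, so fixed illustrative thresholds are assumed here:

```python
import numpy as np

def classify(g):
    """Assumed 4-way gradient classification by fixed thresholds
    (the paper's exact class boundaries GC1..GC4 are not given)."""
    return min(3, int(g) // 64)  # maps 0..255 onto classes 0..3

def ca_edge_step(grad):
    """One synchronous CA step: a cell stays an edge candidate when the
    majority class of its Von Neumann neighbourhood is the strongest one
    and the summed state of that class exceeds the threshold 254."""
    rows, cols = grad.shape
    nxt = np.zeros_like(grad)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neigh = [grad[r - 1, c], grad[r + 1, c],
                     grad[r, c - 1], grad[r, c + 1], grad[r, c]]
            classes = [classify(g) for g in neigh]
            counts = np.bincount(classes, minlength=4)
            major = int(np.argmax(counts))
            state_sum = sum(g for g, k in zip(neigh, classes) if k == major)
            nxt[r, c] = 255 if (major == 3 and state_sum > 254) else 0
    return nxt

def ca_edges(grad, max_iter=20):
    """Iterate the rule until the configuration stops changing (step 8)."""
    cur = grad.copy()
    for _ in range(max_iter):
        nxt = ca_edge_step(cur)
        if np.array_equal(nxt, cur):
            break
        cur = nxt
    return cur

grad = np.zeros((5, 5))
grad[:, 2] = 255.0  # a vertical line of high gradient values
print(ca_edge_step(grad)[2, 2])  # the line's interior survives as an edge candidate
```

The update is synchronous (all cells read the time-*t* state before any cell is written), which is what distinguishes a CA evolution from a sequential scan.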

### 2.2 Application of LSSVM-CA model in image dimension measurement

In recent years, image measurement technology has developed rapidly at home and abroad. It has been widely used in geometric parameter measurement of parts, micro-size measurement and appearance detection of precision parts, aerial remote sensing images, light wave interferograms, analysis of stress and strain field distribution maps, and many other areas. The high resolution, high sensitivity, wide spectral response, and large dynamic range of image size measurement systems are difficult for traditional measurement methods to match. Different image processing and analysis methods, as well as different detection methods and calculation formulas, introduce different errors. In this paper, the LSSVM-CA model is applied to measure the image size of precision parts to verify whether the appearance size is up to standard. On the basis of the edge detection of the LSSVM-CA model, Gaussian curve fitting interpolation in the gradient direction can be used for sub-pixel localization, and the accuracy is improved significantly.

Surface fitting is a method with high accuracy, low computation, and strong anti-noise performance, which has been widely used in various scenarios, such as rock deformation measurement, electronic deformation measurement, and repeat-location technology based on palmprint image correlation matching. The premise for successfully calculating sub-pixel displacement by the surface fitting method is that the whole-pixel matching point of the template is correctly found in the whole-pixel search stage; once there is an error in the whole-pixel matching point search, the displacement obtained in the sub-pixel measurement phase is meaningless. Using different fitting functions to fit the correlation coefficient matrix influences the results. The commonly used fitting functions include the quadratic function, the cubic function, and the Gaussian function. In this paper, the Gaussian function is used for surface fitting.

The Gaussian surface is

\( f\left(x,y\right)=A\exp \left(-\left(\frac{{\left(x-{c}_0\right)}^2}{2{\sigma}_0^2}+\frac{{\left(y-{c}_1\right)}^2}{2{\sigma}_1^2}\right)\right) \)

where *A* is the amplitude, and *c*_{0}, *c*_{1} and *σ*_{0}, *σ*_{1} are the means and standard deviations along the *x*-axis and *y*-axis, respectively. To solve for the coefficients of *f*(*x*, *y*), taking logarithms on both sides of the above equation gives:

Using the positions in the fitting window and the corresponding correlation coefficients, *n* × *n* equations can be obtained. Using the least squares method, the coefficients *λ*_{0}, *λ*_{1}, *λ*_{2}, *λ*_{3}, *λ*_{4}, *λ*_{5} of the equations can be calculated, from which the coefficients of the Gaussian function are obtained.
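A sketch of this log-linearized least squares fit. It assumes the six coefficients multiply the basis {1, *x*, *y*, *x*², *y*², *xy*} (the cross term is near zero for an axis-aligned Gaussian); the recovered peak location is the sub-pixel estimate:

```python
import numpy as np

def fit_gaussian_peak(coords, values):
    """Log-linearised Gaussian surface fit:
    ln f = lam0 + lam1*x + lam2*y + lam3*x^2 + lam4*y^2 + lam5*x*y,
    solved by ordinary least squares. The sub-pixel peak follows from
    the linear and quadratic terms."""
    x, y = coords[:, 0], coords[:, 1]
    D = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    lam, *_ = np.linalg.lstsq(D, np.log(values), rcond=None)
    c0 = -lam[1] / (2.0 * lam[3])        # peak (mean) along x
    c1 = -lam[2] / (2.0 * lam[4])        # peak (mean) along y
    s0 = np.sqrt(-1.0 / (2.0 * lam[3]))  # sigma_0
    s1 = np.sqrt(-1.0 / (2.0 * lam[4]))  # sigma_1
    return c0, c1, s0, s1

# Sample a synthetic Gaussian on a 5x5 window and recover its sub-pixel centre.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0), indexing="ij")
coords = np.column_stack([xs.ravel(), ys.ravel()])
f = 2.0 * np.exp(-((coords[:, 0] - 2.3) ** 2 / (2 * 1.1 ** 2)
                   + (coords[:, 1] - 1.7) ** 2 / (2 * 0.9 ** 2)))
c0, c1, s0, s1 = fit_gaussian_peak(coords, f)
print(round(c0, 3), round(c1, 3))  # ≈ 2.3, 1.7
```

Because the model is exactly quadratic in the log domain, the fit recovers the centre at sub-pixel precision even though the samples sit on integer pixel coordinates.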

## 3 Experimental results

### 3.1 Image acquisition and preprocessing

The parts are verified with precision rivets of different sizes, as shown in the figure below.

The purpose of image smoothing is to suppress noise and improve image quality, which can be carried out in spatial and frequency domains. The commonly used methods include neighborhood averaging, spatial filtering, and median filtering. The neighborhood averaging method is a local spatial processing method, which uses the gray average of each pixel in the pixel neighborhood to replace the original gray value of the pixel, so that the image can be smoothed. Because the noise in the image belongs to the high-frequency component, the spatial filtering method adopts the low-pass filtering method to remove the noise to realize the image smoothing.
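A minimal sketch of the neighborhood averaging method described above (zero padding at the border is an assumption):

```python
import numpy as np

def neighborhood_average(image, k=3):
    """Replace each pixel with the mean of its k x k neighbourhood;
    border pixels use a zero-padded frame."""
    p = k // 2
    padded = np.pad(image.astype(float), p)
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = padded[r:r + k, c:c + k].mean()
    return out

noisy = np.full((5, 5), 100.0)
noisy[2, 2] = 200.0  # an isolated noise spike
smoothed = neighborhood_average(noisy)
print(smoothed[2, 2])  # the spike is pulled toward its neighbours' level
```

The spike is attenuated but smeared into its neighbours, which is exactly the blurring drawback that motivates the median filter below.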

Median filter is a nonlinear processing technique, which can suppress the noise in the image. It is based on the characteristics of images: noise often appears in the form of isolated points, the number of pixels corresponding to these points is very small, and the image is made up of small blocks with more pixels and larger area [12].

- (1)
The template is roamed in the graph, and the center of the template is overlapped with a pixel position in the graph.

- (2)
Reading the gray value of each corresponding pixel under the template.

- (3)
Sort these gray values in ascending order.

- (4)
Take the middle (median) value of the sorted sequence.

- (5)
Assign this median value to the pixel at the center of the template.
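The five steps above can be sketched directly (replicated-border padding is an assumption, so the template always fits):

```python
import numpy as np

def median_filter(image, k=3):
    """Median filter following the steps above: roam a k x k template over the
    image, read the gray values under it, sort them, take the middle value,
    and assign it to the pixel at the template's centre."""
    p = k // 2
    padded = np.pad(image, p, mode="edge")  # replicate borders
    out = np.empty_like(image)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            window = np.sort(padded[r:r + k, c:c + k].ravel())
            out[r, c] = window[window.size // 2]  # the median value
    return out

img = np.full((5, 5), 100)
img[2, 2] = 255  # an isolated noise point
print(median_filter(img)[2, 2])  # -> 100: the outlier is removed entirely
```

Unlike the averaging filter, the outlier leaves no trace: the median of any window containing it is still a background value.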

It can be seen from the above steps that the main function of the median filter is to replace the value of a pixel that differs greatly from its surrounding pixels with a value close to theirs, so its ability to eliminate isolated noise pixels is very strong. Because it does not simply take the mean, it produces less blurring. In other words, median filtering can eliminate noise while maintaining the details of the image [13]. Examples are as follows:

## 4 Discussion

### 4.1 Image segmentation

As can be seen from the results in Fig. 3, since the distinction between the part and the background image is large, the parts can be well segmented regardless of whether the noise is added or not, and thus, the size of the part can be seen. For measurement, the choice of image segmentation technology is not a major factor affecting measurement accuracy.

### 4.2 Edge detection algorithm

The essence of edge detection is to extract the boundary line between object and background. The edge is defined as the edge of the image where the gray level changes dramatically. The change of image gray level can be reflected by the gradient of image gray distribution, so we can obtain edge detection operator by using local image differential technique. The classical edge detection method is to construct the edge detection operator for a small neighborhood of the pixel in the original image. The following is the theoretical analysis of several classical edge detection operators, and their performance characteristics are compared and evaluated.

As shown in Fig. 5, after adding noise, the edge detection performance is clearly lower than on the noise-free image; comparing the results, however, those extracted by the method proposed in this paper are obviously the most similar to the noise-free results. The edges in the second and third columns of the figure are greatly disturbed by noise.

### 4.3 Parameter measurement and dimension identification

After preprocessing the images obtained by the image acquisition equipment, a high-quality rivet image is obtained. In order to complete the measurement of the rivets, rivets must be recognized from the obtained images. Then, the exact coordinate set of rivet contour is obtained by region contour tracking, which lays a good foundation for rivet measurement in the next step. In this chapter, the acquisition process of rivet contour coordinate set is described in detail, which is divided into three parts: rivet image segmentation, rivet image analysis, and rivet image contour tracking.

The standard 2D rivet image is an axisymmetric figure. When the rivet is placed horizontally, the length of its external rectangle equals the full length of the rivet; however, the environment of an industrial automation production site is very complex, which may result in the rivet being slanted. Therefore, this paper locates the principal axis by finding the minimum external rectangle of the rivet. This method directly yields the full length of the rivet and eliminates the influence of rivet orientation on the detection process.

- 1)
The external rectangular area of the rivet contour is obtained, that is, the rivet contour is approximated by placing the rectangle horizontally around the rivet.

- 2)
Determine the center of rotation of the image. With the center of the external rectangle of the initial rivet contour as the center of rotation, the rivet contour is rotated around this point *n* times, with a step of 90/*n* degrees per rotation.

- 3)
The rotation angle corresponding to the minimum area among the external rectangles obtained at each rotation is selected, and the coordinates of the upper-left and lower-right corners of that rectangle are calculated. The minimum external rectangle of the rivet profile is then determined by rotating these two corner coordinates back.
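A sketch of this rotating-rectangle search in pure NumPy; the rotation center (contour centroid) and step count are illustrative choices:

```python
import numpy as np

def min_area_rect(points, n=90):
    """Rotating-bounding-box search, as in steps 1-3 above: rotate the contour
    in 90/n-degree increments, take the axis-aligned box at each angle, and
    keep the rotation whose box area is smallest."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)  # rotation centre (contour centroid)
    best = (np.inf, 0.0, None)  # (area, angle in degrees, box corners)
    for step in range(n):
        theta = np.deg2rad(step * 90.0 / n)
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta), np.cos(theta)]])
        rot = (pts - center) @ R.T          # rotate contour by theta
        lo, hi = rot.min(axis=0), rot.max(axis=0)
        area = np.prod(hi - lo)             # axis-aligned box area
        if area < best[0]:
            best = (area, np.rad2deg(theta), (lo, hi))
    return best

# A 4x2 rectangle tilted by 30 degrees: the search should recover area ~8.
w, h, phi = 4.0, 2.0, np.deg2rad(30.0)
corners = np.array([[-w/2, -h/2], [w/2, -h/2], [w/2, h/2], [-w/2, h/2]])
Rphi = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
tilted = corners @ Rphi.T
area, angle, _ = min_area_rect(tilted)
print(round(area, 2))  # close to the true area 8.0
```

Only 90 degrees need to be searched because the axis-aligned bounding box repeats with that period; the box side along the major axis then gives the rivet's full length regardless of how it lies.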

Image measurement technology generally has no special requirements on the environment, and it is very suitable for tasks that traditional measurement methods find difficult to realize, especially as the online measurement link of an automatic production line. With the continuously increasing requirements of manufacturing technology, image measurement technology is bound to advance to a higher level. Combined with the development of image measurement technology and the current situation in the field of measurement, the following trends can be expected: measurement accuracy develops from the micron to the nanometer scale; the measurement range extends from length and area measurement to shape recognition; the measurement method advances from off-line to real-time online measurement; and the measurement system moves from a single measurement function to an intelligent, automatic system integrating measurement and control. In short, image measurement technology must achieve high accuracy, high speed, and high efficiency. Therefore, intelligent measurement systems with fast measuring speed, high precision, and high efficiency will become an important direction in the future development of image measurement technology.

| SNR/dB | LSSVM-CA | Canny | Sobel |
|---|---|---|---|
| 30 | 0.993 | 0.912 | 0.900 |
| 20 | 0.941 | 0.988 | 0.934 |
| 10 | 0.997 | 0.989 | 0.999 |

1) The measurement results differ for rivets of different sizes. The experimental results show that the larger the rivet, the more accurate the measurement: the measurement error is about 0.03–0.97 mm, and the relative error for small rivets is 0.107, which affects the overall measurement result more negatively than for large-size rivets. This may be caused by the focus or the placement of the image acquisition device.

| Measuring parameter | Normal placement | Inclined placement | Absolute error |
|---|---|---|---|
| Overall length | 660.0157 | 660.1074 | 0.0917 |
| Riveting length | 400.0116 | 399.1569 | 0.8547 |
| Long diameter | 170.0395 | 171.0325 | 0.993 |
| Short diameter | 120.0659 | 120.1796 | 0.1137 |

Measurement results of rivet parameters in different placement modes

The results show that, regardless of rivet size, the average accuracy with frontal placement is higher than with side placement. Therefore, to improve the measurement results, the parts can be handled in an early stage, for example by cleaning the products and clamping the workpieces, so as to avoid errors caused by dirty parts and incorrect placement.

## 5 Conclusions

- 1)
In the application of measurement algorithm, we also need to find the best balance point between the calculation speed and the calculation precision. From the practical point of view of the algorithm, we can use hardware to realize the algorithm with high time complexity.

- 2)
Based on the image processor, the processing speed of the system can be faster and better meet the requirements of real-time processing online measurement.

- 3)
The measurement system can only be used to measure the geometric dimensions of two-dimensional images. It can be considered to collect picture information by using double cameras or even multiple cameras. The system can be used to measure 3D images and broaden the application field of image size measurement.

To sum up, this paper has carried out some analysis and research on the shape and size measurement of precision industrial parts, which has certain theoretical and practical significance. Although a great deal of work has been done, due to time constraints the system needs further improvement. Camera image correction, the measurement of parts with different shapes in geometric dimension measurement, the realization of an online detection function, and so on will be the focus of future research.

## Notes

### Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

### Funding

Not applicable.

### Availability of data and materials

Please contact author for data requests.

### Authors’ contributions

All authors take part in the discussion of the work described in this paper. The author FP wrote the first version of the paper. The authors SW and SL did the part of experiments of the paper and revised it in different versions, respectively. All authors read and approved the final manuscript.

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

- 1. S. Wolfram, Statistical mechanics of cellular automata. Rev. Mod. Phys. **55**(3), 601–644 (1983)
- 2. X. Zhiwei, L. Shizhen, T. Yunqi, et al., Airport scene traffic simulation based on agent-cell automata. J. Syst. Simul. **30**(3), 857–865 (2018)
- 3. D. Qidong, C. Moxiang, A. Xu, Pedestrian evacuation model of subway based on ant colony cellular automata. J. Comput. (2), 18–21 (2018)
- 4. G. Yan, Research on Several Problems of Urban Rail Transit Transfer and Emergency Based on the ACP Method (Beijing Jiaotong University, Beijing, 2013)
- 5. L. Kun, Research on Rail Transit Path Selection Model Based on Ant Colony Algorithm and its Application (Beijing Jiaotong University, Beijing, 2011)
- 6. B. Straatman, R. White, G. Engelen, Towards an automatic calibration procedure for constrained cellular automata. Comput. Environ. Urban Syst. **28**(1), 149–170 (2004)
- 7. J.D. Lohn, J.A. Reggia, Automatic discovery of self-replicating structures in cellular automata. IEEE Trans. Evol. Comput. **1**(3), 165–178 (2002)
- 8. A. Wuensche, Classifying cellular automata automatically: finding gliders, filtering, and relating space-time patterns, attractor basins, and the Z parameter. Complexity **4**(3), 47–66 (2015)
- 9. R. Ravichandran, N. Ladiwala, J. Nguyen, M. Niemier, S.K. Lim, Automatic cell placement for quantum-dot cellular automata, in Proceedings of the 14th ACM Great Lakes Symposium on VLSI (ACM, 2004), pp. 332–337
- 10. A. Adamatzky, Automatic programming of cellular automata: identification approach. Kybernetes **26**(2), 126–135 (1997)
- 11. S.E. Reichenbach, Characterizing digital image acquisition devices. Opt. Eng. **30**(2), 170 (1991)
- 12. P. Vuylsteke, A. Oosterlinck, Range image acquisition with a single binary-encoded light pattern. IEEE Trans. Pattern Anal. Mach. Intell. **12**(2), 148–164 (1990)
- 13. R. Boellaard, Standards for PET image acquisition and quantitative data analysis. J. Nucl. Med. **50**(Suppl 1), 11S–20S (2009)
- 14. K.S. Fu, A. Rosenfeld, Pattern recognition and image processing. IEEE Trans. Comput. **C-25**(12), 1336–1346 (1976)
- 15. R. Gross, V. Brajovic, An image preprocessing algorithm for illumination invariant face recognition, in International Conference on Audio- and Video-Based Biometric Person Authentication (2003)
- 16. R. Gross, An image preprocessing algorithm for illumination invariant face recognition. AVBPA, 10–18 (2001)
- 17. L. Grady, Random walks for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. **28**(11), 1768–1783 (2006)
- 18. J. Shi, J. Malik, Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. **22**(8), 888–905 (2000)
- 19. P.F. Felzenszwalb, D.P. Huttenlocher, Efficient graph-based image segmentation. Int. J. Comput. Vis. **59**(2), 167–181 (2004)
- 20. M.S. Nixon, A.S. Aguado, Feature Extraction and Image Processing (2002), pp. 37–82
- 21. A.L. Yuille, P.W. Hallinan, D.S. Cohen, Feature extraction from faces using deformable templates. Int. J. Comput. Vis. **8**(2), 99–111 (1992)
- 22. O.D. Trier, A.K. Jain, T. Taxt, Feature extraction methods for character recognition - a survey. Pattern Recognit. **29**(4), 641–662 (1996)
- 23. T. Senjyu, K. Kinjyou, K. Uezato, Parameter measurement of PMSM using adaptive identification. Bulletin of the Faculty of Engineering, University of the Ryukyus (62), 33–39 (2001)
- 24. Y.N. Lin, C.L. Chen, Automatic IM parameter measurement under sensorless field-oriented control. IEEE Trans. Ind. Electron. **46**(1), 111–118 (1999)
- 25. Y.N. Lin, C.L. Chen, Automatic IM parameter measurement under sensorless field-oriented control. IEEE Trans. Ind. Electron. **46**(1), 111–118 (1999)
- 26. M. Zapp, H. Janocha, Geometry measurement as integrated part of the manufacturing process using a moved CCD camera, in Videometrics IV, vol. 2598 (International Society for Optics and Photonics, 1995), pp. 350–362
- 27. C.D. Montemagno, L.J. Pyrak-Nolte, Fracture network versus single fractures: measurement of fracture geometry with X-ray tomography. Phys. Chem. Earth Part A Solid Earth Geodesy **24**(7), 575–579 (1999)
- 28. X. Wang, X.D. Zhang, Laser and vision measurement research on parameters of miniature quartz plate-sensitive glass part. Spectrosc. Spectral Anal. **34**(6), 1450–1455 (2014)
- 29. Y. Tsai, J. Wu, Y. Wu, et al., Automatic roadway geometry measurement algorithm using video images, in Image Analysis and Processing – ICIAP 2005 (Springer, Berlin Heidelberg, 2005)
- 30. M.R. Arthington, C. Cleaver, J. Allwood, S. Duncan, Real-time measurement of ring-rolling geometry using low-cost hardware, in 2014 UKACC International Conference on Control (2014), pp. 603–608
- 31. P.L. Rosin, Training cellular automata for image processing. IEEE Trans. Image Process. **15**(7), 2076–2087 (2006)
- 32. C.R. Dyer, A. Rosenfeld, Parallel image processing by memory-augmented cellular automata. IEEE Trans. Pattern Anal. Mach. Intell. **3**(1), 29–41 (1981)
- 33. P.L. Rosin, Image processing using 3-state cellular automata. Comput. Vis. Image Underst. **114**(7), 790–802 (2010)
- 34. T. Ogura, T. Ikenaga, Real-time morphology processing using highly parallel 2-D cellular automata CAM². IEEE Trans. Image Process. **9**(12), 2018–2026 (2000)

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.