Ridge Polynomial Neural Network for Non-destructive Eddy Current Evaluation
Motivated by the slow learning of Multi-Layer Perceptrons (MLP), which rely on computationally intensive training algorithms such as backpropagation and can become trapped in local minima, this work deals with the Ridge Polynomial Neural Network (RPNN), which retains the fast learning and powerful mapping capabilities of single-layer Higher Order Neural Networks (HONN). The RPNN is constructed from Pi-Sigma units of increasing order and is used to solve inverse problems in electromagnetic Non-Destructive Evaluation (NDE). These inverse problems are addressed with an Artificial Neural Network (ANN) that builds polynomial functions approximating the correlation between the sought parameters and the field distribution over the surface. The inversion methodology combines the RPNN with the Finite Element Method (FEM): the RPNN serves as the inverse model, while FEM generates the data sets required to adjust the RPNN parameters. Each data set consists of input (normalized impedance, frequency) and output (lift-off, conductivity) pairs. In particular, this paper investigates a method for measuring the lift-off and the electrical conductivity of a conductive workpiece. The results show that the RPNN can solve non-destructive eddy current problems in place of traditional iterative inversion methods, which can be very time-consuming. The RPNN results clearly demonstrate accurate inversion with fast convergence on various noisy NDE signals.
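The core building block described above, a Pi-Sigma unit of order k, multiplies together k linear (ridge) combinations of the input, and an RPNN sums units of increasing order before a final activation. The sketch below is a minimal illustration of that forward pass; the function names (`pi_sigma`, `rpnn_forward`) and the weights are hypothetical, not the paper's implementation, and a full inverse model would be trained on FEM-generated (impedance, frequency) → (lift-off, conductivity) pairs rather than use random weights.

```python
import numpy as np

def pi_sigma(x, W, b):
    """Pi-Sigma unit of order k: the product of k linear (ridge)
    functions of the input. W: (k, d) weights, b: (k,) biases, x: (d,)."""
    return np.prod(W @ x + b)

def rpnn_forward(x, units):
    """Ridge Polynomial Network: sum Pi-Sigma units of increasing order,
    then squash with a sigmoid activation."""
    s = sum(pi_sigma(x, W, b) for W, b in units)
    return 1.0 / (1.0 + np.exp(-s))

# Illustrative example: 2 inputs (normalized impedance, frequency) and
# Pi-Sigma units of orders 1 and 2 with small random weights.
rng = np.random.default_rng(0)
units = [(0.1 * rng.standard_normal((k, 2)), np.zeros(k)) for k in (1, 2)]
y = rpnn_forward(np.array([0.8, 0.3]), units)  # scalar in (0, 1)
```

In practice one such network (or one output per Pi-Sigma bank) would be fitted per target parameter, with units of higher order added incrementally as training error plateaus.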
Keywords: Finite Element Method, Inverse Problem Solution, Eddy Current Problem, Functional Link Neural Network, Finite Element Method Solver