Neural Networks in Signal Processing

  • Rekha Govil
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 38)


Nuclear Engineering has matured during the last decade. Mathematical models and theories are used extensively in research and design, control, supervision, maintenance, and production, and signal processing is embedded in all of these applications. Artificial Neural Networks (ANNs), because of their nonlinear, adaptive nature, are well suited to applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques that model an underlying process from example data, and they can also adapt their model parameters to statistics that change with time. Algorithms in the framework of neural networks for signal processing have found new application potential in the field of Nuclear Engineering. This paper reviews the fundamentals of neural networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigenstructure-based processing, and generalization structures.


Neural Network · Multivariate Adaptive Regression Spline · Signal Processing Application · Adaptive Resonance Theory · Functional Expansion





Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Rekha Govil
  1. Department of Computer Science & Electronics, Banasthali Vidyapith, India
