Simulation Studies of On-Line Identification of Complex Processes with Neural Networks

  • Francisco Cubillos
  • Gonzalo Acuña
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3972)


This paper analyzes several formulations for the recursive training of neural networks used in the on-line identification and optimization of nonlinear processes. The study considers feedforward networks (FFNN) adapted by three different methods: approximation of the inverse Hessian matrix, calculation of the inverse Hessian matrix with a sequential recursive Gauss-Newton algorithm, and calculation of the inverse Hessian matrix with a recursive Gauss-Newton algorithm. The study is completed with two network structures that are linear in the parameters — a radial basis function network and a principal components network — both trained with a recursive least squares algorithm. The corresponding algorithms are detailed, along with a comparative test consisting of the on-line estimation of a reaction rate. The results indicate that all the structures converged satisfactorily within a few iteration cycles, with the FFNN-type networks showing better prediction capacity at the cost of the greater computational effort of their recursive algorithms.
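The paper's linear-in-parameters structures (the radial basis function and principal components networks) are trained with recursive least squares. As a minimal sketch of that idea — not the authors' implementation, and with all names, the forgetting factor, and the toy data chosen here for illustration — the standard RLS update of an output-weight vector for a model that is linear in its parameters can be written as:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step for a model y ≈ phi·theta.

    theta : current parameter estimate, shape (n,)
    P     : current inverse-covariance-like matrix, shape (n, n)
    phi   : regressor vector for this sample (e.g. RBF activations), shape (n,)
    y     : measured output (scalar)
    lam   : forgetting factor in (0, 1]; 1.0 weights all samples equally
    """
    Pphi = P @ phi                         # P * phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    e = y - phi @ theta                    # a priori prediction error
    theta = theta + k * e                  # parameter update
    P = (P - np.outer(k, Pphi)) / lam      # covariance update
    return theta, P

# Illustrative use: recover the weights of a noiseless linear model on-line.
rng = np.random.default_rng(0)
true_w = np.array([2.0, 3.0])
theta, P = np.zeros(2), 1e6 * np.eye(2)    # large P0 = weak prior
for _ in range(50):
    phi = rng.standard_normal(2)
    theta, P = rls_update(theta, P, phi, phi @ true_w)
```

For an RBF network, `phi` would hold the fixed radial basis activations of the current input, so the same scalar update adapts the output weights sample by sample without storing past data.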


Radial Basis Function · Singular Vector · Recursive Least Squares · Radial Basis Neural Network
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Francisco Cubillos (1)
  • Gonzalo Acuña (1)
  1. Facultad de Ingeniería, Universidad de Santiago de Chile (USACH), Santiago, Chile