
Computational Optimization and Applications, Volume 38, Issue 2, pp. 195–216

A recursive algorithm for nonlinear least-squares problems

  • A. Alessandri
  • M. Cuneo
  • S. Pagnan
  • M. Sanguineti

Abstract

The solution of nonlinear least-squares problems is investigated. The asymptotic behavior is studied and conditions for convergence are derived. To deal with such problems in a recursive and efficient way, an algorithm based on a modified extended Kalman filter (MEKF) is proposed. The error of the MEKF algorithm is proved to be exponentially bounded. Batch and iterated versions of the algorithm are also given. As an application, the algorithm is used to optimize the parameters of certain nonlinear input–output mappings. Simulation results on the interpolation of real data and the prediction of chaotic time series are presented.
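The MEKF itself is developed in the full paper; as a rough illustration of the recursive idea described above, the following is a minimal sketch, assuming a standard EKF-style parameter update in which the unknown parameters are treated as the state of a constant system. The model h, its Jacobian jac, and the matrices P, Q, R are hypothetical choices made for the example, not taken from the paper.

```python
# Minimal sketch (not the authors' MEKF): recursive nonlinear least squares
# via an EKF-style parameter update, processing one sample at a time.
import numpy as np

def ekf_step(theta, P, x_t, y_t, h, jac, Q, R):
    """Incorporate the sample (x_t, y_t) into the estimate (theta, P)."""
    P = P + Q                                  # random-walk model for the parameters
    H = jac(x_t, theta)                        # 1 x p Jacobian of h w.r.t. theta
    S = H @ P @ H.T + R                        # innovation covariance (scalar here)
    K = P @ H.T / S                            # Kalman gain, p x 1
    theta = theta + (K * (y_t - h(x_t, theta))).ravel()
    P = (np.eye(len(theta)) - K @ H) @ P
    return theta, P

# Toy example (illustrative only): fit y = a * exp(b * x) from noisy samples.
h   = lambda x, th: th[0] * np.exp(th[1] * x)
jac = lambda x, th: np.array([[np.exp(th[1] * x), th[0] * x * np.exp(th[1] * x)]])

rng = np.random.default_rng(0)
a_true, b_true = 2.0, -0.5
xs = np.linspace(0.0, 5.0, 200)
ys = a_true * np.exp(b_true * xs) + 0.05 * rng.standard_normal(xs.size)

theta = np.array([1.0, 0.0])                   # initial parameter guess
P = np.eye(2)                                  # initial parameter covariance
Q = 1e-8 * np.eye(2)                           # small process noise keeps P from collapsing
R = 0.05 ** 2                                  # assumed measurement-noise variance

for x_t, y_t in zip(xs, ys):
    theta, P = ekf_step(theta, P, x_t, y_t, h, jac, Q, R)

print(theta)                                   # should approach [2.0, -0.5]
```

Each sample updates the parameter estimate and its covariance in closed form, which is what makes such a scheme recursive: no pass over previously processed data is required.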

Keywords

Nonlinear programming · Nonlinear least squares · Extended Kalman filter · Recursive optimization · Batch algorithms



Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  • A. Alessandri (1)
  • M. Cuneo (2)
  • S. Pagnan (2)
  • M. Sanguineti (3)
  1. Department of Production Engineering, Thermoenergetics, and Mathematical Models (DIPTEM), University of Genoa, Genova, Italy
  2. Institute of Intelligent Systems for Automation (ISSIA-CNR), National Research Council of Italy, Genova, Italy
  3. Department of Communications, Computer and System Sciences (DIST), University of Genoa, Genova, Italy
