
Adaptive learning with guaranteed stability for discrete-time recurrent neural networks

Journal of Central South University of Technology

Abstract

To avoid unstable learning, a stable adaptive learning algorithm was proposed for discrete-time recurrent neural networks. Unlike dynamic gradient methods such as backpropagation through time and real-time recurrent learning, the proposed algorithm updates the weights of the recurrent neural network online on the basis of Lyapunov stability theory, so learning stability is guaranteed. By inverting the activation function of the recurrent neural network, the proposed algorithm can be easily implemented for varying nonlinear adaptive learning problems, and fast convergence of the adaptive learning process is achieved. Simulation experiments in pattern recognition show that with the proposed algorithm only 5 iterations are needed to store a 15 × 15 binary image pattern, and only 9 iterations are needed to realize an analog vector perfectly as an equilibrium state.
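The two ingredients the abstract names — inverting the activation function to turn pattern storage into a linear equilibrium equation, and a weight update whose stability follows from a Lyapunov argument — can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the network model (fully connected tanh units), the single stored pattern, and the normalized NLMS-style update (whose standard Lyapunov argument guarantees the equilibrium-equation error is non-increasing for 0 < η < 2) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target pattern to store as an equilibrium state of x(k+1) = tanh(W x(k)).
# Binary entries are scaled to +/-0.9 so that arctanh is well defined.
n = 25                                  # e.g. a flattened 5x5 binary image
target = 0.9 * rng.choice([-1.0, 1.0], size=n)

# Inverting the activation: x* is an equilibrium iff  W x* = arctanh(x*).
d = np.arctanh(target)                  # desired net input at the equilibrium

W = 0.1 * rng.standard_normal((n, n))   # initial weights

# Normalized error-driven update.  For 0 < eta < 2 the squared error
# ||d - W x*||^2 cannot increase (a standard Lyapunov/NLMS argument),
# so the adaptation cannot diverge.
eta = 1.0
for it in range(20):
    e = d - W @ target                  # equilibrium-equation error
    if np.linalg.norm(e) < 1e-10:
        break
    W += eta * np.outer(e, target) / (target @ target)

# Verify: the target is (numerically) a fixed point of the dynamics.
x = np.tanh(W @ target)
print(it, np.max(np.abs(x - target)))
```

With η = 1 and a single pattern, the update drives the equilibrium-equation error to zero almost immediately, which mirrors the very small iteration counts reported in the abstract; storing several patterns or handling time-varying targets would require cycling the update over all of them.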



Author information


Correspondence to Deng Hua  (邓华).

Additional information

Foundation item: Project (50276005) supported by the National Natural Science Foundation of China; Projects (2006CB705400, 2003CB716206) supported by the National Basic Research Program of China


About this article

Cite this article

Deng, H., Wu, Yh. & Duan, Ja. Adaptive learning with guaranteed stability for discrete-time recurrent neural networks. J Cent. South Univ. Technol. 14, 685–689 (2007). https://doi.org/10.1007/s11771-007-0131-z

