A Constrained Regularization Approach for Input-Driven Recurrent Neural Networks

Original Research

Abstract

We introduce a regularization approach for a class of input-driven recurrent neural networks in which the regularization of the network parameters is constrained to reimplement a previously recorded state trajectory. We derive a closed-form solution for this constrained regularization and show that the method can reimplement harvested dynamics. We investigate key properties of the method and of the regularized networks, and show that the regularization improves task-specific generalization on a combined prediction and non-linear sequence transduction task. The approach has both theoretical and practical implications.
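
The paper itself derives the closed-form solution; as a rough illustration of the general idea only, the following NumPy sketch harvests a state trajectory from a small echo-state-style tanh network and then computes, via the pseudoinverse (equivalently, via Lagrange multipliers), the minimal-norm weights that exactly reimplement the recorded trajectory. This is a hypothetical reconstruction under these assumptions, not the authors' published algorithm; all names and parameter values are illustrative.

# Illustrative sketch (assumed setup, not the paper's exact method):
# constrained regularization of an input-driven tanh network so that a
# previously recorded ("harvested") state trajectory is reimplemented.
import numpy as np

rng = np.random.default_rng(0)
n, m, T_steps = 50, 1, 30        # state size, input size, trajectory length

# Random input-driven network: x_{t+1} = tanh(W x_t + W_in u_t)
W = rng.normal(scale=0.1, size=(n, n))
W_in = rng.normal(scale=0.5, size=(n, m))
U = rng.uniform(-1.0, 1.0, size=(T_steps, m))

# 1) Harvest a state trajectory by driving the network with the inputs.
X = np.zeros((T_steps + 1, n))
for t in range(T_steps):
    X[t + 1] = np.tanh(W @ X[t] + W_in @ U[t])

# 2) Constrained regularization: among all weight matrices W' satisfying
#    W' s_t = atanh(x_{t+1}) for every recorded step (i.e. W' S = A), the
#    minimal-Frobenius-norm one has the closed form W' = A S^+, obtainable
#    via Lagrange multipliers or the Moore-Penrose pseudoinverse.
S = np.concatenate([X[:-1], U], axis=1).T            # (n+m, T) regressors
A = np.arctanh(np.clip(X[1:], -0.999999, 0.999999)).T  # target pre-activations
W_full = A @ np.linalg.pinv(S)                       # closed-form solution
W_reg, W_in_reg = W_full[:, :n], W_full[:, n:]

# 3) The regularized network reimplements the harvested trajectory ...
x = X[0].copy()
dev = 0.0
for t in range(T_steps):
    x = np.tanh(W_reg @ x + W_in_reg @ U[t])
    dev = max(dev, np.abs(x - X[t + 1]).max())

# ... while (typically) having a smaller weight norm than the original.
print(f"max trajectory deviation: {dev:.2e}")
print(f"||W||_F = {np.linalg.norm(W):.3f} -> ||W_reg||_F = {np.linalg.norm(W_reg):.3f}")

In this sketch the "regularization" is the choice of the minimal-norm solution among all weight settings satisfying the trajectory constraint: because the recorded trajectory is shorter than the number of regressors, the constraint is underdetermined, and the closed-form solution shrinks the weights while leaving the recorded dynamics unchanged.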

Keywords

Recurrent neural networks · Regularization · Reservoir computing

Copyright information

© Foundation for Scientific Research and Technological Innovation 2010

Authors and Affiliations

  1. Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Bielefeld, Germany
