Time Window Width Influence on Dynamic BPTT(h) Learning Algorithm Performances: Experimental Study

  • V. Scesa
  • P. Henaff
  • F. B. Ouezdou
  • F. Namoun
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4131)


The purpose of the research addressed in this paper is to study the influence of the time window width in dynamic truncated BackPropagation Through Time, BPTT(h), learning algorithms. Statistical experiments based on the identification of a real biped robot's balancing mechanism are carried out to reveal the link between the window width and the stability, speed, and accuracy of learning. The choice of time window width is shown to be crucial for the convergence speed of the learning process and the generalization ability of the network. In addition, particular attention is paid to a divergence problem (gradient blow-up) observed under the assumption that the network parameters are constant along the window. The limit of this assumption is demonstrated, and the storage of the parameters' evolution, used as a solution to this problem, is detailed.
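As a rough illustration of the class of algorithm studied in the paper, the sketch below implements online truncated BPTT(h) for a hypothetical scalar recurrent unit trained to predict a sine wave one step ahead. The network, data, learning rate, and window width are invented for illustration and are unrelated to the paper's biped-robot experiments; the one idea carried over is that the parameters used at each step of the window are stored with the states, so the backward pass reuses them instead of assuming they stayed constant along the window.

```python
import numpy as np

def train_bptt_h(series, h=5, lr=0.05, steps=200):
    """Online one-step-ahead prediction: y_t estimates series[t + 1]."""
    w_in, w_rec, w_out = 0.5, 0.3, 0.5        # scalar weights (toy network)
    state = 0.0
    window = []                               # (x_k, prev_state_k, state_k, w_rec_k)
    losses = []
    for t in range(steps):
        x, target = series[t], series[t + 1]
        prev = state
        state = np.tanh(w_in * x + w_rec * prev)
        y = w_out * state
        # Store the recurrent weight *used at this step* together with the
        # states, rather than assuming parameters are constant over the window.
        window.append((x, prev, state, w_rec))
        if len(window) > h:
            window.pop(0)                     # truncate to the last h steps
        err = y - target
        losses.append(0.5 * err * err)
        # Backward pass through the window, newest step first.
        g_out = err * state
        g_in = g_rec = 0.0
        g_state = err * w_out                 # dLoss/dstate at the newest step
        for x_k, prev_k, s_k, w_rec_k in reversed(window):
            g_pre = g_state * (1.0 - s_k * s_k)   # through tanh
            g_in += g_pre * x_k
            g_rec += g_pre * prev_k
            g_state = g_pre * w_rec_k             # propagate with the stored weight
        w_in -= lr * g_in
        w_rec -= lr * g_rec
        w_out -= lr * g_out
    return losses

series = np.sin(0.3 * np.arange(301))
losses = train_bptt_h(series, h=5)
print(f"mean loss, first 50 steps: {np.mean(losses[:50]):.4f}")
print(f"mean loss, last 50 steps:  {np.mean(losses[-50:]):.4f}")
```

A larger h lets the gradient account for longer-range temporal dependencies at a higher cost per step, which is exactly the trade-off the paper's experiments quantify; with older parameters stored per step, the backward propagation remains consistent with the trajectory actually generated during online learning.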


Keywords: Learning Rate · Convergence Speed · Recurrent Neural Network · Window Width · Generalization Ability




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • V. Scesa (1)
  • P. Henaff (1)
  • F. B. Ouezdou (1)
  • F. Namoun (2)
  1. LISV, Université de Versailles St Quentin, Vélizy, France
  2. BIA company, Conflans Ste Honorine, France
