Neural Network Training with Extended Kalman Filter Using Graphics Processing Unit

  • Peter Trebatický
  • Jiří Pospíchal
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5164)


The graphics processing unit (GPU) has evolved over the years into a powerful resource for general-purpose computing. In this article we present an implementation of the extended Kalman filter for recurrent neural network training in which the most computationally intensive tasks are performed on the GPU. This approach achieves a significant speedup of the neural network training process for larger networks.
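The abstract does not reproduce the update equations, but the EKF weight-update cycle it refers to (Kalman gain, weight correction, covariance update) can be sketched in a few lines. The following is a minimal NumPy illustration on a small feedforward network and a toy XOR task; it is an assumption-laden sketch, not the authors' code. The paper targets recurrent networks and offloads the large matrix products in these same steps to the GPU. All sizes, noise covariances, and helper names are illustrative.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation) of EKF-based
# neural network training: the flattened weight vector w is the filter
# state, and each training pair (x, d) is a measurement.

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 2, 5, 1
shapes = [(n_hid, n_in), (n_out, n_hid)]       # weight matrix shapes
sizes = [int(np.prod(s)) for s in shapes]
w = rng.normal(scale=0.5, size=sum(sizes))     # all weights in one state vector

def unpack(w):
    W1 = w[:sizes[0]].reshape(shapes[0])
    W2 = w[sizes[0]:].reshape(shapes[1])
    return W1, W2

def forward(w, x):
    W1, W2 = unpack(w)
    return W2 @ np.tanh(W1 @ x)

def jacobian(w, x, eps=1e-6):
    # Numerical Jacobian dy/dw (n_out x n_w); in practice analytic
    # derivatives (e.g. truncated BPTT for recurrent nets) are used.
    J = np.zeros((n_out, w.size))
    for i in range(w.size):
        wp = w.copy(); wp[i] += eps
        wm = w.copy(); wm[i] -= eps
        J[:, i] = (forward(wp, x) - forward(wm, x)) / (2 * eps)
    return J

P = np.eye(w.size) * 10.0   # weight (state) covariance
R = np.eye(n_out) * 0.1     # measurement-noise covariance
Q = np.eye(w.size) * 1e-6   # process-noise covariance

# Toy target: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
D = np.array([[0], [1], [1], [0]], float)

for epoch in range(100):
    for x, d in zip(X, D):
        H = jacobian(w, x)                 # n_out x n_w
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain (n_w x n_out)
        w = w + K @ (d - forward(w, x))    # weight update
        P = P - K @ H @ P + Q              # covariance update

err = np.mean([(forward(w, x) - d) ** 2 for x, d in zip(X, D)])
print(f"final MSE: {err:.4f}")
```

The expensive steps per pattern are the products involving the n_w x n_w covariance P, which is exactly the kind of dense linear algebra (matrix-matrix products, matrix inversion or Cholesky-based solves) that maps well onto GPU BLAS routines for larger networks.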


Keywords: Graphics Processing Unit · Hidden Neuron · Extended Kalman Filter · Recurrent Neural Network · Cholesky Factorization





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Peter Trebatický, Faculty of Informatics and Information Technologies, Slovak University of Technology in Bratislava, Bratislava, Slovakia
  • Jiří Pospíchal, Faculty of Informatics and Information Technologies, Slovak University of Technology in Bratislava, Bratislava, Slovakia