Multi-dimensional Recurrent Neural Networks

  • Alex Graves
  • Santiago Fernández
  • Jürgen Schmidhuber
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4668)

Abstract

Recurrent neural networks (RNNs) have proved effective at one-dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping and the ability to access contextual information, are also desirable in multi-dimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks, thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.
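The core idea behind a multi-dimensional RNN is that the hidden activation at each point of an n-dimensional grid receives recurrent connections from the activations one step back along every dimension, so a 2-D scan over an image replaces the single temporal recurrence. The sketch below shows this 2-D forward pass in plain numpy; the weight names `W`, `U1`, `U2` and the use of a `tanh` cell are illustrative assumptions (the paper itself uses LSTM cells and multiple scan directions), not the authors' implementation.

```python
import numpy as np

def mdrnn_forward(x, W, U1, U2, b):
    """2-D RNN forward pass: the hidden vector at grid point (i, j)
    depends on the input there plus the hidden vectors at (i-1, j)
    and (i, j-1), scanning top-left to bottom-right."""
    height, width, _ = x.shape
    n_hidden = b.shape[0]
    h = np.zeros((height, width, n_hidden))
    for i in range(height):
        for j in range(width):
            a = W @ x[i, j] + b          # input contribution
            if i > 0:
                a += U1 @ h[i - 1, j]    # recurrence along dimension 1
            if j > 0:
                a += U2 @ h[i, j - 1]    # recurrence along dimension 2
            h[i, j] = np.tanh(a)
    return h

# toy example: a 4x4 "image" with 3 channels, 5 hidden units
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 3))
W = 0.1 * rng.standard_normal((5, 3))
U1 = 0.1 * rng.standard_normal((5, 5))
U2 = 0.1 * rng.standard_normal((5, 5))
b = np.zeros(5)
h = mdrnn_forward(x, W, U1, U2, b)
print(h.shape)  # (4, 4, 5)
```

One forward pass of this kind sees only the context above and to the left of each point; the paper combines one pass per corner of the grid (four in 2-D, generalising bidirectional RNNs) so that every output can use context from the whole input.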

Keywords

Hidden Layer · Recurrent Neural Network · Handwritten Digit · Forward Pass · Recurrent Connection

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Alex Graves (1)
  • Santiago Fernández (1)
  • Jürgen Schmidhuber (1, 2)

  1. IDSIA, Galleria 2, 6928 Manno-Lugano, Switzerland
  2. TU Munich, Boltzmannstr. 3, 85748 Garching, Munich, Germany