
Implementing Neural Networks Efficiently

  • Ronan Collobert
  • Koray Kavukcuoglu
  • Clément Farabet
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7700)

Abstract

Neural networks, and machine learning algorithms in general, require a flexible environment where new algorithm prototypes and experiments can be set up as quickly as possible, with the best possible computational performance. To that end, we provide a new framework called Torch7 that is especially suited to achieving both of these competing goals. Torch7 is a versatile numeric computing framework and machine learning library that extends Lua, a very lightweight and powerful programming language. Its goal is to provide a flexible environment in which to design, train, and deploy learning machines. Flexibility is obtained via Lua, an extremely lightweight scripting language; high performance is obtained via efficient OpenMP/SSE and CUDA implementations of low-level numeric routines. Torch7 can also easily be interfaced with third-party software thanks to Lua's light C interface.
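As an illustration of the workflow the abstract describes, the following is a minimal sketch (not taken from the chapter) of defining and training a small multi-layer perceptron with Torch7's nn package; the layer sizes, random data, and learning rate are illustrative choices, not the authors' settings.

    require 'nn'

    -- A small multi-layer perceptron: 10 inputs -> 25 hidden units -> 1 output.
    local mlp = nn.Sequential()
    mlp:add(nn.Linear(10, 25))
    mlp:add(nn.Tanh())
    mlp:add(nn.Linear(25, 1))

    local criterion = nn.MSECriterion()

    -- One stochastic gradient step on a single (random) example.
    local x = torch.randn(10)   -- input vector
    local y = torch.randn(1)    -- target

    local output = mlp:forward(x)
    local loss = criterion:forward(output, y)

    mlp:zeroGradParameters()
    mlp:backward(x, criterion:backward(output, y))
    mlp:updateParameters(0.01)  -- SGD step with learning rate 0.01

    print(loss)

Moving such a model to the GPU typically amounts to requiring the cunn package and calling :cuda() on the model and its tensors, which dispatches the same forward/backward calls to the CUDA implementations of the low-level routines mentioned above.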

Keywords

Neural Network · Convolutional Neural Network · Stochastic Gradient Descent · Neural Network Toolbox · Convolutional Layer
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Ronan Collobert (1)
  • Koray Kavukcuoglu (2)
  • Clément Farabet (3, 4)
  1. Idiap Research Institute, Martigny, Switzerland
  2. NEC Laboratories America, Princeton, USA
  3. Courant Institute of Mathematical Sciences, New York University, New York, USA
  4. Université Paris-Est, Équipe A3SI, ESIEE Paris, France
