Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review

  • Tomaso Poggio
  • Hrushikesh Mhaskar
  • Lorenzo Rosasco
  • Brando Miranda
  • Qianli Liao
Open Access
Review | Special Issue on Human Inspired Computing

DOI: 10.1007/s11633-017-1054-2

Cite this article as:
Poggio, T., Mhaskar, H., Rosasco, L. et al. Int. J. Autom. Comput. (2017). doi:10.1007/s11633-017-1054-2

Abstract

The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.

Keywords

Machine learning, neural networks, deep and shallow networks, convolutional neural networks, function approximation, deep learning

Copyright information

© The Author(s) 2017

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Tomaso Poggio (1)
  • Hrushikesh Mhaskar (2, 3)
  • Lorenzo Rosasco (1)
  • Brando Miranda (1)
  • Qianli Liao (1)

  1. Center for Brains, Minds, and Machines, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, USA
  2. Department of Mathematics, California Institute of Technology, Pasadena, USA
  3. Institute of Mathematical Sciences, Claremont Graduate University, Claremont, USA