Let’s Open the Black Box of Deep Learning!

  • Conference paper
Business Intelligence and Big Data (eBISS 2017)

Part of the book series: Lecture Notes in Business Information Processing (LNBIP, volume 324)

Abstract

Deep learning is one of the fastest-growing areas of machine learning and a hot topic in both academia and industry. This tutorial examines the real mechanisms that make the technique a breakthrough with respect to earlier approaches. To this end, we review what a neural network is, how its parameters can be learned from observational data, some of the most common architectures (CNN, LSTM, etc.), and some of the tricks developed in recent years.
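
The recipe behind these topics can be compressed into a few lines. What follows is a minimal sketch (invented toy data, layer sizes, and learning rate; this is not code from the paper): a tiny neural network whose parameters are learned from observed data by backpropagation and gradient descent.

    # A two-layer network fitted to the XOR toy problem by gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # observed inputs
    y = np.array([[0.], [1.], [1.], [0.]])                  # observed targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden-layer parameters
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output-layer parameters

    for step in range(5000):
        h = np.tanh(X @ W1 + b1)                   # forward pass, hidden layer
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # forward pass, sigmoid output
        g_out = (p - y) / len(X)                   # gradient of cross-entropy loss
        g_h = (g_out @ W2.T) * (1.0 - h ** 2)      # backpropagate through tanh
        W2 -= 0.5 * h.T @ g_out                    # gradient-descent updates
        b2 -= 0.5 * g_out.sum(axis=0)
        W1 -= 0.5 * X.T @ g_h
        b1 -= 0.5 * g_h.sum(axis=0)

    print(p.round(2))  # predictions approach [0, 1, 1, 0]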

Notes

  1. The number of weights we must learn for an \(M \times N\) convolution kernel is only \(M \times N\), which is independent of the size of the image (see the convolution sketch after these notes).

  2. The computation of useful vectors for words is beyond the scope of this tutorial, but the most common method is word embedding, an unsupervised method based on shallow neural networks (see the embedding sketch after these notes).
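
As a hedged illustration of note 1 (assuming PyTorch, which this page does not prescribe; channel counts and image sizes are invented), the parameter count of a convolutional layer depends only on the kernel size, never on the image it slides over:

    # Note 1, sketched: a 3x3 kernel has 9 weights whether it is applied
    # to a 28x28 image or a 512x512 one (sizes invented for illustration).
    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, bias=False)
    print(conv.weight.shape)   # torch.Size([1, 1, 3, 3]) -> 3 x 3 = 9 weights

    small = torch.randn(1, 1, 28, 28)    # batch of one 28x28 image
    large = torch.randn(1, 1, 512, 512)  # batch of one 512x512 image
    conv(small)                          # the same 9 weights are reused
    conv(large)                          # at every spatial position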
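
And a minimal sketch of the word-embedding idea in note 2, again assuming PyTorch, with an invented toy vocabulary and (center word, context word) pairs: a shallow network learns one dense vector per word by predicting nearby words, in the spirit of skip-gram:

    # Note 2, sketched: word vectors emerge as the weights of a shallow
    # network trained to predict a context word from a center word.
    import torch
    import torch.nn as nn

    vocab_size, dim = 10, 8
    emb = nn.Embedding(vocab_size, dim)  # one trainable vector per word
    out = nn.Linear(dim, vocab_size)     # scores each word as the context
    opt = torch.optim.SGD(
        list(emb.parameters()) + list(out.parameters()), lr=0.1)

    pairs = torch.tensor([[0, 1], [1, 0], [2, 3], [3, 2]])  # (center, context)
    for _ in range(200):
        logits = out(emb(pairs[:, 0]))   # embed, then one linear layer
        loss = nn.functional.cross_entropy(logits, pairs[:, 1])
        opt.zero_grad()
        loss.backward()
        opt.step()

    vectors = emb.weight.detach()        # learned vectors, one row per word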

Acknowledgement

This work was partially supported by TIN2015-66951-C2 and SGR 1219 grants. I thank the anonymous reviewers for their careful reading of the manuscript and their many insightful comments and suggestions. I also want to acknowledge the support of NVIDIA Corporation with the donation of a Titan X Pascal GPU. Finally, I would like to express my sincere appreciation to the organizers of the Seventh European Business Intelligence & Big Data Summer School.

Author information

Corresponding author

Correspondence to Jordi Vitrià.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Vitrià, J. (2018). Let’s Open the Black Box of Deep Learning!. In: Zimányi, E. (ed.) Business Intelligence and Big Data. eBISS 2017. Lecture Notes in Business Information Processing, vol 324. Springer, Cham. https://doi.org/10.1007/978-3-319-96655-7_6

  • DOI: https://doi.org/10.1007/978-3-319-96655-7_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-96654-0

  • Online ISBN: 978-3-319-96655-7

  • eBook Packages: Computer Science, Computer Science (R0)
