
Introduction to Deep Learning

Advanced Deep Learning for Engineers and Scientists

Abstract

Deep learning has gained increasing attention in automatic speech recognition, computer vision, natural language processing, drug discovery and toxicology, audio recognition, bioinformatics, and autonomous driving, owing to its strengths in feature extraction and data classification. It is an evolving research domain with diverse applications that increase the overall cost benefits of maintenance and refurbishment activities. Deep learning is a ubiquitous technology in which machine learning algorithms model high-level abstractions of data through multiple processing layers composed of complex structures. Software tools in this area derive finer representations from massive volumes of unlabeled data and identify patterns in digital representations such as images, data, and sound. According to Gartner's hype cycle, deep learning has been at a "permanent peak" since 2015, and an HFS research survey reports that 86% of respondents believe the technology will have a major business impact on the industry sector.

It is a rapidly growing domain that combines a set of powerful techniques with a huge amount of computational power, allowing machines to identify objects and translate recognized speech in real time. The key aspects of deep learning are (i) models comprising several stages or layers of nonlinear information processing and (ii) methods for supervised or unsupervised learning of feature representations at successively higher levels of abstraction. The main reasons for its popularity are the increased size of training data sets, advances in chip processing capabilities, and recent progress in signal processing and machine learning research. Deep learning techniques effectively exploit intricate nonlinear functions to acquire hierarchical and distributed feature representations from both labeled and unlabeled data. The chapter discusses different deep learning methods and architectures, including the convolutional deep neural network (CDNN), deep neural network (DNN), recurrent neural network (RNN), deep belief network (DBN), artificial neural network (ANN), and long short-term memory (LSTM) network.
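The first key aspect above, a model built from several layers of nonlinear information processing, can be sketched as a minimal feed-forward pass in NumPy. The layer sizes, random weights, and the ReLU nonlinearity here are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def relu(x):
    # Nonlinear activation applied after each layer of processing
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input through a stack of (weight, bias) layers,
    applying a nonlinearity between successive stages."""
    h = x
    for w, b in layers:
        h = relu(h @ w + b)
    return h

rng = np.random.default_rng(0)
# Three stages of nonlinear processing: 4 -> 8 -> 8 -> 2
sizes = [4, 8, 8, 2]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal((5, 4))   # batch of 5 input vectors
out = forward(x, layers)
print(out.shape)                  # (5, 2)
```

Each stage transforms the representation produced by the previous one, which is the "hierarchical feature representation" idea in miniature; training such a stack (adjusting the weights from labeled or unlabeled data) is what the later architectures in this chapter add on top of this forward pass.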




Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Indrakumari, R., Poongodi, T., Singh, K. (2021). Introduction to Deep Learning. In: Prakash, K.B., Kannan, R., Alexander, S., Kanagachidambaresan, G.R. (eds) Advanced Deep Learning for Engineers and Scientists. EAI/Springer Innovations in Communication and Computing. Springer, Cham. https://doi.org/10.1007/978-3-030-66519-7_1

  • DOI: https://doi.org/10.1007/978-3-030-66519-7_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-66518-0

  • Online ISBN: 978-3-030-66519-7

  • eBook Packages: Engineering (R0)
