Training Supervised Deep Learning Networks

  • M. Arif Wani
  • Farooq Ahmad Bhat
  • Saduf Afzal
  • Asif Iqbal Khan
Chapter
Part of the Studies in Big Data book series (SBD, volume 57)

Abstract

Training a supervised deep learning network involves obtaining model parameters from a labeled dataset so that the network can map input data to class labels. The labeled dataset consists of training examples, where each example is a pair of an input and its desired class label. The learned parameters should allow the network to correctly determine the class labels of unseen instances, which requires the model to generalize from the training dataset to unseen data.
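The setup described in the abstract can be illustrated with a minimal sketch: a toy labeled dataset of (input, class label) pairs, parameters fitted by gradient descent, and prediction on unseen inputs. This example uses a single-layer logistic-regression model rather than a deep network, and all names, data, and hyperparameters are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Toy labeled dataset: each training example is a pair of an input
# vector and its desired class label (two Gaussian clusters).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),   # class-0 inputs
               rng.normal(+2.0, 1.0, (50, 2))])  # class-1 inputs
y = np.array([0] * 50 + [1] * 50)                # desired class labels

w = np.zeros(2)   # model parameters obtained from the labeled data
b = 0.0
lr = 0.1          # learning rate (illustrative choice)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Supervised training loop: adjust the parameters so the model maps
# each input to its labeled class (gradient descent on cross-entropy).
for _ in range(200):
    p = sigmoid(X @ w + b)            # predicted class-1 probability
    grad_w = X.T @ (p - y) / len(y)   # gradient w.r.t. weights
    grad_b = np.mean(p - y)           # gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

def predict(x):
    """Map an input to a class label using the learned parameters."""
    return (sigmoid(x @ w + b) >= 0.5).astype(int)

train_acc = np.mean(predict(X) == y)
```

Generalization, in this sketch, means that `predict` also returns the correct label for points drawn from the same clusters but not present in the training set.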

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • M. Arif Wani, Department of Computer Sciences, University of Kashmir, Srinagar, India
  • Farooq Ahmad Bhat, Education Department, Government of Jammu and Kashmir, Kashmir, India
  • Saduf Afzal, Islamic University of Science and Technology, Kashmir, India
  • Asif Iqbal Khan, Department of Computer Sciences, University of Kashmir, Srinagar, India