Unsupervised Deep Learning Architectures

  • M. Arif Wani
  • Farooq Ahmad Bhat
  • Saduf Afzal
  • Asif Iqbal Khan
Chapter
Part of the Studies in Big Data book series (SBD, volume 57)

Abstract

The cascade of multiple layers of a deep learning architecture can be learnt in an unsupervised manner for tasks such as pattern analysis. A deep learning architecture can be trained one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine. Unsupervised deep learning algorithms are important because unlabeled data is far more abundant than labeled data. For applications with large volumes of unlabeled data, a two-step procedure is used: in the first step, a deep neural network is pretrained in an unsupervised manner; in the second step, a small portion of the unlabeled data is manually labeled and then used for supervised fine-tuning of the deep neural network.
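As a rough illustration of this two-step procedure, the sketch below pretrains a small stack of restricted Boltzmann machines with one-step contrastive divergence (CD-1) on unlabeled data and then trains a logistic output layer on a small labeled subset. The layer sizes, learning rates, and synthetic data are illustrative assumptions, not values from the chapter; a full fine-tuning step would also backpropagate through the pretrained layers rather than training only the output layer.

```python
# Minimal sketch (not from the chapter): greedy layer-wise pretraining of a
# stack of RBMs with CD-1, followed by supervised training of a logistic
# output layer on a small labeled subset. All sizes and rates are assumptions.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=10):
    """Train one RBM with 1-step contrastive divergence; return (W, b_hid)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_vis = np.zeros(n_visible)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: hidden activations given the data.
        h_prob = sigmoid(data @ W + b_hid)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one reconstruction step (CD-1).
        v_recon = sigmoid(h_sample @ W.T + b_vis)
        h_recon = sigmoid(v_recon @ W + b_hid)
        # Contrastive-divergence parameter updates.
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
        b_vis += lr * (data - v_recon).mean(axis=0)
        b_hid += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_hid

# Step 1: unsupervised pretraining on abundant unlabeled data, one layer at a time.
unlabeled = (rng.random((500, 64)) < 0.3).astype(float)   # toy binary inputs
layers, activations = [], unlabeled
for n_hidden in (32, 16):                                  # assumed layer sizes
    W, b = train_rbm(activations, n_hidden)
    layers.append((W, b))
    activations = sigmoid(activations @ W + b)             # input to the next RBM

# Step 2: supervised fine-tuning on a small manually labeled subset
# (here only the output layer is trained, for brevity).
labeled_x = unlabeled[:50]
labeled_y = (labeled_x.sum(axis=1) > 19).astype(float)     # toy labels

def forward(x):
    for W, b in layers:
        x = sigmoid(x @ W + b)
    return x

W_out = 0.01 * rng.standard_normal(16)
b_out = 0.0
for _ in range(200):                                        # logistic regression on top
    feats = forward(labeled_x)
    pred = sigmoid(feats @ W_out + b_out)
    grad = pred - labeled_y                                  # cross-entropy gradient
    W_out -= 0.5 * feats.T @ grad / len(feats)
    b_out -= 0.5 * grad.mean()

acc = ((sigmoid(forward(labeled_x) @ W_out + b_out) > 0.5) == labeled_y).mean()
print("training accuracy:", acc)
```

The pretrained RBM weights give the network a useful initialization learned from unlabeled data, so the supervised step needs only a small labeled set to reach a reasonable solution.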

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • M. Arif Wani (1)
  • Farooq Ahmad Bhat (2)
  • Saduf Afzal (3)
  • Asif Iqbal Khan (4)

  1. Department of Computer Sciences, University of Kashmir, Srinagar, India
  2. Education Department, Government of Jammu and Kashmir, Kashmir, India
  3. Islamic University of Science and Technology, Kashmir, India
  4. Department of Computer Sciences, University of Kashmir, Srinagar, India
