Frontiers of Computer Science, Volume 12, Issue 6, pp 1140–1148

Convolutional adaptive denoising autoencoders for hierarchical feature extraction

  • Qianjun Zhang
  • Lei Zhang

Research Article

Abstract

Convolutional neural networks (CNNs) are typical structures for deep learning and are widely used in image recognition and classification. However, training with randomly initialized filters tends to become stuck at local plateaus or even to diverge, which results in unstable and ineffective solutions in real applications. To address this limitation, we propose a hybrid deep learning model, CNN-AdapDAE, which uses the features learned by the AdapDAE algorithm to initialize the CNN filters and then trains the improved CNN for classification tasks. In this model, AdapDAE serves as a CNN pre-training procedure that adaptively obtains the noise level based on the principle of annealing: training starts with a high noise level and lowers it as training progresses. The features learned by AdapDAE thus combine features at different levels of granularity. Extensive experimental results on the STL-10, CIFAR-10, and MNIST datasets demonstrate that the proposed algorithm performs favorably compared with CNN (random filters), CNNAE (filters pre-trained by an autoencoder), and several other unsupervised feature learning methods.
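
To make the annealing idea concrete, the sketch below trains a single-layer denoising autoencoder whose masking-noise level decays linearly over the epochs, so early epochs capture coarse global structure and later epochs refine local detail. It is a minimal NumPy illustration only: the class, the linear schedule, and all hyperparameters are assumptions for exposition, not the paper's exact AdapDAE procedure, which sets the noise level adaptively rather than on a fixed schedule.

    # Minimal sketch of annealed-noise DAE pre-training (illustrative only;
    # the linear schedule and hyperparameters are assumptions, not AdapDAE).
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class DenoisingAutoencoder:
        """Single-layer DAE with tied weights, trained by plain SGD."""

        def __init__(self, n_visible, n_hidden):
            self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
            self.b_h = np.zeros(n_hidden)   # hidden (encoder) bias
            self.b_v = np.zeros(n_visible)  # visible (decoder) bias

        def corrupt(self, x, noise_level):
            # Masking noise: zero each input unit with probability noise_level.
            return x * (rng.random(x.shape) >= noise_level)

        def step(self, x, noise_level, lr=0.1):
            # Encode the corrupted input, decode, then take one SGD step on
            # the squared error against the *clean* input.
            x_tilde = self.corrupt(x, noise_level)
            h = sigmoid(x_tilde @ self.W + self.b_h)
            x_hat = sigmoid(h @ self.W.T + self.b_v)
            d_v = (x_hat - x) * x_hat * (1.0 - x_hat)   # decoder pre-activation grad
            d_h = (d_v @ self.W) * h * (1.0 - h)        # encoder pre-activation grad
            self.W -= lr * (np.outer(d_v, h) + np.outer(x_tilde, d_h))
            self.b_v -= lr * d_v
            self.b_h -= lr * d_h
            return float(np.mean((x_hat - x) ** 2))

    # Annealed schedule: heavy corruption early (coarse features), light
    # corruption late (fine detail), so learned features mix granularities.
    data = rng.random((200, 64))            # stand-in for whitened image patches
    dae = DenoisingAutoencoder(64, 32)
    epochs, noise_start, noise_end = 20, 0.7, 0.1
    for epoch in range(epochs):
        noise = noise_start + (noise_end - noise_start) * epoch / (epochs - 1)
        err = np.mean([dae.step(x, noise) for x in data])
        print(f"epoch {epoch:2d}  noise {noise:.2f}  recon error {err:.4f}")

In the paper's pipeline, features learned this way (here, the columns of W) would then initialize the CNN's convolutional filters before supervised training on the classification task.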

Keywords

convolutional neural networks, annealing, denoising autoencoder, adaptive noise level, pre-training

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61322203 and 61332002).

Supplementary material

11704_2016_6107_MOESM1_ESM.ppt (approximately 330 KB)

Copyright information

© Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
