Unsupervised Dimension Reduction for Image Classification Using Regularized Convolutional Auto-Encoder

  • Chaoyang Xu
  • Ling Wu
  • Shiping Wang
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 943)

Abstract

Unsupervised dimension reduction has gained widespread attention. Most previous methods perform poorly on image classification because they take no account of neighborhood relations and spatial locality. In this paper, we propose the ‘regularized convolutional auto-encoder’, a variant of the auto-encoder that uses convolutional operations to extract low-dimensional representations. Each auto-encoder is trained with a cluster regularization term. The contributions of this work are as follows: First, we apply convolutions with filters of different sizes in parallel, extracting a low-dimensional representation from images across scales simultaneously. Second, we introduce a cluster regularization rule on auto-encoders to reduce the classification error. Extensive experiments conducted on six publicly available datasets demonstrate that the proposed method significantly reduces the classification error after dimension reduction.
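The two ideas above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the kernel sizes, the mean-pooling of each response map, and the exact form of the cluster regularizer (here, squared distance of each latent code to its nearest cluster center, a common k-means-style penalty) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (no padding, stride 1)."""
    H, W = image.shape
    k, _ = kernel.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

def multi_scale_encode(image, kernel_sizes=(3, 5, 7), dim=4):
    """Convolve with differently sized random filters in parallel and
    mean-pool each response map to a scalar, giving one low-dimensional
    code that mixes information across scales."""
    feats = []
    for k in kernel_sizes:
        for _ in range(dim):
            kernel = rng.standard_normal((k, k)) / k
            feats.append(conv2d_valid(image, kernel).mean())
    return np.array(feats)

def cluster_regularized_loss(codes, reconstructions, images, centers, lam=0.1):
    """Reconstruction error plus a cluster regularization term that pulls
    each latent code toward its nearest cluster center."""
    recon = np.mean((reconstructions - images) ** 2)
    # Squared distance of every code to every center, shape (n, K).
    d = ((codes[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    reg = d.min(axis=1).mean()  # distance to the assigned (nearest) center
    return recon + lam * reg
```

In a full model the filters and cluster centers would be learned jointly with the decoder by gradient descent; here they are fixed only to make the shape of the objective concrete.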

Keywords

Deep learning · Auto-encoder · Convolutional neural network · Dimension reduction · Unsupervised learning

Acknowledgments

This work is partly supported by the National Natural Science Foundation of China under Grant No. 61502104 and the Fujian Young and Middle-aged Teacher Education Research Project under Grant No. JT180478.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. School of Information Engineering, Putian University, Putian, China
  2. School of Economics and Management, Fuzhou University, Fuzhou, China
  3. College of Mathematics and Computer Science, Fuzhou University, Fuzhou, China
