
A restorable autoencoder as a method for dimensionality reduction

Original Paper · Journal of the Korean Physical Society

Abstract

In this paper, we propose a restorable autoencoder model as a non-linear method for dimensionality reduction. While non-linear methods can reduce the dimensionality of data more effectively than linear methods, they are, in general, unable to restore the original data from the dimensionality-reduced result, because they establish a non-linear relationship between the original data and the reduced representation. Combining the advantages of linear and non-linear methods, the proposed model not only reduces dimensionality effectively but also provides an observation-wise linear relationship through which the original data can be restored from the dimensionality-reduced result. We assessed the effectiveness of the proposed model against the linear method of principal component analysis and non-linear methods based on typical autoencoders, using the MNIST and Fashion-MNIST data sets. We demonstrated that the proposed model was better than or comparable to the compared methods in terms of the loss function and the reconstruction of input images. We also showed that the lower-dimensional projection obtained by the proposed model produced classification results that were better than or comparable to those of the compared methods.
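The full text is behind the paywall here, but the abstract's key claim — that restoration from the reduced representation is linear for each observation even though the reduction itself is non-linear — can be illustrated. The sketch below is a minimal reading of that property, not the authors' published construction: it assumes a ReLU decoder, which is piecewise linear, so each observation induces its own effective linear map from the code back to the input space. All dimensions and weights are hypothetical stand-ins for a trained model.

```python
# Illustrative sketch (an assumption, not the paper's published architecture):
# with ReLU activations a decoder is piecewise linear, so for each observation
# the reconstruction satisfies x_hat = A @ z + c for an observation-specific
# matrix A and offset c. Dimensions and weights below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

d, k, h = 8, 2, 16   # input, code, and hidden dimensions (toy sizes; MNIST would use d = 784)
W1, b1 = rng.normal(size=(h, k)), rng.normal(size=h)   # decoder layer 1
W2, b2 = rng.normal(size=(d, h)), rng.normal(size=d)   # decoder layer 2

def decode(z):
    """Non-linear decoder: code z -> reconstruction x_hat."""
    a = np.maximum(W1 @ z + b1, 0.0)        # ReLU hidden layer
    return W2 @ a + b2

def observationwise_linear_map(z):
    """Freeze the ReLU pattern of one observation to obtain x_hat = A @ z + c."""
    mask = (W1 @ z + b1 > 0).astype(float)  # active hidden units for this z
    A = W2 @ (mask[:, None] * W1)           # effective linear map for this observation
    c = W2 @ (mask * b1) + b2               # effective offset
    return A, c

z = rng.normal(size=k)                      # a dimensionality-reduced observation
A, c = observationwise_linear_map(z)
assert np.allclose(decode(z), A @ z + c)    # restoration is exactly linear per observation
```

For comparison, principal component analysis supplies a single global linear map shared by all observations; the point of the abstract is that a non-linear reduction can still admit a per-observation linear restoration of this kind.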



Acknowledgements

This work was supported by a research grant from Kongju National University in 2020.

Author information

Corresponding author

Correspondence to Chang-Yong Lee.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 1805 KB)


About this article


Cite this article

Jeong, Y., Kim, S. & Lee, CY. A restorable autoencoder as a method for dimensionality reduction. J. Korean Phys. Soc. 78, 315–327 (2021). https://doi.org/10.1007/s40042-021-00074-6

