
Efficacy of Autoencoders on Image Dataset

  • Conference paper

Part of the book: Sentiment Analysis and Deep Learning

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1432)

Abstract

An autoencoder is an unsupervised learning technique, applied across many types of data, that learns to reproduce its input at its output. Its major uses lie in feature extraction, dimensionality reduction, image denoising, and compression. This paper concentrates on the implementation of undercomplete, sparse, and variational autoencoders on the Modified National Institute of Standards and Technology (MNIST) dataset and analyses their efficiency in terms of the loss and activation functions used. The effect of the number of training epochs on the model is also analysed. On this dataset, the sparse autoencoder outperforms the undercomplete autoencoder when trained with the Adam optimizer and mean squared error as the loss function. Compared with both, the variational autoencoder (VAE) performs better when trained with Adam.
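The undercomplete setup the abstract describes can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: synthetic 784-dimensional vectors stand in for flattened MNIST digits, and the layer sizes, activations, and hyperparameters are illustrative assumptions. It does show the combination the abstract names: a bottleneck encoder-decoder trained with mean squared error and the Adam optimizer.

```python
import numpy as np

# Undercomplete autoencoder sketch: 784 -> 32 -> 784, MSE loss, Adam optimizer.
# Synthetic data stands in for MNIST; real use would load the 28x28 images.
rng = np.random.default_rng(0)
X = rng.random((256, 784))  # stand-in for flattened, [0, 1]-scaled digits

d_in, d_hid = 784, 32
params = {
    "W1": rng.normal(0, 0.05, (d_in, d_hid)), "b1": np.zeros(d_hid),
    "W2": rng.normal(0, 0.05, (d_hid, d_in)), "b2": np.zeros(d_in),
}

def forward(X, p):
    h = np.maximum(0.0, X @ p["W1"] + p["b1"])             # ReLU encoder
    out = 1.0 / (1.0 + np.exp(-(h @ p["W2"] + p["b2"])))   # sigmoid decoder
    return h, out

def grads(X, p):
    h, out = forward(X, p)
    n = X.shape[0]
    d_out = 2.0 * (out - X) / (n * d_in)   # derivative of MSE w.r.t. output
    d_z2 = d_out * out * (1.0 - out)       # through the sigmoid
    g = {"W2": h.T @ d_z2, "b2": d_z2.sum(0)}
    d_z1 = (d_z2 @ p["W2"].T) * (h > 0)    # through the ReLU
    g["W1"], g["b1"] = X.T @ d_z1, d_z1.sum(0)
    return g, np.mean((out - X) ** 2)

# Adam optimizer state and hyperparameters (standard defaults)
m = {k: np.zeros_like(p) for k, p in params.items()}
v = {k: np.zeros_like(p) for k, p in params.items()}
lr, beta1, beta2, eps = 1e-3, 0.9, 0.999, 1e-8

losses = []
for t in range(1, 201):
    g, loss = grads(X, params)
    losses.append(loss)
    for k in params:
        m[k] = beta1 * m[k] + (1 - beta1) * g[k]
        v[k] = beta2 * v[k] + (1 - beta2) * g[k] ** 2
        mhat = m[k] / (1 - beta1 ** t)     # bias-corrected first moment
        vhat = v[k] / (1 - beta2 ** t)     # bias-corrected second moment
        params[k] -= lr * mhat / (np.sqrt(vhat) + eps)

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The 32-unit bottleneck is what makes the autoencoder undercomplete: the network cannot copy its input directly and must learn a compressed representation. A sparse variant would instead keep a wide hidden layer and add a sparsity penalty on the activations to the MSE loss.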



Author information

Correspondence to S. Anupama Kumar.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Cite this paper

Anupama Kumar, S., Dharani, A., Chakravorty, C. (2023). Efficacy of Autoencoders on Image Dataset. In: Shakya, S., Du, KL., Ntalianis, K. (eds) Sentiment Analysis and Deep Learning. Advances in Intelligent Systems and Computing, vol 1432. Springer, Singapore. https://doi.org/10.1007/978-981-19-5443-6_73
