Abstract
An autoencoder is an unsupervised learning technique, applied across many types of data, that learns to reproduce its input at its output. Autoencoders are used mainly for feature extraction, dimensionality reduction, image denoising, and compression. This paper implements undercomplete, sparse, and variational autoencoders on the Modified National Institute of Standards and Technology (MNIST) dataset and analyses their efficiency in terms of the loss and activation functions used. The effect of the number of epochs on model quality is also analysed. On this dataset, the sparse autoencoder outperforms the undercomplete autoencoder when trained with the Adam optimizer and mean squared error as the loss function. Compared with both, the variational autoencoder (VAE) performs better when trained with Adam.
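To make the undercomplete setup concrete, the following is a minimal sketch of an undercomplete autoencoder trained with mean squared error, using only NumPy. The hidden (code) layer is smaller than the input, which forces a compressed representation. All hyperparameters (code size, learning rate, number of steps) and the toy binary inputs standing in for MNIST pixels are illustrative assumptions, not values from the paper, and plain gradient descent is used here in place of Adam.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class UndercompleteAE:
    """Single-hidden-layer autoencoder; code dimension < input dimension."""

    def __init__(self, n_in, n_code):
        # Small random weights; biases start at zero.
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_code))
        self.b1 = np.zeros(n_code)
        self.W2 = rng.normal(0.0, 0.1, (n_code, n_in))
        self.b2 = np.zeros(n_in)

    def forward(self, x):
        h = sigmoid(x @ self.W1 + self.b1)       # encoder: compress
        x_hat = sigmoid(h @ self.W2 + self.b2)   # decoder: reconstruct
        return h, x_hat

    def train_step(self, x, lr=0.5):
        h, x_hat = self.forward(x)
        n = x.shape[0]
        # MSE loss; backpropagate through the sigmoid output and hidden layers.
        err = (x_hat - x) * x_hat * (1.0 - x_hat)
        dW2 = h.T @ err / n
        db2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * h * (1.0 - h)
        dW1 = x.T @ dh / n
        db1 = dh.mean(axis=0)
        self.W2 -= lr * dW2; self.b2 -= lr * db2
        self.W1 -= lr * dW1; self.b1 -= lr * db1
        return float(np.mean((x_hat - x) ** 2))

# Toy binary "images" (values in [0, 1]) stand in for MNIST pixels.
x = rng.integers(0, 2, (64, 16)).astype(float)
ae = UndercompleteAE(n_in=16, n_code=4)
losses = [ae.train_step(x) for _ in range(500)]
print(losses[0], losses[-1])  # reconstruction error should fall over training
```

The same skeleton extends to the other two variants studied in the paper: a sparse autoencoder adds a sparsity penalty on the hidden activations to the loss, and a VAE replaces the deterministic code with a learned mean and variance plus a KL-divergence term.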
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Anupama Kumar, S., Dharani, A., Chakravorty, C. (2023). Efficacy of Autoencoders on Image Dataset. In: Shakya, S., Du, KL., Ntalianis, K. (eds) Sentiment Analysis and Deep Learning. Advances in Intelligent Systems and Computing, vol 1432. Springer, Singapore. https://doi.org/10.1007/978-981-19-5443-6_73
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-5442-9
Online ISBN: 978-981-19-5443-6
eBook Packages: Intelligent Technologies and Robotics (R0)