Abstract
Deep learning has attracted much attention for its ability to extract complex features automatically. Unsupervised pre-training plays an important role in deep learning, but the supervisory information provided by labeled samples remains essential for feature extraction. For regression forecasting problems with only a small amount of data, the advantage of unsupervised pre-training is not obvious. In this paper, the pre-training phase of the stacked denoising autoencoder is changed from unsupervised to supervised learning, which improves accuracy on small-sample prediction problems. Experiments on UCI regression datasets show that the improved stacked denoising autoencoder outperforms the traditional stacked denoising autoencoder.
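The idea in the abstract can be sketched as follows: in place of a purely unsupervised reconstruction objective, each denoising-autoencoder layer is pre-trained with a combined loss that also fits the hidden code to the regression target. This is a minimal numpy sketch, not the authors' implementation; the loss-mixing weight `alpha`, the masking-noise level, and all layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_supervised_da_layer(X, y, n_hidden, noise=0.2, lr=0.1,
                              alpha=0.5, epochs=200):
    """Pre-train one denoising-autoencoder layer with a supervised signal.

    The hidden code H both reconstructs the clean input (denoising loss)
    and predicts the regression target y (supervised loss); `alpha` is a
    hypothetical knob mixing the two squared-error losses.
    """
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))        # encoder weights
    b = np.zeros(n_hidden)
    W_dec = rng.normal(0.0, 0.1, (n_hidden, d))    # decoder weights
    b_dec = np.zeros(d)
    w_out = rng.normal(0.0, 0.1, (n_hidden, 1))    # supervised head
    b_out = 0.0
    y = y.reshape(-1, 1)

    for _ in range(epochs):
        # Denoising: mask a random fraction of inputs to zero.
        X_noisy = X * (rng.random(X.shape) > noise)
        H = sigmoid(X_noisy @ W + b)               # hidden code
        X_rec = sigmoid(H @ W_dec + b_dec)         # reconstruction
        y_hat = H @ w_out + b_out                  # target prediction

        # Gradients of the two squared-error losses w.r.t. pre-activations.
        d_rec = (X_rec - X) * X_rec * (1.0 - X_rec) / n
        d_sup = (y_hat - y) / n

        # Backpropagate the mixed loss into the encoder.
        dH = alpha * (d_rec @ W_dec.T) + (1.0 - alpha) * (d_sup @ w_out.T)
        dZ = dH * H * (1.0 - H)

        W -= lr * (X_noisy.T @ dZ)
        b -= lr * dZ.sum(axis=0)
        W_dec -= lr * alpha * (H.T @ d_rec)
        b_dec -= lr * alpha * d_rec.sum(axis=0)
        w_out -= lr * (1.0 - alpha) * (H.T @ d_sup)
        b_out -= lr * (1.0 - alpha) * d_sup.sum()

    # Keep only the encoder; to stack layers, feed sigmoid(X @ W + b)
    # into the next layer's pre-training and repeat.
    return W, b
```

After pre-training each layer this way, the encoders would be stacked and the whole network fine-tuned on the labeled data, as in a conventional stacked denoising autoencoder.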
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this paper
Wang, X., Mu, S., Shi, A., Lin, Z. (2019). A Stacked Denoising Autoencoder Based on Supervised Pre-training. In: Panigrahi, B., Trivedi, M., Mishra, K., Tiwari, S., Singh, P. (eds) Smart Innovations in Communication and Computational Sciences. Advances in Intelligent Systems and Computing, vol 670. Springer, Singapore. https://doi.org/10.1007/978-981-10-8971-8_14
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-8970-1
Online ISBN: 978-981-10-8971-8