
A Stacked Denoising Autoencoder Based on Supervised Pre-training

  • Conference paper
  • First Online:
Smart Innovations in Communication and Computational Sciences

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 670))


Abstract

Deep learning has attracted much attention because of its ability to extract complex features automatically. Unsupervised pre-training plays an important role in deep learning, but the supervisory information provided by labeled samples remains valuable for feature extraction. When a regression forecasting problem with only a small amount of data is processed, the advantage of unsupervised learning is not obvious. In this paper, the pre-training phase of the stacked denoising autoencoder is changed from unsupervised learning to supervised learning, which can improve accuracy on small-sample prediction problems. Experiments on UCI regression datasets show that the improved stacked denoising autoencoder outperforms the traditional stacked denoising autoencoder.
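The idea described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact method: each denoising-autoencoder layer is pre-trained by minimizing a reconstruction loss on masking-corrupted inputs plus a supervised term from a linear regression head on the hidden code (a traditional stacked denoising autoencoder would use the reconstruction term alone). The hyper-parameters (`noise`, `sup_weight`, learning rate, layer sizes) are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_dae_layer(X, y, hidden, noise=0.3, sup_weight=0.5,
                       lr=0.1, epochs=200):
    """Pre-train one denoising-autoencoder layer with tied weights.

    Loss = 0.5*||X_hat - X||^2 / n  +  sup_weight * 0.5*||y_hat - y||^2 / n
    The second (supervised) term is the departure from the usual
    unsupervised pre-training; sup_weight balances the two losses.
    """
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    b = np.zeros(hidden)          # encoder bias
    b_rec = np.zeros(d)           # decoder bias
    v = rng.normal(0.0, 0.1, hidden)  # supervised regression head
    c = 0.0
    for _ in range(epochs):
        # Masking noise: randomly zero a fraction of the inputs.
        X_noisy = X * (rng.random(X.shape) > noise)
        H = sigmoid(X_noisy @ W + b)
        X_hat = H @ W.T + b_rec   # reconstruction (tied weights)
        y_hat = H @ v + c         # supervised prediction
        d_rec = (X_hat - X) / n
        d_sup = sup_weight * (y_hat - y)[:, None] * v[None, :] / n
        dH = (d_rec @ W + d_sup) * H * (1.0 - H)
        # Tied weights: gradient has an encoder and a decoder path.
        W -= lr * (X_noisy.T @ dH + d_rec.T @ H)
        b -= lr * dH.sum(axis=0)
        b_rec -= lr * d_rec.sum(axis=0)
        v -= lr * (H.T @ (sup_weight * (y_hat - y))) / n
        c -= lr * sup_weight * (y_hat - y).mean()
    return W, b

def pretrain_stack(X, y, sizes):
    """Greedy layer-wise pre-training: each layer's hidden code
    becomes the input of the next layer, as in a stacked DAE."""
    params, H = [], X
    for h in sizes:
        W, b = pretrain_dae_layer(H, y, h)
        params.append((W, b))
        H = sigmoid(H @ W + b)
    return params, H
```

After pre-training, the stacked encoder weights would initialize a network that is fine-tuned end-to-end on the regression target, as in the standard stacked-autoencoder pipeline.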



Author information

Correspondence to Shaomin Mu.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, X., Mu, S., Shi, A., Lin, Z. (2019). A Stacked Denoising Autoencoder Based on Supervised Pre-training. In: Panigrahi, B., Trivedi, M., Mishra, K., Tiwari, S., Singh, P. (eds) Smart Innovations in Communication and Computational Sciences. Advances in Intelligent Systems and Computing, vol 670. Springer, Singapore. https://doi.org/10.1007/978-981-10-8971-8_14
