Recurrent Neural Networks

  • Chapter in Deep Learning with Azure

Abstract

The previous chapter showed how a deep learning model—specifically CNNs—can be applied to images. The process can be decoupled into a feature extractor that learns an optimal hidden-state representation of the input (in this case a vector of feature maps) and a classifier (typically a fully connected layer). This chapter focuses on the hidden-state representation of other forms of data and explores RNNs. RNNs are especially useful for analyzing sequences, which makes them particularly helpful for natural language processing and time series analysis.
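The recurrence the chapter introduces can be sketched as a vanilla RNN cell: at each step, a new hidden state is computed from the current input and the previous hidden state, and an output is read off the hidden state. The following NumPy sketch is illustrative only — the weight names and dimensions are hypothetical and not taken from the book:

```python
import numpy as np

def rnn_forward(xs, Wxh, Whh, Why, bh, by):
    """Run a vanilla RNN over a sequence of input vectors.

    h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + bh)   # hidden-state update
    y_t = Why @ h_t + by                         # per-step output
    """
    h = np.zeros(Whh.shape[0])   # initial hidden state h_0 = 0
    ys = []
    for x in xs:
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        ys.append(Why @ h + by)
    return np.stack(ys), h

# Toy dimensions (hypothetical): 3-dim inputs, 4-dim hidden state, 2-dim outputs.
rng = np.random.default_rng(0)
Wxh = rng.standard_normal((4, 3)) * 0.1
Whh = rng.standard_normal((4, 4)) * 0.1
Why = rng.standard_normal((2, 4)) * 0.1
bh, by = np.zeros(4), np.zeros(2)

xs = rng.standard_normal((5, 3))  # a sequence of 5 input vectors
ys, h_final = rnn_forward(xs, Wxh, Whh, Why, bh, by)
print(ys.shape, h_final.shape)    # (5, 2) (4,)
```

Because the same weights are reused at every step, the final hidden state `h_final` summarizes the whole sequence — this is the hidden-state representation for sequential data that the abstract contrasts with a CNN's feature maps.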


Notes

  1. This might be different between CPU and GPU.


Copyright information

© 2018 Mathew Salvaris, Danielle Dean, Wee Hyong Tok


Cite this chapter

Salvaris, M., Dean, D., Tok, W.H. (2018). Recurrent Neural Networks. In: Deep Learning with Azure. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3679-6_7
