Abstract
The previous chapter showed how a deep learning model—specifically CNNs—could be applied to images. The process could be decoupled into a feature extractor that learns an optimal hidden-state representation of the input (in this case a vector of feature maps) and a classifier (typically a fully connected layer). This chapter focuses on the hidden-state representation of other forms of data and explores RNNs. RNNs are especially useful for analyzing sequences, which makes them well suited to natural language processing and time series analysis.
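The hidden-state idea the abstract describes can be sketched as a vanilla RNN cell that folds a sequence into a fixed-size vector, which a classifier could then consume. This is an illustrative NumPy sketch only; the dimensions, weight names, and the hand-rolled cell are assumptions for exposition, not the chapter's own framework code.

```python
import numpy as np

# Hypothetical dimensions: 4 input features per time step, hidden size 3.
rng = np.random.default_rng(0)
W_xh = rng.standard_normal((4, 3)) * 0.1   # input-to-hidden weights
W_hh = rng.standard_normal((3, 3)) * 0.1   # hidden-to-hidden weights
b_h = np.zeros(3)                          # hidden bias

def rnn_forward(sequence):
    """Fold a (T, 4) sequence into a final hidden-state vector of size 3."""
    h = np.zeros(3)  # initial hidden state
    for x in sequence:
        # Each step combines the current input with the previous hidden state,
        # so the final h summarizes the whole sequence.
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
    return h

sequence = rng.standard_normal((5, 4))  # 5 time steps, 4 features each
hidden = rnn_forward(sequence)
print(hidden.shape)  # the fixed-size representation a classifier would use
```

The final hidden state plays the same role the feature maps played for CNNs: a learned representation that a fully connected layer can classify.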
Notes
1. This might be different between CPU and GPU.
Copyright information
© 2018 Mathew Salvaris, Danielle Dean, Wee Hyong Tok
About this chapter
Cite this chapter
Salvaris, M., Dean, D., Tok, W.H. (2018). Recurrent Neural Networks. In: Deep Learning with Azure. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3679-6_7
DOI: https://doi.org/10.1007/978-1-4842-3679-6_7
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-3678-9
Online ISBN: 978-1-4842-3679-6
eBook Packages: Professional and Applied Computing, Apress Access Books, Professional and Applied Computing (R0)