Abstract
Why do we need a recurrent neural network (RNN)? Let’s answer that with an analogy. When reading a new article, people have two options. If they can’t understand it, they can read the articles it is based on for background information. Otherwise, they understand the new article using prior knowledge of the subject, with no immediate need to read related articles. In both cases, their ability to understand the new article rests on preexisting or previously acquired knowledge; they do not need to go back to learning the alphabet or numbers, only to grasp what this article is about. Recurrent neural networks work the same way: they carry forward a summary of what they have already processed and use it to interpret each new input.
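The analogy can be made concrete with a minimal sketch of a vanilla RNN cell in NumPy (an illustration of the general idea, not code from the chapter; the weight names and dimensions are chosen for the example). The hidden state `h` plays the role of the reader's accumulated prior knowledge: each new input is interpreted in light of everything seen so far.

```python
import numpy as np

def rnn_step(x, h, W_xh, W_hh, b_h):
    """One recurrent step: combine the new input x with the prior state h."""
    return np.tanh(x @ W_xh + h @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 4, 8, 5

# Randomly initialized weights (in practice these are learned).
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)              # no prior knowledge at the start
for t in range(seq_len):
    x_t = rng.normal(size=input_dim)  # one element of the input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)

print(h.shape)  # (8,)
```

Note that the same weights are reused at every step; only the state `h` changes, which is what lets the network apply fixed "knowledge" to sequences of any length.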
Copyright information
© 2020 Hisham El-Amir and Mahmoud Hamdy
Cite this chapter
El-Amir, H., Hamdy, M. (2020). Sequential Models. In: Deep Learning Pipeline. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-5349-6_12
DOI: https://doi.org/10.1007/978-1-4842-5349-6_12
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-5348-9
Online ISBN: 978-1-4842-5349-6
eBook Packages: Professional and Applied Computing; Professional and Applied Computing (R0); Apress Access Books