
Sequential Models


Abstract

So why do we need a recurrent neural network (RNN)? Let's answer that with an analogy. When reading a new article, people have two options. If they cannot understand it, they can read the articles it is based on for background information. Otherwise, they understand the new article from prior knowledge of the subject, with no immediate need to read related articles. In either case, their ability to understand the new article rests on preexisting or pre-acquired knowledge. They do not need to go back to the stage of learning the alphabet or numbers; they only need to know what the article is about. This is how recurrent neural networks work: they carry forward what they have already learned from earlier inputs instead of starting from scratch at every step.
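To make the analogy concrete, here is a minimal sketch of a simple RNN cell in NumPy. It is not taken from the chapter; the dimensions and weights are illustrative assumptions. The hidden state h plays the role of "prior knowledge": it is carried from one time step to the next and combined with each new input.

import numpy as np

# Illustrative dimensions (assumptions, not from the chapter)
input_size, hidden_size = 4, 3

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the "memory")
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One RNN time step: combine the new input with the previous hidden state."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# A toy sequence of 5 time steps
sequence = rng.normal(size=(5, input_size))

h = np.zeros(hidden_size)          # no prior knowledge at the start
for t, x_t in enumerate(sequence):
    h = rnn_step(x_t, h)           # h now summarizes everything seen up to step t
    print(f"step {t}: hidden state = {h.round(3)}")

Because h is reused at every step, the output at step t depends not only on the current input but on the whole sequence seen so far, which is exactly the "prior knowledge" idea in the analogy.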





Copyright information

© 2020 Hisham El-Amir and Mahmoud Hamdy

About this chapter


Cite this chapter

El-Amir, H., Hamdy, M. (2020). Sequential Models. In: Deep Learning Pipeline. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-5349-6_12

