
Resilient Distributed Datasets

Chapter in Beginning Apache Spark 2

Abstract

This chapter covers the oldest and most foundational concept in Spark: the resilient distributed dataset (RDD). To truly understand how Spark works, you must understand the essence of RDDs, because they provide the solid foundation on which Spark's other abstractions are built. The ideas behind RDDs are distinctive in the distributed data processing framework landscape, and they arrived at the right time to address the pressing complexity and efficiency problems of iterative and interactive data processing use cases. Starting with Spark 2.0, Spark users have less need to interact with RDDs directly, but a strong mental model of how RDDs work remains essential. In a nutshell, Spark revolves around the concept of RDDs.
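Two properties the abstract alludes to, immutability and lazy evaluation of a recorded chain of transformations, can be sketched with a toy plain-Python model. This is not Spark's actual API or implementation; the class `ToyRDD` and everything in it are invented purely for illustration:

```python
# Toy illustration (NOT Spark's implementation): an RDD-like object that
# records a lineage of transformations lazily and only computes results
# when an action such as collect() is called.

class ToyRDD:
    def __init__(self, data, transforms=None):
        self._data = list(data)              # source data (one partition, for simplicity)
        self._transforms = transforms or []  # recorded chain of transformations

    def map(self, fn):
        # Transformations are lazy: nothing is computed here. We only
        # extend the recorded chain and return a new, immutable ToyRDD.
        return ToyRDD(self._data, self._transforms + [("map", fn)])

    def filter(self, pred):
        return ToyRDD(self._data, self._transforms + [("filter", pred)])

    def collect(self):
        # An action triggers evaluation: replay the recorded chain.
        result = self._data
        for kind, fn in self._transforms:
            if kind == "map":
                result = [fn(x) for x in result]
            else:  # "filter"
                result = [x for x in result if fn(x)]
        return result

rdd = ToyRDD(range(1, 6))
evens_squared = rdd.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
print(evens_squared.collect())  # [4, 16]
```

Because each transformation returns a new object and leaves the original untouched, `rdd` can still be collected as-is afterward; in real Spark, this recorded chain (the lineage) is also what allows lost partitions to be recomputed.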


Notes

  1. “Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing”
  2. https://static.googleusercontent.com/media/research.google.com/en//archive/mapreduce-osdi04.pdf

Copyright information

© 2018 Hien Luu

About this chapter


Cite this chapter

Luu, H. (2018). Resilient Distributed Datasets. In: Beginning Apache Spark 2. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3579-9_3
