
Working with Data

Modern Data Engineering with Apache Spark

Abstract

The last chapter introduced you to the Spark architecture and programming model. We took a quick tour of the core Spark components and APIs and finished up with an exercise that introduced you to the spark-shell and the DataFrame API. You also caught your first glimpse of the Spark SQL API, which empowers you to express complex analytical queries quickly and easily in a structured way, while cleanly abstracting away the underlying complexities of composing difficult SQL expressions.
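
To make that concrete, here is a minimal sketch of the two styles the abstract refers to, written as you might type them into the spark-shell (where the spark session and its implicits are already available). The coffees sample data and column names are illustrative assumptions, not taken from the chapter.

// Minimal sketch: the same question asked through the DataFrame API and
// through Spark SQL. In spark-shell, `spark` and its implicits are preloaded;
// in a standalone application you would also add `import spark.implicits._`.
import org.apache.spark.sql.functions._

val coffees = Seq(
  ("espresso", 2.50),
  ("latte", 3.75),
  ("cold brew", 3.25)
).toDF("name", "price")

// DataFrame API: compose the query as structured method calls
coffees
  .filter(col("price") > 3.0)
  .agg(avg("price").as("avg_price"))
  .show()

// Spark SQL API: register a temporary view and express the same query as SQL
coffees.createOrReplaceTempView("coffees")
spark.sql("SELECT avg(price) AS avg_price FROM coffees WHERE price > 3.0").show()

Both forms are planned and optimized by the same engine, which is the abstraction the abstract alludes to: you can move between method calls and SQL text without changing how the query runs.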


Copyright information

© 2022 The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature

Cite this chapter

Haines, S. (2022). Working with Data. In: Modern Data Engineering with Apache Spark. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-7452-1_3
