Abstract
As Spark evolves and matures as a unified data processing engine, with more features in each new release, its programming abstraction evolves as well. The resilient distributed dataset (RDD) was the core programming abstraction when Spark was introduced to the world in 2012. In Spark version 1.6, a new programming abstraction, called the Structured APIs, was introduced. It is now the preferred way to perform data engineering tasks, such as processing data or building data pipelines. The Structured APIs were designed to enhance developer productivity with easy-to-use, intuitive, and expressive APIs. This new programming abstraction requires the data to be in a structured format and the data computation logic to follow a certain structure. Armed with these two pieces of information, Spark can perform sophisticated optimizations to speed up data processing applications.
Copyright information
© 2021 The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature
About this chapter
Cite this chapter
Luu, H. (2021). Spark SQL: Foundation. In: Beginning Apache Spark 3. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-7383-8_3
DOI: https://doi.org/10.1007/978-1-4842-7383-8_3
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-7382-1
Online ISBN: 978-1-4842-7383-8
eBook Packages: Professional and Applied Computing, Apress Access Books, Professional and Applied Computing (R0)