Abstract
In the last chapter, we looked at common patterns and techniques for harnessing the powerful core functionality available to us when transforming data with Spark SQL and the DataFrame APIs. While we certainly covered a lot of ground, we purposefully skipped over some of the more exciting capabilities under the Spark SQL umbrella. Along that line, wouldn't it make sense that we should be able to connect to and work directly with remote databases from the comfort of Apache Spark SQL? Wouldn't it also be advantageous to use SQL's strongly typed semantics when reading data into Spark? Couldn't we somehow marry the rich type systems inherent to Java and Scala with both SQL and the strong internal typing mechanics of Apache Spark itself? Luckily, that is exactly what you will learn to do in this chapter.
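To make the idea concrete before diving in, the sketch below shows what bridging Spark SQL with JDBC can look like in Scala: a remote table is read through Spark's built-in JDBC data source and then bound to a strongly typed Dataset. This is a minimal illustration, not a listing from the chapter; the connection URL, driver, table name, credentials, and the Customer case class are all hypothetical placeholders.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical case class describing the expected shape of the remote table.
case class Customer(id: Long, name: String, email: String)

object JdbcBridgeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-sql-jdbc-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Read the remote table through Spark's JDBC data source.
    // All connection options below are placeholders for illustration.
    val customersDf = spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://localhost:3306/shop") // hypothetical database
      .option("driver", "com.mysql.cj.jdbc.Driver")      // driver must be on the classpath
      .option("dbtable", "customers")                    // hypothetical table
      .option("user", "spark")
      .option("password", "sparkpass")
      .load()

    // Convert the untyped DataFrame into a typed Dataset[Customer],
    // marrying the JVM type system with Spark's internal typing.
    val customers = customersDf.as[Customer]
    customers.show(5)

    spark.stop()
  }
}
```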