Abstract
Before diving into what Kubernetes (K8s) is and how Apache Spark fits into the distributed K8s ecosystem, it is worth stating simply that Kubernetes enables Apache Spark applications to run in isolation, pairing elastic scalability with the runtime consistency of containers, co-located in independent micro-environments called pods (which you’ll learn about soon). Ultimately, you can rely on consistent runtime environments without dealing with the pain of multi-tenancy in a share-everything ecosystem. Instead, imagine each application running in its own isolated world, which at a high level behaves much like the local environments you’ve used to power Spark applications with docker-compose on the local data platform we’ve been constructing throughout this book.
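To make the idea concrete, a Spark application can be submitted directly to a Kubernetes cluster in cluster mode, where the driver and executors each run in their own pods. The sketch below uses Spark's documented Kubernetes options; the API server address, namespace, container image, and jar path are illustrative placeholders, not values from this book's platform.

```shell
# Sketch: submitting a Spark application to Kubernetes in cluster mode.
# The k8s:// master URL points at the Kubernetes API server.
# Image name, namespace, and jar path below are placeholder assumptions.
spark-submit \
  --master k8s://https://kubernetes.example.com:6443 \
  --deploy-mode cluster \
  --name isolated-spark-app \
  --conf spark.kubernetes.namespace=spark-apps \
  --conf spark.kubernetes.container.image=myrepo/spark-app:3.2.0 \
  --conf spark.executor.instances=2 \
  local:///opt/spark/app/app.jar
```

Because the image pins the Spark runtime, its dependencies, and the application code together, every pod launched from it sees the same environment — the container-level consistency the abstract describes.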
Copyright information
© 2022 The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature
Cite this chapter
Haines, S. (2022). Deploying Mission-Critical Spark Applications on Kubernetes. In: Modern Data Engineering with Apache Spark. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-7452-1_15
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-7451-4
Online ISBN: 978-1-4842-7452-1
eBook Packages: Professional and Applied Computing; Apress Access Books