Building and Deploying Container Workloads on Azure

Azure Kubernetes Service (AKS)

In this video you will learn about Azure Kubernetes Service (AKS), Azure’s option for the most advanced container scenarios.

Keywords

  • Azure
  • Containers
  • complex container scenarios
  • AKS
  • Kubernetes

About this video

Author(s)
Peter De Tender
First online
19 October 2021
DOI
https://doi.org/10.1007/978-1-4842-7807-9_7
Online ISBN
978-1-4842-7807-9
Publisher
Apress
Copyright information
© Peter De Tender 2021

Video Transcript

Cool. So up until now, we talked about Azure Container Registry, the landing place for offline container images, and walked you through two runtimes: Azure Container Instances, allowing you to run your container workloads, and Azure App Service, using a Web App for Containers. Both scenarios are quite useful, but maybe not good enough for the most advanced, most demanding container scenarios.

And that brings me to a third option in Azure: Azure Kubernetes Service. Azure Kubernetes Service, or AKS in short, is the ultimate way to host and run your container workloads on Azure. Bringing in everything from Kubernetes, but optimized to run as an Azure service, it’s almost the best of both worlds. The Kubernetes engine within AKS is 100% Kubernetes, providing built-in features like auto-scaling, rolling updates, service discovery, integrated load balancing, session affinity, resource sizing, and a lot more.

All of the complex tasks and scenarios that typically come up when running more demanding workloads are managed within Kubernetes. Technically, an AKS cluster is based on an Azure virtual machine scale set, where the nice thing is that you don’t have to, and technically cannot really, manage the underlying infrastructure components. It’s all offered as a service. Once AKS is deployed, you have a master node and worker nodes.

I like to describe the master node as the brain, because that’s where all the intelligence and the configuration lives. The worker nodes are, again, Azure virtual machines, and those are literally hosting the containers, which in Kubernetes terminology we call a pod. A pod by itself can be a single container or a collection of multiple containers. Now, given the fact that the AKS infrastructure relies 100% on Azure infrastructure as a service, it completely integrates with everything else that Azure offers from that perspective: virtual networks, network security groups, Azure load balancers or Application Gateway, allowing you to run public internet-facing container workloads or to keep everything private within the Azure network.
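To make that pod concept concrete, here is a minimal sketch of a pod definition grouping two containers in a single pod. All names and images are hypothetical and not taken from the demo:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
    spec:
      containers:
      # Main application container serving traffic on port 80
      - name: web
        image: nginx:1.21
        ports:
        - containerPort: 80
      # Sidecar container sharing the pod's network and lifecycle
      - name: sidecar
        image: busybox:1.34
        command: ["sh", "-c", "sleep 3600"]

Both containers share the same network namespace and are always scheduled together onto one worker node, which is exactly what makes the pod the unit of deployment in Kubernetes.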

Deploying an AKS cluster is rather straightforward, and I’ll show you parts of that in my upcoming demo. Just like any other Azure service, deployment is executed from the Azure portal, from the command line, or through template-based deployments, integrating with Azure Resource Manager templates or using Terraform. You could start small with a two- or three-node cluster. Even a single node would work, but you lose high availability there; it’s technically doable, though. From there, you scale out by adding additional virtual machines into the cluster, as sketched below.
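As an illustration, a minimal command-line deployment could look like the following. The resource group and cluster names are placeholders, not values from the demo:

    # Create a resource group to hold the cluster
    az group create --name rg-aks-demo --location westeurope

    # Deploy a three-node AKS cluster
    az aks create \
      --resource-group rg-aks-demo \
      --name aks-demo \
      --node-count 3 \
      --generate-ssh-keys

    # Scale out later by adding virtual machines to the node pool
    az aks scale \
      --resource-group rg-aks-demo \
      --name aks-demo \
      --node-count 5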

Once AKS is up and running, you can use a Kubernetes YAML definition file, Helm charts, or Azure DevOps or any other DevOps tool supporting CI/CD pipeline mechanisms to go from source code to a base container to hosting it on AKS. And AKS itself supports both Linux and Windows as the underlying container nodes.
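As a sketch of such a YAML definition file, the following deployment runs two replicas of a container image pulled from ACR. The image path and names are hypothetical:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
          - name: demo-app
            # Image served from an Azure Container Registry instance
            image: myregistry.azurecr.io/demo-app:v1
            ports:
            - containerPort: 80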

In a full flow, the scenario could look like this. On the left-hand side, we start with a development environment. Just to name some examples, that could be Visual Studio or VS Code integrating with Azure Dev Spaces and building out a dev-and-test cluster. From there, you might integrate with Azure DevOps Repos, maybe using GitHub or GitHub Enterprise as the source control mechanism, and create configuration deployment scripts out of Helm charts.

Azure Container Registry will be used out of our CI/CD pipelines to provide automatic builds and create new offline container images. Once the container lands in Azure Container Registry, we move it up and run it in our AKS production cluster and, from there, perform optimization and monitoring out of, for example, Azure Monitor. So, joking aside: although the deployment itself shouldn’t always be that hard, deploying AKS really entails a full IT project, starting from the developer responsible for the source code, responsible for creating the actual container image, and moving it into Azure Container Registry by using some DevOps tool, where I’m using Azure DevOps just as an example.
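Within such a pipeline, the build-and-push step against ACR can be a single command. The registry and image names here are placeholders:

    # Build the image in the cloud and push it straight into the registry
    az acr build --registry myregistry --image demo-app:v1 .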

From there, it moves over to the operations team, which takes control of the AKS control plane, managing the infrastructure, managing the AKS service, and eventually integrating with Azure Monitor and Azure Log Analytics to validate the end-to-end runtime of the workload and the full cluster-based scenario.

A last couple of words here around the actual operations and management of your AKS cluster and workloads: the easiest approach is starting from a Kubernetes YAML file, which by itself is a configuration definition file holding all the necessary parameters to inform AKS how to run the container. Most operations are managed from the Kubernetes kubectl command line, a tool that comes with Kubernetes in Azure but also in other Kubernetes-hosted service environments.
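A few day-to-day kubectl operations could look like this, assuming the hypothetical demo-app deployment file sketched earlier:

    # Apply (create or update) the workload described in the YAML file
    kubectl apply -f demo-app.yaml

    # Inspect the pods backing the deployment
    kubectl get pods

    # Read the logs of a single pod (the pod name is a placeholder)
    kubectl logs demo-app-12345

    # Scale the deployment out to four replicas
    kubectl scale deployment demo-app --replicas=4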

Other options, a little bit more Azure or Microsoft oriented, are managing it from Visual Studio Code, using the VS Code extension for Kubernetes, which helps take away some of the complexity of the command line, or using the integration with Azure Monitor out of the Azure portal.

So then let’s switch to yet another demo, where I’m going to show you what it takes to deploy an AKS cluster environment. From there, I’ll walk you through the core basics of creating a Kubernetes YAML file, establishing the integration with the ACR that I talked about in a previous video, and showing how it allows you to run your workloads on Azure. The first step would be building up an integration between kubectl as the command-line tool and your AKS environment. You would use az aks get-credentials, the command pointing to the resource group in Azure and at the actual cluster.
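That first step boils down to the following two commands; the resource group and cluster names are again placeholders:

    # Merge the cluster's access credentials into the local kubeconfig
    az aks get-credentials --resource-group rg-aks-demo --name aks-demo

    # Verify the connection by listing the worker nodes
    kubectl get nodes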

And from there, without any typos at least, you can run kubectl get nodes. And again, I’m using the default, which means I have three virtual machines with a status of Ready. They’re waiting to be used. One of them has been running for more than a year already, so pretty highly available, no downtime whatsoever, and I’ve been expanding gradually over some demos.

And I’m using version 1.18. And that’s interesting, to know a little bit about supportability: when we receive the new Kubernetes bits, we test them on the Azure platform.
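Checking which Kubernetes versions are supported, and moving a cluster to a newer one, is also a CLI operation. The region, names, and target version below are placeholders:

    # List the Kubernetes versions currently available for AKS in a region
    az aks get-versions --location westeurope --output table

    # Upgrade the cluster to a newer supported version
    az aks upgrade --resource-group rg-aks-demo --name aks-demo --kubernetes-version 1.21.2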