Kubernetes is an open-source system for automating the deployment, scaling and management of containerised applications. AKS (Azure Kubernetes Service) is Microsoft’s managed Kubernetes service running in Azure.
We will get into more technical detail as we go further into this blog, but to start it is worth understanding why AKS is so popular and why it fits so well with companies’ movement towards a microservice design coupled with DevOps processes (CI/CD). Simply put, AKS is a core element of that design.
In today’s world of application development, few designs are based on a monolithic approach; microservices is the preferred architectural style. In a monolith, all logic is embedded within a single unit, with everything tightly coupled together. This has an impact on scalability, agility and resiliency. The code base becomes complex, which ultimately leads to slower releases, longer feedback loops and exhaustive testing to get your product updates live.
The foundations of a microservices approach generally are:
- Split code into units based on business functionality.
- Ideally, services should not share a common database.
- Each microservice is independent, so it can be deployed independently.
- Use of a Gateway is encouraged.
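As a sketch of the "deploy independently" point, each service typically gets its own Kubernetes manifest and is rolled out on its own cadence. The service and registry names below are hypothetical:

```shell
# Hypothetical example: each microservice has its own manifest and is
# deployed separately, without touching the other services.
kubectl apply -f orders-deployment.yaml      # deploys the orders service only
kubectl apply -f payments-deployment.yaml    # deploys the payments service only

# Rolling out a new image version for one service leaves the rest untouched.
kubectl set image deployment/orders orders=myregistry.azurecr.io/orders:v2
```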
So, with these core foundational concepts understood, the below diagram summarises a high-level design.
Where does AKS fit in here? It becomes the technology where you run your workloads as containers (an example would be an ASP.NET Core web app for your service), with Kubernetes as the container orchestration platform.
We have spoken about AKS and the movement towards DevOps and microservices, but we cannot forget about containers. Containers are the next step in the evolution of virtualisation. They are more lightweight than virtual machines because containers virtualise the operating system, allowing many workloads to run on a single OS. This is not the case with traditional virtualisation technologies.
The advantages of this include:
- Less overhead – looking at the above image, it is clear that less compute is needed to run the application.
- Better consistency – as part of the DevOps process, you know the image is the same, runs the same and will behave the same in every environment.
- Better efficiency – a container approach allows for fast patching and development.
This is then the foundation for building our apps.
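As a concrete sketch of that foundation, here is how a container image for an ASP.NET Core service might be built and run locally. The Dockerfile contents, app name and tags are all illustrative:

```shell
# Illustrative Dockerfile for an ASP.NET Core app (paths and names are assumptions)
cat > Dockerfile <<'EOF'
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
EOF

docker build -t myapp:1.0 .                # build the image once
docker run -d -p 8080:80 myapp:1.0        # the same image runs the same everywhere
```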
So, we know that we will be building an image that ultimately gets deployed into AKS – how does this fit in with DevOps and CI/CD? This is a typical architecture from Microsoft: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/secure-devops-for-kubernetes. I want to highlight the key points of how the CI/CD elements connect to AKS.
As you can see from steps 1 and 2, this is where the developer works and commits their code to source control. At this stage there may be a GitHub Action that responds and triggers a release/deployment pipeline at step 3.
The container image is stored in ACR – Azure Container Registry – as shown at step 5 (the above diagram shows this was pushed to the ACR beforehand). At step 4, Helm charts (a packaging tool that helps you install and manage the lifecycle of Kubernetes applications) are used to get ready for deployment. The other elements in the diagram, steps 6-9, are about auditing, monitoring and governance.
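A hedged sketch of how a pipeline might perform the image push and Helm deployment described above; the registry, resource group, cluster and chart names are all hypothetical:

```shell
# Build the image directly in Azure Container Registry (the push to ACR)
az acr build --registry myacr --image myapp:1.0.0 .

# Fetch cluster credentials, then deploy the release via its Helm chart
az aks get-credentials --resource-group myrg --name mycluster
helm upgrade --install myapp ./charts/myapp \
  --set image.repository=myacr.azurecr.io/myapp \
  --set image.tag=1.0.0
```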
Technical details of Azure Kubernetes Services (AKS)
As mentioned earlier, AKS is Microsoft’s managed Kubernetes service, where they support, install and build the infrastructure needed for Kubernetes, which is not an easy thing to do. Amazon and Google also provide managed Kubernetes services, but we will look at AKS today.
When you create a cluster, Microsoft manages the AKS control plane and you only pay for the nodes that run your applications (customer managed section shown below).
Let’s focus on the customer-managed section. The node is important: this is the compute for your workloads. The VM size for your nodes defines the CPUs and memory available, and the OS can be Ubuntu or Windows Server 2019. Under the hood, AKS uses virtual machine scale sets for node pools. Each node runs a kubelet, an agent that manages the node and communicates with the Kubernetes control plane. Within a node there is the concept of a pod. A pod is the smallest, most basic deployable object in Kubernetes and represents a single instance of a running process in your cluster.
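Once connected to a cluster, kubectl makes these node/pod relationships visible; for example:

```shell
kubectl get nodes -o wide                  # the VMs (scale set instances) doing the work
kubectl get pods --all-namespaces -o wide  # pods, and which node each one landed on
kubectl describe node <node-name>          # capacity, kubelet version, pods on that node
```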
The concepts and relationships can be quite tricky to grasp at the start. I suggest further reading on this topic which can be found here: https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/
Creating Azure Kubernetes Services (AKS)
With Azure you have many ways of creating an AKS cluster, including the Azure portal itself, the Azure CLI, ARM templates, PowerShell and even Terraform, a very popular third-party tool in the IaC (Infrastructure as Code) space. For this blog post we will walk through creating one via the Azure portal.
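For comparison, the Azure CLI equivalent of the portal wizard is essentially one command. A minimal sketch, where the resource group, cluster name, VM size and counts are assumptions you would tune to your own requirements:

```shell
az group create --name myrg --location uksouth

# Three availability zones and autoscaling, mirroring the portal options
az aks create \
  --resource-group myrg \
  --name mycluster \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2 \
  --zones 1 2 3 \
  --enable-cluster-autoscaler --min-count 3 --max-count 6 \
  --generate-ssh-keys
```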
Click create cluster and work through the wizard. Let’s look at some important areas.
As you can see above, the first screen covers the basic information. For production I highly recommend using 3 availability zones; the node size and autoscaling options should be tweaked as per your requirements.
The next section is about your node pools. User node pools serve the primary purpose of hosting your application pods. You could run your apps on the system node pool, but I do not usually do this.
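A dedicated user node pool can also be added after the fact from the CLI; a sketch with illustrative names and sizes:

```shell
# Add a user-mode node pool to host application pods (names are illustrative)
az aks nodepool add \
  --resource-group myrg \
  --cluster-name mycluster \
  --name userpool1 \
  --mode User \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2
```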
The idea of virtual nodes is that they allow you to burst out containers to nodes backed by serverless Azure Container Instances. This can provide fast burst scaling; please see the following for more details: https://docs.microsoft.com/en-gb/azure/aks/virtual-nodes-portal
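Virtual nodes can also be switched on as an add-on from the CLI. Note that the feature requires Azure CNI networking and a dedicated subnet; the subnet name here is an assumption:

```shell
# Enable the virtual-node add-on (requires Azure CNI and a dedicated subnet)
az aks enable-addons \
  --resource-group myrg \
  --name mycluster \
  --addons virtual-node \
  --subnet-name VirtualNodeSubnet
```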
From an authentication perspective, you will want to think about Azure AD Auth as shown below.
If you enable this option, you can then assign Kubernetes roles to groups or users within Azure AD. In the next phase, you could integrate your apps running in AKS with Azure AD, meaning the pods can authenticate to other Azure resources such as Azure Key Vault – this is sometimes called Azure AD pod-managed identity.
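With the Azure AD integration enabled, Kubernetes roles can be bound to Azure AD groups by their object ID. A sketch, where the group object ID is a placeholder you would replace with your own:

```shell
# Enable AKS-managed Azure AD integration on an existing cluster
az aks update --resource-group myrg --name mycluster \
  --enable-aad --aad-admin-group-object-ids <aad-group-object-id>

# Bind the built-in read-only "view" role to an Azure AD group
kubectl create clusterrolebinding dev-view \
  --clusterrole=view --group=<aad-group-object-id>
```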
The networking section requires further reading. As you can see below, you need to think about this carefully, especially if you want to use the private cluster feature.
Please see the following guide from Microsoft – https://docs.microsoft.com/en-gb/azure/aks/configure-azure-cni
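As a sketch of what those networking choices look like at creation time, here is Azure CNI with an existing VNet subnet plus the private cluster option; the subscription, VNet and subnet identifiers are placeholders:

```shell
# Sketch: Azure CNI networking against an existing subnet, as a private cluster
az aks create \
  --resource-group myrg \
  --name mycluster \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/<sub-id>/resourceGroups/myrg/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/aks \
  --enable-private-cluster \
  --generate-ssh-keys
```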
Now finally you are ready to think about what other Azure Integrations you would like to set up at build time.
As you can see above, I linked my AKS to my ACR (where my app images are held) and I enabled Azure Monitor.
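The same integrations can also be applied to an existing cluster from the CLI; the registry and cluster names below are illustrative:

```shell
# Grant the cluster pull access to an existing Azure Container Registry
az aks update --resource-group myrg --name mycluster --attach-acr myacr

# Enable the Azure Monitor (Container insights) add-on
az aks enable-addons --resource-group myrg --name mycluster --addons monitoring
```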
Once done hit the create button.
Then you will have the ability to use the Azure CLI to connect to the cluster and issue kubectl commands. This is useful once you have deployments in AKS.
arun@Azure:~$ az account set --subscription 713e8a43
arun@Azure:~$ az aks get-credentials --resource-group acisql --name blobeatercluster
$ kubectl get deployments --all-namespaces=true
Having Kubernetes in a managed environment like Azure makes things like upgrading versions very easy.
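As a sketch of how easy that upgrade path is, checking and moving to a new Kubernetes version is two CLI calls; the version number shown is illustrative:

```shell
# See which Kubernetes versions the cluster can move to
az aks get-upgrades --resource-group myrg --name mycluster --output table

# Upgrade the control plane and nodes to a chosen version
az aks upgrade --resource-group myrg --name mycluster --kubernetes-version 1.24.9
```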
You also have access to some handy metrics as shown below.
You can further customise this by adding your own metrics and filters.
Hopefully from this introduction to AKS you can see the benefits of using Kubernetes, from faster developer productivity to becoming a core component when moving to a microservice approach. When running it on Azure, Microsoft does the hard work, such as the advanced infrastructure setup and configuration needed to run a successful cluster.