We live in a time of renaissance for what is now called DevOps maturity. It has never been easier to develop complex, distributed software systems and deploy them to different environments. Much of this ease is courtesy of Kubernetes (K8s), an open-source system for automating the deployment, scaling, and management of containerized applications. But Kubernetes by itself doesn't complete the picture; several other popular open-source systems contribute greatly to this renaissance.
Helm helps you manage Kubernetes applications: Helm charts let you define, install, and upgrade even the most complex Kubernetes application. Helm helps you provision and deploy your services in a Kubernetes cluster, but how can you easily provision the cluster itself? The best way to do that today is with Terraform, an open-source Infrastructure as Code (IaC) tool that provides a consistent Command Line Interface (CLI) workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files.
I maintain a repo on GitHub where I implement feature-identical microservices in various programming languages and technology stacks. Part of the evaluation process is to run each service implementation in a load test lab. This blog covers how I use Kubernetes, Helm, and Terraform to provision, configure, launch, collect, and report on the results of that load test lab.
In 2014, Google introduced Kubernetes as an open-source version of Borg, its proprietary large-scale cluster management system. Before that, the only open-source container orchestration and management system was Apache Marathon on Mesos. The industry quickly embraced Kubernetes over Mesos and Docker Swarm.
With Kubernetes, you build and package your software into Docker images. You apply manifests (written in YAML) that tell the control plane how to run each image as a container on the nodes in the cluster. There are many different types of manifests. For the load test lab, I use services, deployments, config maps, jobs, and ingresses. An ingress is one way to access a deployed service from outside the cluster. There are two other mechanisms for doing that: declare the service as type LoadBalancer in its manifest, or port forward to the service. The control plane provides an API through which you can apply manifests or port forward; the kubectl program provides CLI access to that API.
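To make this concrete, here is a minimal sketch of a deployment manifest like the ones the lab uses. The names, image, and port are hypothetical, not from the actual repo:

```yaml
# deployment.yaml: a minimal Deployment that runs two replicas of a service.
# Applied with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: feedback-service          # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: feedback-service
  template:
    metadata:
      labels:
        app: feedback-service
    spec:
      containers:
      - name: feedback-service
        image: example/feedback-service:1.0.0   # hypothetical image
        ports:
        - containerPort: 8080
```

A corresponding Service manifest would select pods by the `app: feedback-service` label and expose port 8080 inside the cluster.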
The same service will have different values depending on the environment it runs in. A Kubernetes config map can help you parameterize some of those values, but not all of them. For example, a config map can tell your service which host runs the database it needs to connect to, but it cannot drive how many replicas of your service to run, what the service's compute resource limits should be, or which version of the software to load.
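A config map for the database-host case might look like this sketch (names and values are placeholders):

```yaml
# configmap.yaml: environment-specific values a service can read at runtime,
# e.g. via envFrom or a mounted volume. Keys and values are hypothetical.
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-config
data:
  DB_HOST: postgres.default.svc.cluster.local
  DB_PORT: "5432"
```

Notice that replica counts, resource limits, and image versions live in the deployment manifest itself, which is exactly why a config map alone is not enough.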
That is why it is important to have a template of each manifest that you can resolve with parameters before applying changes to the control plane. The requirement is so important that many competing open-source projects address it; I have personally run into fabric8, jkube, sbt-kubeyml, and bazelbuild/rules_k8s, all of which do this. Helm was the first templating system for Kubernetes. It is the most popular and, in my opinion, it is the best.
With Helm, you create a chart for your entire stack. You have template versions of each manifest and a configuration for each environment. You manage releases by installing or uninstalling the desired configuration via the Helm CLI tool. Each template is a mixture of K8s manifest YAML, the Go template language, and Sprig template functions.
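Here is a sketch of what such a template looks like. The value names (`replicaCount`, `image.repository`, `image.tag`) are illustrative conventions, not taken from the actual chart:

```yaml
# templates/deployment.yaml (excerpt): plain K8s YAML interleaved with
# Go template expressions; `default` is a Sprig function.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-service
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: service
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
```

Installing the chart with `helm install lab ./chart -f values-dev.yaml` resolves the template against the environment's values file and applies the result.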
Helm makes it easy to deploy your services to a Kubernetes cluster, but how can you provision the cluster itself? In the old days, you had to either use a complicated web app manually or configure and run rudimentary, overly complicated, and buggy software such as kops or kube-aws. A CLI tool called kubeadm gives you a low-level way to create a control plane and join nodes to it; you might consider kubeadm for creating a cluster outside of any cloud. Each major cloud vendor also provides this capability via its own Software Development Kit (SDK), such as the eksctl, gcloud, or az CLI tools.
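For example, the vendor-SDK route on AWS is a single eksctl invocation. This is a sketch; the cluster name, region, and sizing are placeholders:

```shell
# Create a small EKS cluster with one managed node group (hypothetical values).
eksctl create cluster --name load-test-lab --region us-west-2 \
  --nodegroup-name workers --node-type m5.large --nodes 3
```

This works well interactively, but it is imperative rather than declarative, which is where Terraform comes in.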
Terraform was first introduced by HashiCorp at about the same time as Kubernetes. You use Terraform to fully automate the provisioning of ephemeral compute resources. This is critical for adopting Continuous Integration / Continuous Deployment (CI/CD) workflows that require provisioning the cluster or any Platform as a Service (PaaS) resources. With Terraform, you have a fully functional change automation solution built on a resource graph with an execution plan.
In the configuration, you specify Terraform settings, a provider, and resources and variables, usually grouped into modules. There are three kinds of variables: input, output, and local. They can be static values, expressions, or function calls. Terraform manages internal state in order to map real-world resources to your configuration, keep track of metadata, and improve performance for large infrastructures.
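The three kinds of variables can be sketched in a few lines of HCL (the names here are illustrative):

```hcl
# Input variable: supplied by the caller (CLI flag, tfvars file, or module block).
variable "node_count" {
  type    = number
  default = 3
}

# Local value: an expression computed from other values.
locals {
  cluster_name = "load-test-${var.node_count}-nodes"
}

# Output value: exposed to the caller after apply.
output "cluster_name" {
  value = local.cluster_name
}
```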
A provider is a Go program that plugs into the Terraform CLI. Each provider publishes a schema that, when applied, uses the resources and variables to drive calls to the underlying cloud vendor's SDK. HashiCorp maintains a publicly available registry of these providers and modules, contributed by many different organizations.
There are multiple providers for provisioning Kubernetes clusters on the three major cloud vendors' managed offerings: EKS, GKE, and AKS. You can specify all of the important configuration settings, such as node instance type, maximum number of nodes, and Virtual Private Cloud (VPC) subnet IP ranges.
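As a sketch, provisioning EKS through the community terraform-aws-modules/eks module looks roughly like this. Attribute names vary between module versions, and every value here is a placeholder:

```hcl
# Sketch: an EKS cluster with one managed node group, wired into a VPC
# created by a sibling module (not shown). Values are illustrative.
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "load-test-lab"
  cluster_version = "1.21"
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets

  eks_managed_node_groups = {
    workers = {
      instance_types = ["m5.large"]   # node instance type
      min_size       = 1
      max_size       = 5              # max number of nodes
      desired_size   = 3
    }
  }
}
```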
Although you can deploy databases and other stateful services inside Kubernetes, current IT consensus is to run these types of components outside of the Kubernetes cluster in production environments. To that end, there are Terraform provider plugins for just about any PaaS you can imagine on the three most popular cloud vendors: AWS, GCP, and Azure. You may also decide to use PaaS in development environments in order to maintain dev/prod parity. PaaS offerings include SQL (MySQL or PostgreSQL), Kafka, Elasticsearch, caching (Memcached or Redis), and MongoDB. At the time of this writing, the Terraform registry includes 1,315 providers and 6,701 modules.
Here are some lessons learned while using Helm and Terraform with Kubernetes to run the load test lab on the different public clouds.
Personally, I prefer HashiCorp's own collection of provider plugins and modules because they are mature, full-featured, flexible, and stable, yet easy to use.
You can run Terraform either locally on your machine or in the vendor's cloud shell, which is a web app designed to look like a CLI. The cloud shell may seem easier because it comes pre-installed and pre-configured with the vendor's SDK, but I prefer to run Terraform locally. It takes more setup to install and configure everything, but you can then easily port forward to anything running in your cluster.
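The local workflow boils down to a few commands. The service name and ports in the port-forward example are hypothetical:

```shell
# Provision the cluster from the local Terraform configuration.
terraform init
terraform apply

# With kubectl configured for the new cluster, tunnel a cluster-internal
# service to the local machine (service name and ports are placeholders).
kubectl port-forward svc/grafana 3000:3000
```

From the cloud shell, that last step is awkward or impossible, which is the main reason I run everything locally.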
I keep talking about CLIs, but it is not too hard to get any of these tools running properly in a Jenkins pipeline or a GitHub Action.
In order to complete the load test evaluation, I need to access the Kibana and Grafana web applications. I also download the performance results from Elasticsearch for subsequent analysis in Apache Druid.
Typically, you have to load a database's schema before a service can use it. For the load test lab, I simply deploy the databases inside the cluster. You can initialize a database however you like with commands specified in the postStart handler of the deployment manifest. With that approach, the database is properly initialized by the time the K8s control plane sees it as available.
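A sketch of that postStart technique, with a hypothetical image, script path, and command:

```yaml
# Deployment excerpt: run schema setup immediately after the container starts.
# The postStart hook blocks the container from being marked ready until it
# completes. Image and script path are placeholders.
spec:
  containers:
  - name: db
    image: postgres:13
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - until pg_isready -U postgres; do sleep 1; done; psql -U postgres -f /schema/init.sql
```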
I use conditional expressions a lot in my Helm chart templates. I use Kubernetes in three ways here: backend development with the integration test, frontend development with a GUI, and running the actual load test itself. Different services need to be deployed in each of these configurations. You can conditionally apply an entire manifest based on configuration values used as logic switches.
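Wrapping a whole manifest in a conditional looks like this sketch (the value names and image are hypothetical):

```yaml
# templates/load-test-job.yaml: the entire Job manifest is emitted only when
# the chart is installed with loadTest.enabled=true.
{{- if .Values.loadTest.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-load-test
spec:
  template:
    spec:
      containers:
      - name: load-test
        image: {{ .Values.loadTest.image }}
      restartPolicy: Never
{{- end }}
```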
Currently, there are thirteen different implementations of the service being load tested, and they don't all rely on precisely the same configuration. Which implementation to test is identified by a value in the configuration. The deployment manifest relies on that value not only to load the right image but also to conditionally determine how to configure the environment variables for the Docker container.
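That per-implementation switching can be sketched like this. The implementation names, image coordinates, and environment variables are all placeholders, not the actual thirteen implementations:

```yaml
# Deployment excerpt: pick the image and environment based on the configured
# implementation (.Values.impl); all names here are hypothetical.
containers:
- name: service
  image: "example/{{ .Values.impl }}:{{ .Values.version }}"
  env:
  {{- if eq .Values.impl "scala-service" }}
  - name: JAVA_OPTS
    value: "-Xmx512m"
  {{- else if eq .Values.impl "go-service" }}
  - name: GOMAXPROCS
    value: "2"
  {{- end }}
```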
There are Helm chart libraries publicly available on the Internet, but I have not had much success with them because, frankly, what I have found so far is not very high quality. It is better to write and maintain your own Helm charts than to depend on others.
Let's talk a little about the individual cloud providers themselves. Under the covers, AWS depends on CloudFormation to provision EKS, which in turn depends heavily on IAM. Make sure the account you configured the AWS CLI to use has sufficient privileges for provisioning EKS. Don't attempt to reuse an existing VPC: create a new VPC every time you create an EKS cluster, and destroy that VPC every time you decommission the corresponding cluster.
GKE used to be the sweetest K8s cluster implementation around, but a recent change to the default networking renders LoadBalancer service types ineffective. It is also tricky to get the ingress right because some apps use funky redirects, which is why I always use port forwarding now. With the GKE provider, you have to specify the project_id and region variables in the configuration, so you must edit the terraform.tfvars file before you can provision the cluster.
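The edit in question is tiny; the project and region values below are placeholders:

```hcl
# terraform.tfvars for the GKE configuration (placeholder values).
project_id = "my-gcp-project"
region     = "us-west1"
```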
I was hoping to include an evaluation of Terraform-provisioned AKS in this blog, but a bug in Azure's sign-up funnel prevented me from doing that. Even though I was attempting to subscribe with a Microsoft account correctly identified as living in California, the part of the funnel where I enter the credit card insisted that I live in Canada. I attempted to remedy that through a chat session with customer support; I disclosed all kinds of personal information over a long period of time but never got unblocked.
CI/CD-powered microservice architecture has come a long way in reducing release anxiety. These favorable results would never have occurred without sufficient DevOps maturity. A completely automated and reliable way to provision, configure, and deploy entire stacks of services in developer, integration test, and production environments is key to the current success of modern enterprise software development. Kubernetes, Terraform, and Helm have emerged as the best technologies for provisioning, managing, and orchestrating container-based services.