Automating quality checks for Kubernetes YAMLs

If you have ever wondered how to make sure that your Kubernetes YAML objects are defined correctly and follow industry best practices, this blog post is for you. In a few paragraphs, I’ll show you how to create a GitHub Actions workflow that first analyzes K8s object definitions using Datree, then deploys them on a cluster and runs some tests.

  • Deployment and testing of an example application in a real cluster (GKE).

Workflow structure

First of all, you need a GitHub repository to hold your YAML files. I’m using my old project — k8s-helm-helmfile. This repository has three folders, each containing a different approach to deploying applications into Kubernetes clusters. You can read more about those approaches in my previous blog posts about vanilla K8s, Helm and helmfile deployments.

  • on - the condition that triggers the workflow. The workflow will only start when changes are committed on the master branch.
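Assuming a single-job workflow, the skeleton could look like this (the workflow and job names are illustrative, and the commented-out steps are filled in over the rest of the post):

```yaml
name: K8s quality checks
on:
  push:
    branches:
      - master
jobs:
  quality-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # next steps: Datree analysis, GKE deployment, smoke tests
```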

Datree analysis

In this part you will use a free tool called Datree, which analyzes K8s definitions and will stop the workflow if it finds any problems. It’s very important to have a safety net like this, so you can feel confident that even if you make a mistake, or aren’t aware of best practices, an assistant will keep you on track.

> curl https://get.datree.io | /bin/bash
> datree test adminer-deployment.yaml
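Wired into the workflow, those two commands become a single step. A sketch — the install one-liner is Datree’s documented installer at the time of writing, and the manifest path is a placeholder you should adapt to your repo layout:

```yaml
- name: Datree policy check
  run: |
    curl https://get.datree.io | /bin/bash
    datree test adminer-deployment.yaml
```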

Testing on GKE

After making sure that the templates are okay from Datree’s point of view, move on to deploying them to a real Kubernetes cluster (which is not production) and check that everything works there, e.g. that the website is available over the internet.
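The availability check at the end of the pipeline can be as simple as a curl probe with retries. A minimal sketch — the helper name, URL, attempt count and delay are all placeholders of mine, not part of the original setup:

```shell
# Wait until a URL responds successfully, retrying a few times.
wait_for_http() {
  url=$1
  attempts=${2:-10}
  delay=${3:-5}
  i=1
  while [ "$i" -le "$attempts" ]; do
    # -f: fail on HTTP errors, -s: silent, -S: still show errors
    if curl -fsS -o /dev/null "$url"; then
      echo "OK: $url is reachable"
      return 0
    fi
    echo "attempt $i/$attempts failed; retrying in ${delay}s"
    i=$((i + 1))
    sleep "$delay"
  done
  echo "FAIL: $url not reachable after $attempts attempts" >&2
  return 1
}

# Example (placeholder IP of the exposed Service):
# wait_for_http "http://203.0.113.10:8080" 10 5
```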

> gcloud config set project k8s-helm-helmfile
> gcloud config set compute/region europe-central2
> gcloud config set compute/zone europe-central2-a
> gcloud iam service-accounts create helm-github-actions-service
> gcloud iam service-accounts list
Note that gcloud accepts a single --role flag per invocation, so each role has to be bound separately:

> for role in roles/container.admin roles/storage.admin roles/container.clusterAdmin roles/iam.serviceAccountUser; do
>   gcloud projects add-iam-policy-binding k8s-helm-helmfile \
>   --member=serviceAccount:<EMAIL> \
>   --role="$role"
> done
> gcloud iam service-accounts keys create key.json --iam-account=<EMAIL>
> export GKE_SA_KEY=$(cat key.json | base64)
> printenv GKE_SA_KEY
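The printed key is meant to be stored as a GitHub repository secret (here named GKE_SA_KEY) and consumed in the workflow. One way to sketch this uses the google-github-actions actions — the cluster name is a placeholder, and the inputs shown should be double-checked against the actions’ documentation:

```yaml
- uses: google-github-actions/auth@v1
  with:
    credentials_json: ${{ secrets.GKE_SA_KEY }}
- uses: google-github-actions/get-gke-credentials@v1
  with:
    cluster_name: my-cluster          # placeholder: your GKE cluster
    location: europe-central2-a
```

With cluster credentials in place, the Helm deployment step itself is driven by a few parameters: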
  • namespace - specifies the K8s namespace where the app will be installed,
  • chart - gives information about the location of a Helm chart,
  • helm - indicates which version of Helm will be used,
  • value-files - a file used to override the default values from a Helm chart; in my case it's Adminer's values.yaml file (the Helm chart I use for testing, which deploys the popular database client Adminer),
  • values - this parameter works much the same as the previous one: it overrides the default values from a Helm chart, but instead of using a file, you can specify the values to override directly. Here I'm overriding only the Service type, which defaults to ClusterIP, because I don't want to change it in the adminer.yaml file.
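Put together, those parameters map onto a single deploy step. A sketch using the deliverybot/helm action, whose inputs carry these names — the release name, paths and secret name are assumptions based on my setup, so adjust them to yours:

```yaml
- name: Deploy Adminer
  uses: deliverybot/helm@v1
  with:
    release: adminer
    namespace: adminer
    chart: ./helm/adminer            # placeholder chart path
    helm: helm3
    value-files: ./helm/adminer/values.yaml
    values: |
      service:
        type: LoadBalancer
  env:
    KUBECONFIG_FILE: ${{ secrets.KUBECONFIG }}
```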


And that’s it for today! I hope that this blog post encourages you to build something like this on your own, give Datree (or any other static code analysis tool) a try, and set up a cluster for automated tests, so you will feel more confident about the changes you made in your code base. It can all be set up and operational in a flash.

Java Software Developer, DevOps newbie, constant learner, podcast enthusiast.