Deployment of multiple apps on Kubernetes cluster — Walkthrough

In this blog post I would like to show you how you can deploy a couple of applications on a minikube (local Kubernetes) cluster.

Architecture

Before getting our hands dirty, let’s look at the overall architecture that we want to deploy:

  • a PostgreSQL database,
  • a backend service (kanban-app, written in Java with Spring Boot),
  • a frontend (kanban-ui, written with the Angular framework),
  • and Adminer, a database management UI.

The UI applications will be reachable from outside the cluster under the adminer.k8s.com and kanban.k8s.com hosts.

Install Docker, kubectl & minikube

First you need to install all the necessary dependencies. Here are links to the official documentation covering the most popular OSes:

  • Docker (the container engine, also used here as minikube’s driver),
  • kubectl (a CLI tool to interact with the cluster),
  • minikube (a locally installed, lightweight Kubernetes cluster).

Start minikube

Once you’ve got everything installed, you can start the minikube cluster by running this command in a terminal:

$ minikube start
😄 minikube v1.8.1 on Ubuntu 18.04
✨ Automatically selected the docker driver
🔥 Creating Kubernetes in docker container with (CPUs=2) (8 available), Memory=2200MB (7826MB available) ...
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🚀 Launching Kubernetes ...
🌟 Enabling addons: default-storageclass, storage-provisioner
⌛ Waiting for cluster to come online ...
🏄 Done! kubectl is now configured to use "minikube"
$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
$ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:32768
KubeDNS is running at https://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Modify hosts file

To make http://adminer.k8s.com and http://kanban.k8s.com work, you need to edit the hosts file on your machine (e.g. /etc/hosts on Linux).

Add the following entries to it:

<MINIKUBE_IP>	adminer.k8s.com
<MINIKUBE_IP>	kanban.k8s.com

To find the IP address of the cluster, run:

$ minikube ip
172.17.0.2

Then replace the <MINIKUBE_IP> placeholder, so the entries become:

172.17.0.2	adminer.k8s.com
172.17.0.2	kanban.k8s.com

Add Adminer

Finally, everything is set up and we can start deploying applications. The first one will be the Adminer app, for which we create a Deployment definition in the adminer-deployment.yaml file (sketched after the list below). Its key parts are:

  • selector.matchLabels — defines how the Deployment will find the Pods it needs to take care of; in this case it will look for a Pod labeled with app: adminer,
  • template.metadata — tells what metadata will be added to each Pod; in this case all of them will have the labels app: adminer and group: db,
  • template.spec.containers — a list of containers that will be inside a Pod. In this case I put only one container, based on the adminer:4.7.6-standalone Docker image, which exposes containerPort: 8080. Moreover, with the env section we inject an environment variable into the container to configure the Adminer UI (full documentation can be found here).
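
A minimal sketch of what adminer-deployment.yaml might look like; the env variable shown is an assumption, any setting from the Adminer image documentation can go there:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
spec:
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
        group: db
    spec:
      containers:
        - name: adminer
          image: adminer:4.7.6-standalone
          ports:
            - containerPort: 8080
          env:
            - name: ADMINER_DESIGN    # assumption: configures the UI theme
              value: pepa-linha
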
$ kubectl apply -f adminer-deployment.yaml
deployment.apps/adminer created
$ kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
adminer   1/1     1            1           30s
$ kubectl describe deployment adminer
... many details about the Deployment ...
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
adminer-994865d4b-kqck5   1/1     Running   0          24m
$ kubectl describe pod adminer-994865d4b-kqck5
... many details about the Pod ...
Next, we define a ClusterIP Service for it in adminer-svc.yaml (sketched after this list):

  • selector — says which Pods this Service provides access to; in this case it provides access to a Pod with the app: adminer label,
  • ports — indicates the mapping of the port exposed by the Pod to the ClusterIP port that will be available to other applications inside the cluster.
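
A sketch of what adminer-svc.yaml may look like; the port numbers match the 8080/TCP shown in the output below:

apiVersion: v1
kind: Service
metadata:
  name: adminer
spec:
  type: ClusterIP
  selector:
    app: adminer        # routes traffic to Pods with this label
  ports:
    - port: 8080        # port exposed on the ClusterIP
      targetPort: 8080  # containerPort of the Adminer Pod
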
$ kubectl apply -f adminer-svc.yaml
service/adminer created
$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
adminer      ClusterIP   10.99.85.149   <none>        8080/TCP   9s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    3m34s
$ kubectl describe svc adminer
... many details about the ClusterIP...

Add Ingress Controller

As mentioned before, a ClusterIP exposes the app only to other apps inside the cluster. To reach it from the outside, we need a different approach: an NGINX Ingress Controller (installed below) plus an Ingress resource with the routing rules.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
$ minikube addons enable ingress
🌟 The 'ingress' addon is enabled
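The ingress-controller.yaml file holds the routing rules for our hosts. A minimal sketch, assuming the v1beta1 Ingress API available in Kubernetes v1.17 (a rule for kanban.k8s.com is added later, for the backend):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-controller
spec:
  rules:
    - host: adminer.k8s.com          # hostname mapped in the hosts file
      http:
        paths:
          - path: /
            backend:
              serviceName: adminer   # the ClusterIP Service created above
              servicePort: 8080
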
$ kubectl apply -f ingress-controller.yaml
ingress.networking.k8s.io/ingress-controller created

Add PostgreSQL database

Right, now we need to set up our database. To do that we will create another Deployment-ClusterIP pair, this time for PostgreSQL. But first the database needs persistent storage, which we request with a PersistentVolumeClaim defined in postgres-pvc.yaml.
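
A sketch of postgres-pvc.yaml; the capacity and access mode are taken from the kubectl output below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce     # RWO in the output below
  resources:
    requests:
      storage: 4Gi      # capacity reported by kubectl get pvc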

$ kubectl apply -f postgres-pvc.yaml
persistentvolumeclaim/postgres-persistent-volume-claim created
$ kubectl get pvc
NAME         STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres..   Bound    pvc-43.   4Gi        RWO            standard       40s
$ kubectl describe pvc postgres-persistent-volume-claim
... many details about the PersistentVolumeClaim...
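
We also need to pass credentials and a database name to the container, which we do with the postgres-config ConfigMap. The sketch below assumes its three entries are the standard POSTGRES_* variables of the postgres Docker image, with values matching the Adminer login details later in this post:

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:                      # 3 entries, matching DATA=3 in the output below
  POSTGRES_DB: kanban
  POSTGRES_USER: kanban
  POSTGRES_PASSWORD: kanban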
$ kubectl apply -f postgres-config.yaml
configmap/postgres-config created
$ kubectl get configmap
NAME              DATA   AGE
postgres-config   3      2m31s
$ kubectl describe configmap postgres-config
... many details about the ConfigMap...
The key fields of postgres-deployment.yaml (sketched after this list) are:

  • spec.template.spec.containers[0].image — specifies which Docker image we want to use for our database,
  • spec.template.spec.containers[0].envFrom — indicates the ConfigMap from which we want to inject environment variables,
  • spec.template.spec.containers[0].volumeMounts — tells Kubernetes which Volume to use (defined in the spec.template.spec.volumes section) and maps it to a particular folder inside the container; basically all data inside the mountPath will be stored outside the container, so it survives Pod restarts.
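
Putting it together, a sketch of postgres-deployment.yaml; the image tag is an assumption, and the mount path is the default data directory of the postgres image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6-alpine     # assumption: any PostgreSQL image works
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config    # injects the POSTGRES_* variables
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-persistent-volume-claim
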
$ kubectl apply -f postgres-deployment.yaml 
deployment.apps/postgres created
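
The database also needs a ClusterIP Service, so that other Pods can reach it under the postgres hostname (the Server: postgres entry in the Adminer login below relies on it). A sketch:

apiVersion: v1
kind: Service
metadata:
  name: postgres        # becomes the DNS name other Pods use
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - port: 5432        # default PostgreSQL port
      targetPort: 5432
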
$ kubectl apply -f postgres-svc.yaml
service/postgres created
You can now open http://adminer.k8s.com in a browser and log into the database with:

System:   PostgreSQL
Server:   postgres
Username: kanban
Password: kanban
Database: kanban

Add kanban-app

First, let’s provide all the necessary definitions for the backend service. As it was for Adminer, we also need to create a Deployment and a Service for it.
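
A sketch of kanban-app-deployment.yaml; the image name, the port, and the reuse of postgres-config for database settings are all assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kanban-app
spec:
  selector:
    matchLabels:
      app: kanban-app
  template:
    metadata:
      labels:
        app: kanban-app
    spec:
      containers:
        - name: kanban-app
          image: kanban-app:latest     # assumption: your previously built Spring Boot image
          ports:
            - containerPort: 8080      # assumption: default Spring Boot port
          envFrom:
            - configMapRef:
                name: postgres-config  # assumption: app derives its datasource from these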

$ kubectl apply -f kanban-app-deployment.yaml
deployment.apps/kanban-app created
$ kubectl apply -f kanban-app-svc.yaml
service/kanban-app created
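
We also extend ingress-controller.yaml with a rule for the kanban.k8s.com host before re-applying it; a sketch of the added rule (routing the entire host to the backend Service is an assumption):

    - host: kanban.k8s.com           # added under spec.rules
      http:
        paths:
          - path: /
            backend:
              serviceName: kanban-app
              servicePort: 8080
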
$ kubectl apply -f ingress-controller.yaml
ingress.networking.k8s.io/ingress-service configured

Add kanban-ui

And at last we can add the UI application. Again, we need to define a Deployment and a ClusterIP.
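
A sketch of kanban-ui-deployment.yaml; the image name and the nginx default port are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kanban-ui
spec:
  selector:
    matchLabels:
      app: kanban-ui
  template:
    metadata:
      labels:
        app: kanban-ui
    spec:
      containers:
        - name: kanban-ui
          image: kanban-ui:latest   # assumption: your built Angular app, served by nginx
          ports:
            - containerPort: 80     # assumption: nginx default port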

$ kubectl apply -f kanban-ui-deployment.yaml 
deployment.apps/kanban-ui created
$ kubectl apply -f kanban-ui-svc.yaml
service/kanban-ui created

Conclusion

In this blog post I’ve tried to walk you through all the steps needed to deploy a couple of applications into a local Kubernetes cluster. All the definitions described above can also be applied in one shot, straight from the folder that holds them:

$ kubectl apply -f ./k8s
deployment.apps/adminer created
service/adminer created
ingress.networking.k8s.io/ingress-service created
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
deployment.apps/kanban-app created
service/kanban-app created
deployment.apps/kanban-ui created
service/kanban-ui created
configmap/postgres-config created
deployment.apps/postgres created
persistentvolumeclaim/postgres-persistent-volume-claim created
service/postgres created

Java Software Developer, DevOps newbie, constant learner, podcast enthusiast.
