Oct 26 2020

Spring Boot + Kubernetes = Love (or something like that)

Category: Miscellaneous · Iuliana @ 22:31

In the previous entry I had a Spring Boot application running in a Docker container. And that is good. But an application designed to be used by a lot of users should be scalable. Well… not the app per se, but the setup managing it. So, I have a container with an application running. What if more users access it than the poor container, with its limited resources, can handle? Well, in that case I need another container with the same application installed, and a component called a load balancer that will direct traffic as needed. This is where Kubernetes comes in handy.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. This means that, now that I have an image of my containerized application, I can use Kubernetes to deploy it on more than one container and to manage those containers. I can decide the resources assigned to a container, and I can tell Kubernetes how many containers I want for crowded and lighter situations. Kubernetes can do a lot for me, but I will focus here on something called the Horizontal Pod Autoscaler – you can configure one of these to manage the number of containers. As its name says quite clearly, I can configure it to automatically decide the number of containers running my application, based on a specific criterion. And because it is the easiest for me to do, my criterion will be the CPU load percentage (instead of user requests).

Kubernetes manages something called pods. A pod is a logical construct specific to Kubernetes, and it is the basic unit of work in Kubernetes.

A pod is configured with a set of specifications, such as resource requirements (CPU, memory, disk, etc.), execution policy, and the application image. Each pod can group one or more containers plus storage resources shared between those containers, and each pod has a unique network IP address. This comes in handy, since individual containers do not get stable, routable IP addresses of their own.
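
Once the cluster described below is up, this is easy to verify with kubectl (the CLI utility I install later in this entry); the wide output format includes the IP assigned to each pod:

> kubectl get pods -o wide   # shows, among other things, the IP of every pod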

Kubernetes usually manages a number of VMs called nodes. On these nodes it creates the pods and installs all the services necessary to manage them. For learning purposes, though, one node is enough. The easiest way to get a one-node Kubernetes cluster running locally is to use Minikube, which runs a single-node Kubernetes cluster inside a virtual machine (VM) on your laptop – ideal for users looking to try out Kubernetes or develop with it day-to-day.

So, first step: I need to install Minikube. Since Minikube needs a VM, I also need a virtualization system. macOS comes with its own hypervisor framework, but I’ve always liked Oracle’s VirtualBox, because it comes with a console where I can see my VMs. So, I will install VirtualBox.
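
For reference, both can be installed with Homebrew (assuming Homebrew is present; note that the VirtualBox cask syntax differs between Homebrew versions):

> brew install minikube
> brew install --cask virtualbox   # on older Homebrew: brew cask install virtualbox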

Once I have Minikube and VirtualBox installed, I can start Minikube. In a terminal, I can type minikube start. It is as simple as that. But I want more control, and I want to make sure that the virtualization system used is VirtualBox (because Minikube can use a Docker container as a node, and if it finds a Docker runtime locally it will use it, and then you end up with a Kubernetes cluster managing Docker containers inside a Docker container, and I don’t like this Matryoshka setup) and that the node is given enough resources to function properly. So instead I will execute the command with the following arguments:

> minikube start --memory=16384 --cpus=4 --driver=virtualbox

If I now open the VirtualBox visual console I should see my VM there.


Now, just in case, I should make sure Minikube is working as expected. I can do this by inspecting its status, executing minikube status in the terminal.

>  minikube status
  minikube
  type: Control Plane
  host: Running
  kubelet: Running
  apiserver: Running
  kubeconfig: Configured

Minikube has its own internal Docker runtime, so to be able to deploy my image on Minikube, I have to make the image available to the Docker runtime inside this VM. This is done by executing the following command in my terminal, before executing the Gradle command that builds the image.

> eval $(minikube -p minikube docker-env)
> gradle clean build prepareForDocker generateDockerfile docker 

The first command points the local shell to Minikube’s Docker daemon, which means the resulting image will be stored in Minikube’s Docker image repository. If I want to make sure my image is there, I just have to ssh into the Minikube machine and check the existing Docker images.

> minikube ssh  # command to ssh into the Minikube VM
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

> docker images -a

REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
simple-app                                0.0.1-SNAPSHOT      af2a3b91b889        10 seconds ago      543MB
k8s.gcr.io/kube-proxy                     v1.19.2             d373dd5a8593        5 weeks ago         118MB
k8s.gcr.io/kube-apiserver                 v1.19.2             607331163122        5 weeks ago         119MB
k8s.gcr.io/kube-controller-manager        v1.19.2             8603821e1a7a        5 weeks ago         111MB
k8s.gcr.io/kube-scheduler                 v1.19.2             2f32d66b884f        5 weeks ago         45.7MB
gcr.io/k8s-minikube/storage-provisioner   v3                  bad58561c4be        7 weeks ago         29.7MB
k8s.gcr.io/etcd                           3.4.13-0            0369cf4303ff        2 months ago        253MB
kubernetesui/dashboard                    v2.0.3              503bc4b7440b        4 months ago        225MB
k8s.gcr.io/coredns                        1.7.0               bfe3a36ebd25        4 months ago        45.2MB
kubernetesui/metrics-scraper              v1.0.4              86262685d9ab        7 months ago        36.9MB
k8s.gcr.io/pause                          3.2                 80d28bedfe5d        8 months ago        683kB

So, the image is there and containers can be created from it. The next thing I need is to tell Kubernetes what it needs to do for me. For this I need a configuration file.

  1. I want to use the simple-app image to create a pod with a container running the simple-app on port 8080 – this can be done by configuring a Kubernetes object named Deployment.
  2. I want to provide public access to the application running on port 8080 – this can be done by configuring a Kubernetes object named Service.
  3. I want to tell Kubernetes to manage between 2 and 5 instances of the application, based on CPU load: keep 2 at minimum, increase up to 5 to deal with unexpected load – this can be done by configuring a Kubernetes object named HorizontalPodAutoscaler.

The configuration file is written in YAML, and each object has its own name, declared under the metadata section. Kubernetes uses selectors to identify objects by their labels and create relationships between them. For example, the Service object is configured to expose port 8080 for all pods labeled app.kubernetes.io/name: simple-app-deployment. The HorizontalPodAutoscaler object is configured to scale up or down the Deployment named simple-app-deployment.

# simple-app.yml
########################################################
# Simple App Deployment & Service & HPA
########################################################
apiVersion: v1
kind: Service
metadata:
  name: simple-app-service
  labels:
    app: simple-app-service
    groupId: simple-app
spec:
  selector:
    app.kubernetes.io/name: simple-app-deployment
  ports:
    - port: 8080
      targetPort: 8080
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: simple-app-deployment
    groupId: simple-app
  name: simple-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: simple-app-deployment
  template:
    metadata:
      labels:
        app.kubernetes.io/name: simple-app-deployment
        groupId: simple-app
    spec:
      containers:
        - name: simple-app-container
          image: simple-app:0.0.1-SNAPSHOT
          ports:
            - containerPort: 8080
          resources:   # resource requests give the HPA a baseline against which to compute CPU utilization
            requests:
              cpu: 750m
              memory: 750Mi
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: simple-app-hpa
  labels:
    app.kubernetes.io/name: simple-app-hpa
    groupId: simple-app
spec:
  minReplicas: 2
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: simple-app-deployment
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 20
---

Now that I have the configuration file, I have to give it to Kubernetes for execution. This is done using a CLI utility named kubectl.
After installing it, there is a single command I could execute to create all those Kubernetes objects. But I like to do things like these in a more practical way.
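
For the record, that single command is the same kubectl apply call that also ends up in my script below:

> kubectl apply -f simple-app.yml   # creates every object declared in the file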

Kubernetes supports namespaces, which means I have the possibility to logically group Kubernetes objects in a single namespace. And this is useful, because when I want to clean up after myself, I just have to delete the namespace. Also, if I mess up the configuration, which will happen, instead of running a delete command for each object type, I only have to run a single command.
Or even better, I can create a bash script that starts with the deletion of the namespace, if there is one. Then I add a command to create the namespace, another to make it the default, and one to apply the configuration file that creates the objects. I can execute the script as many times as I need, and the result will always be the same: a brand new set of Kubernetes objects.

#!/bin/bash
# the deploy.sh script

# remove any previous deployment; on the first run this complains that
# the namespace does not exist, which is harmless
kubectl delete namespace simple-app

# create a fresh namespace and make it the default for the current context
kubectl create namespace simple-app
kubectl config set-context --current --namespace=simple-app

# create all the objects declared in the configuration file
kubectl apply -f simple-app.yml

Now that I’ve executed my script, how can I make sure everything went as planned? Well… there are a few things I can do:

  • Use kubectl commands and check that the namespace/pods/services have been created – for example the commands shown after this list.
  • Run minikube service list and get the URL where the application is available.
  • Run minikube dashboard that will open up the Kubernetes web console, where I can see all the namespaces and objects currently managed.
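
For the first option, a few kubectl commands that do the job (the -n flag can be dropped once simple-app is the default namespace):

> kubectl get namespaces                 # simple-app should be in the list
> kubectl get pods -n simple-app         # two pods should be Running
> kubectl get service,hpa -n simple-app  # the Service and the HorizontalPodAutoscaler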

Also, since I’ve mentioned load, I would like to be able to see it in the dashboard, so I will enable the Minikube metrics-server addon, by executing the following command in my terminal.

> minikube addons enable metrics-server

After enabling the metrics-server addon, I can start the dashboard by executing:

> minikube dashboard

The Minikube dashboard is an addon too, and when started it opens a browser tab that allows me to see all the Kubernetes namespaces and inspect their contents.
Below, you can see a screenshot of my simple-app namespace. It currently contains 2 pods.

I’ve previously mentioned the CPU load; well… currently there is no load. That can change by accessing the /load URL on the simple-app. But wait, how can I even be sure my application actually works on the Kubernetes pods? Where can I access it?

Well, here’s the easiest way, just execute the following command:

> minikube service list
|----------------------|---------------------------|--------------|-----------------------------|
|      NAMESPACE       |           NAME            | TARGET PORT  |             URL             |
|----------------------|---------------------------|--------------|-----------------------------|
| default              | kubernetes                | No node port |
| kube-system          | kube-dns                  | No node port |
| kube-system          | metrics-server            | No node port |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
| kubernetes-dashboard | kubernetes-dashboard      | No node port |
| simple-app           | simple-app-service        |         8080 | http://192.168.99.100:30496 |
|----------------------|---------------------------|--------------|-----------------------------|

The URL returned in that table for the simple-app-service is where the simple-app Spring Boot Web application can be accessed. So, opening this URL: http://192.168.99.100:30496 in the browser should return my “It Works!” text. Opening the http://192.168.99.100:30496/load URL will call a method that is designed to keep the CPU busy for a while so I can see the pods being loaded like crazy and see Kubernetes creating others to deal with this load.
In the next image you can see Kubernetes creating other pods, so that the application remains accessible; I can still open up the main page at http://192.168.99.100:30496/ easily. What you can also see is that the VM I used for my Minikube has run out of CPU power to create pods according to my specs. And if you ever run my load example, you will probably consider that I overdid it with the load as well. :)
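
The actual /load endpoint is in the repo; just to give an idea, here is a minimal sketch of what such a CPU-burning controller could look like (the class name, duration, and busy-work below are illustrative, not the exact code from the repository):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LoadController {

    // Illustrative only: burns CPU for about a minute so the HPA
    // sees utilization climb above its 20% target.
    @GetMapping("/load")
    public String load() {
        long end = System.currentTimeMillis() + 60_000;
        double waste = 0;
        while (System.currentTimeMillis() < end) {
            waste += Math.sqrt(Math.random()); // pointless math that keeps one core busy
        }
        return "Done generating load: " + waste;
    }
}

While the load runs, kubectl get hpa simple-app-hpa --watch (or the dashboard) shows the reported CPU utilization climbing past the 20% target and the replica count growing toward 5.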

Long story short: if you follow the steps I listed in this entry, you will end up with a local one-node Kubernetes cluster, hosted on a VirtualBox VM, that can be used to deploy and scale Spring Boot applications.

The code of the Spring Boot application, the Kubernetes configuration file and the deploy.sh are all available in this public repo.

Kubernetes is very powerful and it can do a lot of things when configured properly. If I have time and ideas, I’ll probably write some more technical posts. Until then, feel free to comment on my approach, criticize me, let me know if it was useful and help me make it better.

Stay safe, stay happy!


4 Responses to “Spring Boot + Kubernetes = Love (or something like that)”

  1. Juan A Moreno says:

    Hi Iuliana, good article thanks for sharing.

    I took a look at the code but I didn’t find the load endpoint configuration. Is the repository updated?

  2. Iuliana says:

    Hello Juan,

    I’ll take a look and push it. I might have forgotten to do so. Thank you for letting me know.

    Cheers!

  3. Iuliana says:

    Hello Juan,

    You were right, I forgot to push the commit with the “/load” endpoint.
    It has now been taken care of.

    Enjoy!

  4. Cristina says:

    Hi Iuliana, nice that you’re into K8s these days :) Btw you may want to give k9s ;) also a try. It’s a simple CLI utility to watch K8s resources, switch between them, check out manifests, logs, monitor events and exec into pods. On macOS it’s just as simple as: brew install k9s
