Deploying An Application On Kubernetes From A to Z

By Mohamed Ahmed
April 14, 2020

Not sure where to start with deploying an application on Kubernetes? We'll walk you through the steps from A to Z.


We’ve all been there. You’ve learned the basics of Kubernetes: Pods, ReplicaSets, Deployments, Services, and so on. Now it’s time to put those Lego pieces together and build something big.

The Sample Application

We’re going to use a simple web service. The API accepts user messages in the following format:

{"username":"your message"}

The service then parses this object and adds your message to a Redis server. Consider it as a sort of voice mail where a user can leave a message. A working example is as follows:

$ curl -X POST -H "Content-Type: application/json" -d '{"jdoe":"Hello k8s"}' localhost:3000/api
$ curl localhost:3000/api
[{"username":"jdoe","message":"Hello k8s"}]

The application is written in Go, with a single source file, main.go. It’s too long to reproduce here, but you can view it in the project’s repository: https://github.com/MagalixCorp/sample-api/blob/master/main.go

In order to run correctly, the application needs a running Redis instance on localhost with the password exported as an environment variable. You can easily do that with Docker as follows:

$ export REDIS_PASSWORD=password123
$ docker run -d --name redis -p 6379:6379 -e REDIS_PASSWORD=password123 bitnami/redis:latest
$ go run main.go

Since we need to run this application on Kubernetes, the first step is to Dockerize it, that is, to enable it to run in a container.

Note: the full application can be found at https://github.com/MagalixCorp/sample-api/

Step 1: Dockerize The Application

Our project contains dependencies that enable it to work as expected. In Go, dependencies are simply third-party libraries that are imported into the project. The same concept exists in many other languages, each with its own dependency-management tool: Composer for PHP, npm for Node.js, and so on. To install our application’s dependencies automatically, we run the following (inside the project directory):

$ go mod init github.com/MagalixCorp/sample-api
$ go build

We should find two files created for us now: go.mod, which contains all the dependencies that our application needs, and go.sum, which contains the checksum for those dependencies to ensure integrity.

Our Dockerfile should look as follows:

FROM golang:alpine AS build-env
RUN mkdir /go/src/app && apk update && apk add git
ADD main.go config.json go.mod go.sum /go/src/app/
WORKDIR /go/src/app
RUN go mod download && CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o app .

FROM scratch
WORKDIR /app
COPY --from=build-env /go/src/app/app .
COPY --from=build-env /go/src/app/config.json .
ENTRYPOINT [ "./app" ]

For compiled languages like Go, it’s a good idea to use multi-stage Docker builds. For this method, we create two (or more) images where one contains the final artifact that will be used for running the application while the others are used to build and provide any dependencies for that application. In our example, we use the golang:alpine as our build image. It contains all the build tools that we need to compile our binary file. Once done, we no longer need any build tools, we just need the artifact. So, we use the scratch image. Scratch is just an empty image, it contains no data. We copy our binary to that image and base our container on it. The main advantage of following this approach is that the resulting image is really small. Smaller images mean faster upload/download from the container registry and also faster load time when being moved from one node to another when working with Kubernetes (as we’ll see in a few moments).
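
If you’d like to build and publish the image yourself, the commands are a quick sketch along these lines (assuming you’re logged in to a registry you can push to; substitute your own image name for magalixcorp/sample-api):

$ docker build -t magalixcorp/sample-api:v1 .
$ docker push magalixcorp/sample-api:v1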

The Development Environment

When developers need to make changes to the application, they need to test those changes on their local machines. Kubernetes offers several ways of running a cluster locally for development, such as Minikube or the Kubernetes support built into Docker Desktop for Windows and macOS. However, up to this point, we don’t have a cluster running yet. So, for our developers to continue their work while we build the cluster, they can use docker-compose. Our docker-compose.yml file should look as follows:

version: '3'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      REDIS_PASSWORD: password123
    volumes:
      - "$PWD/config.json:/app/config.json"
  redis:
    image: "bitnami/redis:latest"
    environment:
      REDIS_PASSWORD: password123

You may have noticed that we use an environment variable to specify the Redis password, which is bad practice: with the file stored in a version control system where multiple people can view it, the password is effectively compromised. We’ll address this issue later in Kubernetes using Secrets. Notice as well that we mount the config.json file, which contains our application’s configuration options, through a volume. This is a best practice, as it lets us change the configuration without having to rebuild or redeploy the application.

Now, to test that our application works as expected (and to let our developers resume their work), we only need to run docker-compose up -d.
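
A quick smoke test, reusing the messages from the earlier curl example (the API is published on port 3000 by the compose file):

$ docker-compose up -d
$ curl -X POST -H "Content-Type: application/json" -d '{"jdoe":"Hello k8s"}' localhost:3000/api
$ curl localhost:3000/api
[{"username":"jdoe","message":"Hello k8s"}]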

Step 2: Creating a Deployment

The first step in moving to Kubernetes is to create a Pod that hosts the application container. But since Pods are ephemeral by nature, we need a higher-level controller to take care of our Pod (restarting it if it crashes, rescheduling it on other nodes, and so on). For that reason, we’ll use a Deployment. This also has the added bonus of letting us run more than one replica of our application for high availability. So, our deployment.yml file should look as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        app: frontend
    name: frontend
spec:
    replicas: 2
    selector:
        matchLabels:
            app: frontend
    template:
        metadata:
            labels:
                app: frontend
        spec:
            containers:
            - image: magalixcorp/sample-api:v1
              imagePullPolicy: IfNotPresent
              name: frontend

A good GitOps practice here is to ensure that all the cluster’s resource files are checked into source control. This keeps your application stack consistent across environments. So, let’s commit our Deployment file:

$ git add deployment.yml
$ git commit -m "Adds the Deployment file"
$ git push
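
With the file committed, we can apply it and check that the Pods come up (this assumes kubectl is already pointing at your cluster):

$ kubectl apply -f deployment.yml
$ kubectl get pods -l app=frontend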


Step 3: Exposing Our Application Using Service and Ingress

The Deployment we applied in the previous step creates two Pods of our application and ensures that they’re always running. But we need our users to be able to reach the application. So far, our Pods are only accessible from inside the cluster network. To make them accessible from outside, we need to create a Service and an Ingress. So, let’s do just that.

What’s The Difference Between A Service And Ingress?

Before we go ahead with creating the resources, let’s address this question. A Service is a Kubernetes object that receives requests and load balances them among the Pods under its control. There are several types of Services that you can use depending on your specific case, so in our scenario you may be thinking of using the LoadBalancer type. You’re correct: a LoadBalancer Service creates a load balancer that is publicly exposed and has an external IP address, and any traffic arriving at this IP is routed to one of the backend Pods. However, each load balancer is tied to only one Service. If our application needs more than one Service (which is a typical use case), we’d have to create more load balancers and more IP addresses, which is impractical and doesn’t scale. An Ingress, on the other hand, can be tied to more than one Service and routes each request to the right Service based on conditions you specify (in our case, the URL path). You can refer to the following diagram for a better understanding of how an Ingress differs from a Service:

[Diagram: one LoadBalancer Service per application service vs. a single Ingress routing to multiple Services]

Now, in order to create an Ingress, we need to create a Service first. Our frontend-svc.yml looks as follows:

apiVersion: v1
kind: Service
metadata:
    labels:
        app: frontend
    name: frontend-svc
spec:
    ports:
    - port: 3000
      protocol: TCP
      targetPort: 3000
    selector:
        app: frontend
    type: ClusterIP

As you can see, our Service is of the (default) ClusterIP type; we don’t need the Service itself to be accessible externally, as that’s the job of the Ingress controller.
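
Creating the Service and confirming it received a cluster IP is straightforward (assuming the file is saved as frontend-svc.yml):

$ kubectl apply -f frontend-svc.yml
$ kubectl get svc frontend-svc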

Installing Nginx Ingress Controller

Ingress, unlike most other Kubernetes objects, does not have a controller that ships with Kubernetes. This means we need to install a controller of our choice to handle our Ingress resources. There are several Ingress controllers to choose from, each with its own set of features. For this article, we chose the Nginx one. The documentation provides installation options for several platforms. In our case, we’re using Docker Desktop on macOS, so the steps are as follows:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
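
You can check that the controller Pods are running and that a LoadBalancer Service was created for them (the ingress-nginx namespace below is the default used by these manifests; it may differ in other versions):

$ kubectl get pods -n ingress-nginx
$ kubectl get svc -n ingress-nginx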

Now, let’s create our Ingress resource. Our ingress.yml file should look as follows:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
    name: frontend-ingress
spec:
    rules:
    - http:
        paths:
        - path: /api
          backend:
            serviceName: frontend-svc
            servicePort: 3000
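
To create the resource and confirm it was admitted (assuming the file is saved as ingress.yml):

$ kubectl apply -f ingress.yml
$ kubectl get ingress frontend-ingress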

As simple as it looks, this Ingress resource actually does a lot. Notice that the rules field is an array: you can add more than one rule, each routing to its own backend Service. In our example, we only need to route traffic that arrives at /api to our frontend-svc Service. Note also that our Ingress listens on port 80, which is what you want most of the time; after all, it’s much easier for your clients to find you at www.mycompany.com/api than at www.mycompany.com:3000/api. Each internal Service, however, gets to keep its own port number. Next, you may want to verify that everything is working correctly. If you point your browser to http://localhost/api, you should see a page similar to the following:

[Screenshot: the browser displaying a 502 Bad Gateway error page]

As counterintuitive as it may seem, this actually means that our Ingress resource and controller are working fine. When we hit the /api path, the Ingress contacted the frontend-svc Service, which in turn sent an HTTP request to one of the backend Pods hosting our application. However, since we haven’t deployed our Redis service yet, the application returned an empty reply, which the Ingress controller interpreted as an invalid response, hence the 502 Bad Gateway shown in the browser.
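
If you prefer to confirm this from the cluster side rather than the browser, the Pod logs usually tell the story. A quick check, using the app label from our Deployment:

$ kubectl get pods -l app=frontend
$ kubectl logs -l app=frontend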

Step 4: Handling Application Configuration Using ConfigMaps

Any non-trivial application needs configuration of one kind or another. Some programming languages have their own configuration files, for example, php.ini for PHP. Best practice is to avoid hardcoding configuration data in your code; otherwise, you’d need to rebuild, test, and redeploy the binary each time a configuration value changes. The recommended method is to inject configuration data into the container from a separate, outside source. Kubernetes handles this with ConfigMaps. Using a ConfigMap, you can supply configuration data to the Pod in the form of environment variables or files mounted as volumes. Our application uses the following configuration stored in config.json:

{
    "RedisHost": "redis"
}

When we run the application directly on our local machine without Docker, we can set RedisHost to localhost, as Redis will be listening on 127.0.0.1. However, if we’re using docker-compose, where Redis runs in a container on the Compose network, the key should have “redis” as its value (or whatever name the Redis service uses).

Our application expects to find config.json in the same path as the binary, so we can create a ConfigMap containing the data from config.json in one of two ways:

We could use the one-liner kubectl create configmap app-config --from-file=config.json. However, this is not the best approach, since the ConfigMap never gets checked into version control. A more GitOps-oriented way is to create a YAML file for the ConfigMap as follows:

apiVersion: v1
kind: ConfigMap
metadata:
    name: app-config
data:
  config.json: |-
    {
        "RedisHost": "redis-svc"
    }
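
Assuming we save this manifest as configmap.yml (the file name is our own choice), we can apply it and inspect the result:

$ kubectl apply -f configmap.yml
$ kubectl describe configmap app-config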

Now, we need to modify our deployment file to mount the ConfigMap. Notice that this is very similar to what we did in our docker-compose.yml file:

apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        app: frontend
    name: frontend
spec:
    replicas: 2
    selector:
        matchLabels:
            app: frontend
    template:
        metadata:
            labels:
                app: frontend
        spec:
            containers:
            - image: magalixcorp/sample-api:v1
              imagePullPolicy: IfNotPresent
              name: frontend
              volumeMounts:
              - name: config-volume
                mountPath: /app/config.json
                subPath: config.json
            volumes:
                - name: config-volume
                  configMap:
                    name: app-config

Pay attention to the subPath field: it’s necessary here so that Kubernetes mounts config.json as a file rather than a directory.
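
After updating the file, we roll out the change and wait for the new Pods to become ready:

$ kubectl apply -f deployment.yml
$ kubectl rollout status deployment/frontend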

Step 5: Securing Confidential Data Using Secrets

As mentioned before, we used an environment variable to inject the Redis password into our application, and we could do the same in our Deployment. However, since deployment.yml is checked into our version control system, the Redis password would be publicly exposed, which poses a serious security risk. Instead, we should use a Secret. Let’s create one to store our Redis password:

$ kubectl create secret generic redis-password --from-literal=redis-password=password123

Notice that we used the imperative approach to create the Secret, to avoid committing a YAML file in which the Secret value is merely base64-encoded. Base64 encoding is not secure since it can easily be decoded. A more secure way of storing Secrets is to integrate Kubernetes with a key store like AWS KMS or Microsoft Azure Key Vault, which ensures that your Secrets are stored in encrypted form (remember, encryption is different from encoding). Now, let’s inject our newly created Secret into our Pods. Our deployment.yml file should look as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        app: frontend
    name: frontend
spec:
    replicas: 2
    selector:
        matchLabels:
            app: frontend
    template:
        metadata:
            labels:
                app: frontend
        spec:
            containers:
            - image: magalixcorp/sample-api:v1
              imagePullPolicy: IfNotPresent
              name: frontend
              env:
              - name: REDIS_PASSWORD
                valueFrom:
                    secretKeyRef:
                        name: redis-password
                        key: redis-password
              volumeMounts:
              - name: config-volume
                mountPath: /app/config.json
                subPath: config.json
            volumes:
                - name: config-volume
                  configMap:
                    name: app-config
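
As before, we apply the updated Deployment. Since our image is built from scratch and ships no shell, the simplest sanity check is the Pod description, whose Environment section should show REDIS_PASSWORD sourced from the Secret:

$ kubectl apply -f deployment.yml
$ kubectl describe pods -l app=frontend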

Step 6: Deploying the Backend Storage (Redis) Using a StatefulSet

Why use a StatefulSet for Redis rather than a Deployment? The reason is that we need Redis to maintain its state across restarts. That is, we need to ensure that it uses the same storage disk whenever it’s restarted or moved from one node to another. And if we decide in the future to run a Redis cluster, we’ll also need the Pods to keep stable hostnames and network identities. Our redis.yml file should look as follows:

apiVersion: apps/v1
kind: StatefulSet
metadata:
    name: redis
spec:
    serviceName: redis-svc
    replicas: 1
    selector:
        matchLabels:
            app: redis
    template:
        metadata:
            labels:
                app: redis
        spec:
            containers:
            - name: redis
              image: bitnami/redis:latest
              env:
                - name: REDIS_PASSWORD
                  valueFrom:
                    secretKeyRef:
                        name: redis-password
                        key: redis-password
              ports:
              - containerPort: 6379
                name: redis-port
              volumeMounts:
              - name: data
                mountPath: /bitnami/redis/data
    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
            requests:
                storage: 5Gi

The StatefulSet refers to a headless Service (the serviceName: redis-svc field) that we still need to create. Our redis-headless.yml file should look as follows:

apiVersion: v1
kind: Service
metadata:
    labels:
        app: redis
    name: redis-svc
spec:
    clusterIP: None
    ports:
    - port: 6379
    selector:
        app: redis

What makes the above definition a headless Service is the clusterIP: None setting, which dictates that this Service doesn’t get a cluster IP address of its own. Instead, DNS lookups for it return records for the individual Pods it selects.
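
With both manifests in place, we can create the headless Service and the StatefulSet, then check that the Redis Pod and its PersistentVolumeClaim were created:

$ kubectl apply -f redis-headless.yml -f redis.yml
$ kubectl get statefulsets,pvc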

Now, the application should be running the same way it was on the local machine (through docker-compose).
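
A quick end-to-end test, mirroring the earlier curl example but going through the Ingress on port 80 this time:

$ curl -X POST -H "Content-Type: application/json" -d '{"jdoe":"Hello k8s"}' localhost/api
$ curl localhost/api
[{"username":"jdoe","message":"Hello k8s"}]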

Step 7: Adding HTML Content to the Application

So far, we have a basic, working application, but only the API is functioning. Most web applications have both an API and a user interface where clients can see the served content. Let’s add some static files (HTML, CSS, images, and JS) to our application. The project repository contains a directory called static for our HTML and JS files. We need to serve those from a container running Nginx, which is an excellent choice for serving static content. The Dockerfile for our static content looks as follows:

FROM nginx
ADD index.html /usr/share/nginx/html/
ADD js /usr/share/nginx/html/js
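
Building and publishing this image follows the same pattern as before; the sketch below assumes the Dockerfile lives in the static directory of the repository, so adjust the path and image name to your setup:

$ docker build -t magalixcorp/static:v1 ./static
$ docker push magalixcorp/static:v1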

Next, we create a new Deployment for our static content Pods. Our deployment-static.yml file should look as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        app: static
    name: static
spec:
    replicas: 2
    selector:
        matchLabels:
            app: static
    template:
        metadata:
            labels:
                app: static
        spec:
            containers:
            - image: magalixcorp/static:v1
              imagePullPolicy: IfNotPresent
              name: static

Finally, we need a Service to load-balance the Nginx Pods. Our static-svc.yml file looks as follows:

apiVersion: v1
kind: Service
metadata:
    labels:
        app: static
    name: static-svc
spec:
    ports:
    - port: 80
      protocol: TCP
      targetPort: 80
    selector:
        app: static
    type: ClusterIP
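
We apply both manifests and check that the static Pods and Service are up:

$ kubectl apply -f deployment-static.yml -f static-svc.yml
$ kubectl get pods,svc -l app=static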

One last thing before we can actually see our static content in the browser: we need to add a new path to our ingress.yml file. The complete file should look as follows:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
    name: frontend-ingress
spec:
    rules:
    - http:
        paths:
        - path: /api
          backend:
            serviceName: frontend-svc
            servicePort: 3000
        - path: /
          backend:
            serviceName: static-svc
            servicePort: 80

An important thing to note here is the order of the paths. If we placed the / path before /api, it would capture any and all requests, including those destined for /api. For that reason, pay close attention to the order of paths in an Ingress: the longest (most specific) paths should come first, followed by shorter ones, ending with the root path (/).
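
To roll out the updated routing rules, we simply reapply the file:

$ kubectl apply -f ingress.yml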

With the new configuration applied to the cluster, we should see the following when we open http://localhost:

[Screenshot: the application’s web UI listing usernames and their messages]

Behind the scenes, the page is calling /api through an AJAX call, then fetches the list of usernames and their messages as a JSON array, and formats it so that it appears as above.

Step 8: Packaging Our Kubernetes Resources Using Helm

Whenever we need to replicate our setup in another environment, we’ll need to reapply all the configuration files (Deployments, Services, ConfigMaps, etc.). However, environments are rarely identical. For example, a test environment doesn’t need as many replicas of the application as production; more replicas mean more resources and higher costs, so many teams run a downsized environment (often around 50%) for testing. That means changing the number of replicas in the Deployment file(s), which in turn implies keeping a separate copy of the files for the testing environment. More often than not, you’ll end up with two or three environments in addition to production, and keeping a separate copy of the resource files for each of them is neither practical nor scalable. The recommended method is to use a templating tool like Helm. Helm packages your resources into Charts, whose templates let you keep placeholders for the values that may change from one environment to another (in our example, the number of replicas). When you need to deploy to a new environment, all you need to do is supply appropriate values for a few variables without touching the main logic. Now, let’s start by installing Helm.

Installing Helm

Installing Helm is simple. It involves two steps:

  1. Installing the client binary.
  2. Deploying the server-side part (Tiller).

Installing the client depends on which OS you’re using. In our case, we’re on macOS, so we installed Helm using Homebrew: brew install helm. If you’re using another OS, please consult the documentation for your specific case. Once the client is installed, we need to install Tiller, the server-side part of Helm. Tiller is responsible for executing whatever Helm asks for against the cluster (creating, updating, and removing resources). Tiller can be installed using the following commands:

$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

For your reference, the first command creates a Service Account named tiller, the second grants that service account administrative privileges on the cluster, and the last connects our Helm client to Tiller.
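
To confirm that the client can reach Tiller, check that both sides report a version:

$ helm version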

Working With The Helm Chart

Helm works with a chart directory that contains a Chart.yaml file. This file holds the package metadata, i.e., the name, version, and description of the chart. Let’s create a directory called helm and place Chart.yaml inside it. The contents of Chart.yaml should look as follows:

apiVersion: v1
appVersion: "1.0"
description: The Helm chart for the messages application
name: messagesapp
version: 1.0.0

Now, the remaining step is to create a directory called templates inside the chart directory and copy all our resource files into it. Wherever a file contains a parameter that should vary between environments, we replace it with a placeholder of the form {{ .Values.variable }}. In our example, we want the number of frontend Deployment replicas to depend on the target environment, so we replace the hardcoded number of replicas (2) with {{ .Values.replicas }}. The following is a snippet of our deployment.yml file after the modification:

spec:
    replicas: {{ .Values.replicas }}
    selector:

The question now is: how can we substitute those variables for the real values? You have two ways of doing this:

  • Through a dedicated YAML file containing all your values. It’s typically named values.yaml but you can change it as per your needs and reference it by name when using the helm install command (shown later).
  • Through passing the values as command-line arguments to the helm install command.

The two methods are not mutually exclusive; in fact, you can (and should) use both in your charts. The values file contains sensible defaults for all the variables used in your templates, and you can override any of them by passing values on the command line. Command-line-supplied values in Helm take precedence over the ones in the values file.
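
For our chart, a minimal values.yaml might contain nothing more than the default replica count (an assumption on our part, since replicas is the only value we’ve templated so far); at install time you could then override it with, for example, --set replicas=1:

replicas: 2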

Now that we have our files ready, execute the following commands to destroy our environment. We are going to use Helm to rebuild it automatically:

$ kubectl delete deployments frontend static
$ kubectl delete statefulsets redis
$ kubectl delete svc frontend-svc redis-svc static-svc
$ kubectl delete configmaps app-config
$ kubectl delete ing frontend-ingress

To redeploy our application stack (the frontend, the static content, and the backend Redis instance), we need only one command:

$ helm install --name messagesapp-helm helm/

Helm can be invoked in a number of different ways. In our case, we pointed it to the directory containing our Chart.yaml file. You may need to wait a few seconds until all the components are created. You can check the progress using the following command:

$ helm status messagesapp-helm

The install command assumes that you have the values in a file called values.yaml in the same directory as the Chart.yaml file. Finally, it’s a good idea to give your Helm deployment a name (through the --name flag). Otherwise, Helm automatically assigns a name for you. The name becomes very important when you need to make changes to the deployment (and apply them). For example, we may need to make changes to our ingress resource template. To apply this change, you just run a command like this:

$ helm upgrade messagesapp-helm helm/

Finally, you can use Helm to destroy the environment and remove all the resources that it created. Deleting the namespace may have the same effect, but you may have other needed resources in this namespace. The following command will purge the environment that we’ve just created:

$ helm delete messagesapp-helm

TL;DR

If you’re a veteran Kubernetes user, this article may not have much value for you. However, the main objective here is to bring together the typical steps that a user needs to follow when deploying an application to a Kubernetes cluster.

You can think of this article as a reference that you can get back to whenever you need a quick refresher about the most common resources used with Kubernetes.

The sample application we used, while fairly trivial, shares many of the requirements of a modern web application:

  • A front-end part where user requests are received.
  • A backend that stores state.
  • A static component that displays a nice user interface and connects to the front end.

Through our discussion, we visited Deployments, Services, ConfigMaps, Secrets, and Ingress. Finally, we saw how we can package the whole thing into an easy-to-deploy component using a Helm Chart. The real power of Helm comes when you need to take your cluster one step further and create a CI/CD pipeline. In CI/CD, each change may involve creating and pushing a new Docker image. Through Helm Charts, you can quickly update the existing Kubernetes Deployment to use any new image.

