Fork, Clone, Run: A GitOps Model for Provisioning Multi-Machine HA Clusters with Rolling Upgrades

By Chanwit Kaewkasi
March 03, 2020

Preparing a Kubernetes cluster should be as easy as 1-2-3. With the GitOps operational model, fork, clone, run is a simple workflow that anyone can use to provision easy-to-manage Kubernetes clusters. We’ll show you that GitOps is not only good for application deployments, but that it is also an effective workflow for managing and upgrading Kubernetes itself. WKSctl is the open source tool behind these cool things.

WKSctl is an open source tool, used by the Weave Kubernetes Platform (WKP) and Firekube, that provisions Kubernetes clusters directly from a Git repository. WKSctl implements the Cluster API specification and can bootstrap Kubernetes clusters with the novel master-of-masters pattern.

In this post, we’ll explore how you can also use WKSctl to manage the complete lifecycle of a Kubernetes cluster. This is the general workflow you will follow:

  • With Firekube, you will fork, clone, and run a cluster from Git. The cluster consists of Firecracker / Ignite micro-VMs running on Docker’s bridge network. The Git repository also includes the high availability setup for Firekube with HAProxy.
  • We’ll then explain the configuration that needs to be prepared for WKSctl to set up an HA Firekube cluster (in Ignite mode), using HAProxy as the Kubernetes API load balancer. We’ll also demonstrate how you can mix container and micro-VM workloads in the same setup if needed.
  • With the HA cluster running, we’ll show you how easy it is to upgrade Firekube from the Git repository. A patch upgrade will first take the cluster from 1.14.1 to 1.14.7, and then two minor-version upgrades will take it to 1.15.9 and 1.16.6, using only Git commits.

First let’s begin with the setup for the high availability Firekube cluster.

Fork, Clone, Run Firekube with WKSctl

Many teams are already using GitOps for application deployments and now we’ll show you how you can take GitOps even further by bootstrapping whole Kubernetes clusters with WKSctl straight from a Git repository. Everyone can now easily spin up their clusters in 3 simple steps: fork, clone, and run:

fork-clone-run.png

  1. Fork: fork the Git repository so that you can work on it from your own GitHub account. Here’s the base repository to fork from: https://github.com/chanwit/wks-firekube-upgrade-demo
  2. Clone: run git clone to download your forked repository onto your machine. Note that you need to clone over SSH so that you have write access. Here’s the command (substitute your GitHub user name): git clone git@github.com:<your-github-user>/wks-firekube-upgrade-demo
  3. Run: cd into the cloned directory and run the setup script to start your cluster:

cd wks-firekube-upgrade-demo

./setup.sh

From the logs, you’ll see the bootstrap node coming up as the first master. After that’s complete, wait a couple of minutes for the bootstrap node to finish provisioning the rest of the cluster, including the HA load balancer for the API server. When it’s done, you should have a cluster with 3 masters and 1 worker node, all running Kubernetes 1.14.1.
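To verify the cluster, you can list the nodes with kubectl. Here is a minimal check, assuming your kubeconfig already points at the new Firekube cluster; the ignite command is only relevant in Ignite mode:

# List all nodes: you should see 3 masters and 1 worker, each reporting VERSION v1.14.1.
kubectl get nodes -o wide

# Optionally, list the Firecracker micro-VMs backing the nodes.
sudo ignite ps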

In the next section, we’ll explain how to set up the load balancer for the Kubernetes API on a Firekube cluster.

Configuring an HA Firekube cluster

To prepare a Kubernetes cluster for high availability (HA), you place a load balancer (in our case HAProxy) in front of the master nodes to balance the traffic to and from the API server (the kube-apiserver component). Since a cloud native architecture these days can consist of a mix of container and VM workloads, we’ll run the external load balancer, HAProxy, in a container, while the Firekube cluster itself is provisioned in Ignite mode. In Ignite mode, every Kubernetes node is an Ignite instance running on a Firecracker micro-VM. You don’t need to choose between containers and VMs any more!

HA Firekube cluster.png

The HAProxy configuration haproxy.cfg is generated by a JK script. The cluster in this post has 3 masters with the IP addresses 172.17.0.{2,3,4} (your addresses may vary). The script obtains this list of IP addresses from machines.yaml and puts them into haproxy.cfg.

In the HAProxy configuration, we use TCP mode with round-robin balancing and health checks. The API server on each master node listens on port 6443, which is reachable over the Docker bridge. You can see the bridge network IP addresses in the configuration file below.

frontend k8s-api
  bind *:6443

  mode tcp
  option tcplog
  default_backend k8s-api

backend k8s-api
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server node0 172.17.0.2:6443 check
  server node1 172.17.0.3:6443 check
  server node2 172.17.0.4:6443 check

These are the commands for starting the HAProxy container instance and obtaining its IP address. Both commands are part of the setup.sh script.

docker run --detach \
        --name haproxy \
        -v $PWD/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg \
        haproxy
docker inspect haproxy --format "{{.NetworkSettings.IPAddress}}"
172.17.0.6

After running setup.sh, try curl 172.17.0.6:6443 to check that HAProxy is responding. You might notice that port 6443 is not published in the Docker command above. Don’t worry: because the host and the micro-VMs sit on the same Docker bridge network, you can still reach HAProxy at 172.17.0.6 without publishing the port.
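If you do want to reach the load balancer from outside the Docker bridge (for example, from another machine), one option is to publish the port when starting the container. This variation is not part of setup.sh; it is just a sketch of standard Docker port publishing:

# Variation (not in setup.sh): publish port 6443 on the host so that clients
# outside the Docker bridge network can reach the API load balancer.
docker run --detach \
        --name haproxy \
        -p 6443:6443 \
        -v $PWD/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg \
        haproxy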

To achieve HA mode, you will need to make a small tweak to cluster.yaml. What’s inside cluster.yaml? It contains the Cluster API Cluster resource that defines our cluster, along with the provider specification that tells the WKS controller which provider to use when creating the cluster.

Where to place an external LB.png

The JK script cluster_yaml.js (JavaScript Configuration as Code) generates the cluster.yaml manifest for us. However, you will need to add the HAProxy IP address to it, the address you obtained earlier with docker inspect; in this example it is 172.17.0.6.
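For illustration, here is a rough sketch of what the relevant part of cluster.yaml looks like. Treat the exact field names as an assumption based on the baremetal provider spec; your generated manifest is the source of truth, and the only manual change is the load balancer address:

# Sketch of cluster.yaml; the apiServer.externalLoadBalancer field is where
# we assume the HAProxy address goes, and 172.17.0.6 is this example's address.
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: firekube
spec:
  clusterNetwork:
    services:
      cidrBlocks: [10.96.0.0/12]
    pods:
      cidrBlocks: [192.168.0.0/16]
  providerSpec:
    value:
      apiVersion: baremetalproviderspec/v1alpha1
      kind: BareMetalClusterProviderSpec
      user: root
      apiServer:
        externalLoadBalancer: 172.17.0.6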

You should now have an HA Firekube cluster running. Check that all 3 master nodes and the 1 worker node are up. Next, we’ll upgrade the cluster, also with GitOps.

Upgrading Kubernetes with Git commits

When discussing the upgrade process, you could be talking about anything from upgrading the OS and packages on the nodes, to upgrading packaged Kubernetes components like the kubelet, to upgrading control plane components like the API server.

In this tutorial, we'll cover how to upgrade the control plane and the packaged Kubernetes components. The upgrade itself is performed by kubeadm, which ensures the compatibility of the upgraded cluster.
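To make the mechanics concrete, the steps automated on each master are roughly what you would otherwise run by hand with kubeadm. The following is an illustrative sketch of those manual steps (package names and versions are examples), not the controller’s actual implementation:

# Roughly the manual kubeadm steps automated by the upgrade plan (illustrative only).
sudo apt-get install -y kubeadm=1.14.7-00   # upgrade the kubeadm binary first
sudo kubeadm upgrade plan                   # check which target versions are available
sudo kubeadm upgrade apply v1.14.7          # upgrade the control plane components
sudo apt-get install -y kubelet=1.14.7-00   # then upgrade the kubelet
sudo systemctl restart kubelet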

The cool thing is that since we're using a 100% GitOps-managed Kubernetes cluster, the upgrade process is just a few commits away after tweaking the machine manifest.

First, specify a new version in config.yaml:

backend: ignite
controlPlane:
  nodes: 3
workers:
  nodes: 1
version: "1.14.1" # specify a new version here

Next, call the ./upgrade.sh script to re-generate the machine manifests. The generated manifests will contain the new version number for the kubelet, and the same version is applied to the control plane.
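For illustration, a single master entry in the re-generated machines.yaml looks roughly like the sketch below. The versions.kubelet field is the part that upgrade.sh bumps; the rest of the provider spec is an assumption and will differ in your fork:

# Sketch of one Machine entry in machines.yaml after bumping the version to 1.14.7
# (illustrative; your generated manifest is the source of truth).
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: master-0
  labels:
    set: master
spec:
  versions:
    kubelet: 1.14.7            # the version written by upgrade.sh
  providerSpec:
    value:
      apiVersion: baremetalproviderspec/v1alpha1
      kind: BareMetalMachineProviderSpec
      public:
        address: 172.17.0.2
        port: 22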

How the upgrade process is triggered.png

Git to cluster synchronization with Flux and the WKS controller

The upgrade.sh script commits and pushes the changes to the remote Git repository, which is watched by Flux. After the changes are pushed, Flux retrieves them and applies them to Kubernetes as resources.

The WKS controller (a component of both WKSctl and the commercial version, WKP) also watches these resources through the API server. The controller picks up the changes, builds an upgrade plan, and applies the plan to each node; in this scenario, the plan includes the kubeadm upgrade steps. In the following video, you’ll notice that the Firekube cluster is upgraded three times. The first is a patch upgrade taking Kubernetes from 1.14.1 to 1.14.7. After that, two minor-version upgrades bring the cluster to 1.15.9 and finally to 1.16.6. For example, to move from 1.14.1 to 1.14.7, the version field in config.yaml is all that changes:

backend: ignite
controlPlane:
  nodes: 3
workers:
  nodes: 1
version: "1.14.7" # <--- just change this line again

Make the change to the config.yaml file and call ./upgrade.sh to kick off the upgrade process; repeat this for each subsequent version.
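Here is a minimal sketch of the loop you repeat for each target version, assuming the config.yaml and upgrade.sh shown above; the kubectl command is simply one way to watch the rolling upgrade:

# For each target version (1.14.7, then 1.15.9, then 1.16.6):
#   1. edit the version field in config.yaml
#   2. run upgrade.sh to regenerate the manifests, commit, and push
./upgrade.sh

# Watch the rolling upgrade: the VERSION column advances node by node
# as the WKS controller applies the upgrade plan.
kubectl get nodes --watch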

Here’s the video of the upgrade process. We get to Kubernetes 1.16.6 with just a few commits.

Summary

In this post, we showed that spinning up a Kubernetes cluster is relatively easy. In just three steps: Fork, Clone and Run, you get a 100% GitOps-managed Kubernetes cluster on your laptop.

We also showed you how to set up an HA Firekube cluster with a Kubernetes API load balancer, by running HAProxy in a container while the Firekube cluster itself is provisioned as Ignite micro-VMs. These days it doesn’t really matter whether you’re using containers or VMs; you can run both kinds of workloads together on the same network.

We then demonstrated that when a Kubernetes cluster is GitOps-managed, the upgrade process is extremely easy: simply change the Kubernetes version in a config file, commit the change, and push it to the remote repository. Everything else is then taken care of by the WKS controller (the controller component of WKSctl), which upgrades the cluster for us. This feature is also part of our Weave Kubernetes Platform. Please contact us if you’d like commercial support for this feature.


Related posts

Enterprise Ready GitOps with the Weave Kubernetes Platform

InfoQ on WKSctl and Cluster Configuration Management with GitOps

Weave GitOps Manager Adds Policy Based Cluster Automation to Kubernetes

Schedule a demo of the Weave Kubernetes Platform with our team.