Running Highly Available Clusters with Kubeadm
At a recent Weave Online User Group, Lucas Käldström, a SIG cluster lifecycle leader, took us through some of the important new features in the recent 1.15 release of Kubernetes and kubeadm.
Automated support for Highly Available (HA) clusters is currently in beta in kubeadm v1.15. Lucas provided an overview of how High Availability works with kubeadm and also gave a demo on how to configure an HA cluster with kubeadm.
Kubeadm - a standard Kubernetes installer
Kubeadm is a Kubernetes installer that enables operators to quickly bootstrap minimum viable clusters that are fully compliant with certified Kubernetes guidelines. Kubeadm has been under active development by SIG Cluster Lifecycle since 2016; it went from beta to Generally Available (GA) at the end of 2018.
With this important GA milestone achieved, the kubeadm team could shift its focus to improving and extending the stability of its core feature set.
SIG Cluster Lifecycle
SIG Cluster Lifecycle has more than 600 members, with hundreds of contributors working on 17 sub-projects and several working groups. This SIG is mostly concerned with simplifying the creation, configuration, upgrade, downgrade and teardown of Kubernetes clusters and their components.
The developer experience for installing, upgrading and adding components to Kubernetes is a significant focus of this SIG. And since all of these components are described declaratively, they can be kept alongside code, allowing them to be managed with GitOps.
The diagram below illustrates where kubeadm resides in the full Kubernetes stack. At the bottom of the stack is your infrastructure layer. This can be a public cloud, for example Google Cloud, Microsoft Azure or AWS; or you may have your own datacenter and be installing Kubernetes on-premise, perhaps even on Raspberry Pis.
Above this layer are the machines or nodes that will run your clusters. These can be physical, virtual or something in between. You will run the command kubeadm init on these machines, and since you will want an HA cluster, you will instruct kubeadm on the number of masters that make up the control plane before joining all of the nodes into a cluster.
Note: As a general rule, a Highly Available cluster means that you have a fail-over strategy, which for a Kubernetes cluster means at the very minimum three master nodes for a single control plane.
Once the cluster bootstrapping is complete and it passes all conformance tests, any add-on tools can then be applied to complete the full cluster platform. At the centre of all of these add-ons, is kubeadm which is responsible for bootstrapping the control plane.
Kubeadm vs an end-to-end solution like Weaveworks Kubernetes Platform
In the case of Weaveworks Kubernetes Platform, after the upstream version of the kubernetes is installed, you can complete your platform by installing the add-ons to complete your DevOps toolchain such as Fluxcd for continuous delivery, or for example, Prometheus for monitoring and observability, as well as a cluster management dashboard.
Weaveworks Kubernetes Platform provides cluster management and fleet automation. WKP delivers managed clusters on demand, including everything you need for production applications whether using upstream open source or cloud hosted Kubernetes. GitOps increases velocity safely with automation plus policy controls. Upgrade, patch and secure fleets and pipelines directly from audited and managed configurations. Add progressive delivery to control and observe feature roll-outs, with continuous monitoring to alert on drift.
Getting started with kubeadm
Lucas then described how kubeadm works and how you can use it to create a single control-plane cluster.
To get started with kubeadm run the following on each of the machines:
kubeadm init <args>
- When you run kubeadm init, it first generates a number of certificates. Close to 15 certificates are required to run Kubernetes.
- The control plane is then bootstrapped.
- A join token is created with the first node identified as the master.
- Lastly, the CoreDNS and kube-proxy add-ons are applied.
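The bootstrap sequence above can be sketched as a short command sequence. This is an illustrative sketch, not taken from the talk; the address, token and hash are placeholders:

```shell
# On the first machine: bootstrap the control plane.
# kubeadm generates the certificates, starts the control-plane
# components, creates a bootstrap token and applies CoreDNS and kube-proxy.
sudo kubeadm init

# Make kubectl work for the current (non-root) user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker machine: join using the token printed by kubeadm init.
# (<master-ip>, <token> and <hash> are placeholders.)
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```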
How does automated High Availability work in kubeadm?
To create an HA cluster you need to pass the --control-plane flag to kubeadm join when adding more control plane nodes.
For more details on what the options are for HA clusters, see Options for Highly Available topology.
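Concretely, joining an additional control-plane node looks something like the sketch below. The endpoint is a placeholder for your load balancer address, and the token, hash and certificate key come from the output of kubeadm init:

```shell
# On an additional control-plane node (values are placeholders).
sudo kubeadm join <load-balancer-endpoint>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>
```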
[Diagram: kubeadm HA topology with stacked etcd (from Options for Highly Available topology)]
[Diagram: kubeadm HA topology with external etcd (from Options for Highly Available topology)]
How to create a Highly Available cluster with kubeadm
In summary, these are the general steps for creating an HA cluster:
- Set up a Load Balancer. There are a number of open source options available for load balancing: HAProxy, Envoy, or a similar Load Balancer from a cloud provider works well. See Creating Highly Available clusters with kubeadm.
- Run kubeadm init on the first control plane node, with these modifications:
- Create a kubeadm Config File
- In the config file, set the controlPlaneEndpoint field to the address where your Load Balancer can be reached.
- Run init with the --upload-certs flag like this:
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs
- Run kubeadm join --control-plane at any time when you want to expand the set of control plane nodes.
The control-plane and the normal nodes can be joined in any order, at any time.
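As a sketch, a minimal kubeadm config file for this setup might look like the following. The load-balancer address and Kubernetes version here are illustrative assumptions, not values from the talk:

```yaml
# kubeadm-config.yaml -- minimal HA configuration (illustrative)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
# DNS name (or IP) and port of the load balancer in front of
# the API servers -- replace with your own endpoint.
controlPlaneEndpoint: "lb.example.com:6443"
```

You would then pass this file to kubeadm init with --config, as shown in the steps above.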
The command you need to run is generated by kubeadm init above, and is of the form:
kubeadm join [LB endpoint] \
    --token ... \
    --discovery-token-ca-cert-hash sha256:... \
    --control-plane --certificate-key ...
To view the talk in its entirety, watch the full video recording.
Want to participate and ask questions? Join the Weave Online User Group to be notified of upcoming events, webinars, in person meetups and online talks like this one.