Weave Cloud and The Future of K8s: A Simplified Kubernetes Installation

Since announcing kubeadm in September as a simplified way to install Kubernetes, SIG-cluster-lifecycle has made a number of updates to the tool. The goal remains the same: to make it the standard for Kubernetes installation.
Lucas Käldström is the youngest contributor to Kubernetes, a member of SIG-cluster-lifecycle, and now a Weaveworks contractor. Here he outlines what future versions of kubeadm aim to accomplish and what the improvements to kubeadm mean for Kubernetes as a whole.
Flexibility
Kubernetes is very extensible by nature. You can set it up exactly to your liking, and it’s general-purpose enough that you can run virtually anything on top of it. In SIG-cluster-lifecycle, we want to retain that flexibility in kubeadm, creating a tool that allows you to customize your installation to individual specifications while remaining easy to use.
One of the first steps in making kubeadm work for any user on any platform was the decision to make it multi-architectural. This was important to incorporate for the long-term use of kubeadm, since it allows users to choose their own hardware, their own OS, and their own network provider. Essentially, you don’t have to think about the underlying architecture during installation. Further, I’ve written a multi-platform proposal that was accepted as a design for Kubernetes, and kubeadm follows those guidelines.
If you want to run Kubernetes on ARM, why should you be limited? Running Kubernetes on ARM can be great for educational environments such as conferences or classrooms, where having desktop computers on hand to show an example cluster just isn’t feasible. Spinning up Kubernetes on a small ARM device is also just fun, which is the primary reason I set up Kubernetes on an ARM device using kubeadm.
The opinionated installation story
Kubectl is the CLI for running commands against Kubernetes clusters. Using kubeadm and kubectl, installing Kubernetes and then Weave Net on your Kubernetes cluster is now a very straightforward task. Luke Marsden gave a great rundown on the Kubernetes blog in September, when kubeadm was still in alpha. Going forward, we’re excited by the possibility of having that same flexibility present for all Weave software installed on a Kubernetes cluster.
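To give a feel for how straightforward that flow is, here is a rough sketch; the exact flags, the kubeconfig path, and the Weave Net manifest URL vary between kubeadm and Weave Net versions, so treat this as illustrative rather than copy-paste instructions.

```sh
# On the machine that will become the master: bring up the control plane.
kubeadm init

# Point kubectl at the new cluster (the kubeconfig path may differ by version).
export KUBECONFIG=/etc/kubernetes/admin.conf

# Install Weave Net as the pod network add-on.
kubectl apply -f https://git.io/weave-kube

# On every other machine: join it to the cluster using the token and
# master address printed by `kubeadm init`.
kubeadm join --token <token> <master-ip>:<master-port>
```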
The future of kubeadm
Designing kubeadm is an opportunity to gain in-depth knowledge of Kubernetes’ internal structure, and it also surfaces drawbacks in Kubernetes that could be improved. After all, our goal is to simplify Kubernetes as a whole and not just the installation. And we’ll get there by making kubeadm the building block for many Kubernetes deployments.
However, we still have things to learn and test as we optimize kubeadm for the future. Despite the challenges facing us, we’re excited about what we intend to accomplish over the upcoming releases.
One of our challenges is implementing reliable self-hosting. The self-hosted way of running Kubernetes was developed by CoreOS; many thanks to them for coming up with the concept. There are indeed difficulties with running Kubernetes components in self-hosted mode, but gradually we’re getting there. Bootkube is a project in the Kubernetes incubator that currently serves as the only easy way of self-hosting Kubernetes. We want to upstream these features to kubeadm, so that other higher-level tools that will use kubeadm in the future, like kops or kargo, can easily create self-hosted clusters. Experimental self-hosting support in kubeadm might be included in v1.6.
The main focus for v1.6 has been stabilization and security. RBAC (Role-Based Access Control) support is now present, and it is the default authorization mode in kubeadm. This is great because the API Server now requires all requests to it to be authorized and only lets granted users, groups, and ServiceAccounts access and edit the precious information stored in the API.
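As an illustration of what that means in practice (this is generic RBAC usage, not something kubeadm does for you), here is roughly how you would grant access once RBAC is the authorization mode. The user “jane” and the object names are hypothetical, and the rbac.authorization.k8s.io/v1beta1 API version is the one shipping around v1.6.

```sh
# Illustrative only: with RBAC enabled, access must be granted explicitly.
# This gives a hypothetical user "jane" read-only access to pods in "default".
kubectl apply -f - <<EOF
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jane-reads-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```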
The token bootstrap framework has also been rewritten in line with our proposal. Earlier, Kubernetes relied on a CSV file on disk for authenticating users by token. This method has many drawbacks, especially in a world of ever-changing tokens and API Servers: managing a file somewhere on disk every time you want to grant a new user access to the API Server with a new token is a high maintenance burden. You would also have to restart the API Server, which could lead to a small but significant service outage. We want to solve this by letting you create bootstrap tokens as Secrets in the kube-system namespace so they can be changed dynamically. The API Server will also expose a public ConfigMap containing the information required for joining the cluster. This way a node can join a given master as long as it has a valid token, and the node can also be sure it’s talking to the right master and can trust that no man-in-the-middle attack has taken place.
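For the curious, a bootstrap token stored as a Secret could look roughly like the sketch below. The field names follow the bootstrap-token proposal and may differ in detail in the released implementation; the token a user actually types is the concatenation “&lt;token-id&gt;.&lt;token-secret&gt;”.

```sh
# Rough sketch of a bootstrap token stored as a Secret in kube-system
# (field names follow the bootstrap-token proposal; details may vary by release).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef
  token-secret: 0123456789abcdef
  expiration: "2017-03-22T00:00:00Z"   # tokens can be made to expire
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
EOF
```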
But you won’t have to think about these implementation details; kubeadm will set this up for you automatically. You will see many improvements compared to kubeadm v1.5. For example, you will be able to create expiring tokens that are valid only for a limited timeframe, which also enhances security, and you will be able to add a valid bootstrap token to an already-running cluster without any service outage.
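In practice, managing tokens on a running cluster could look something like the following; the exact `kubeadm token` subcommands and flags depend on the kubeadm version you end up running.

```sh
# A sketch of managing bootstrap tokens on a running cluster
# (exact subcommands and flags depend on your kubeadm version).

# Create a new token that expires after 24 hours, without restarting anything:
kubeadm token create --ttl 24h

# List the tokens that are currently valid:
kubeadm token list

# Revoke a token that is no longer needed:
kubeadm token delete abcdef.0123456789abcdef
```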
Another feature we want to include in future versions is letting users customize commands. Right now the only commands available are init and join, with init doing about six different things, including bringing up the control plane. However, if you want to bootstrap only the control plane and generate the certificates yourself, you can’t, at least not as things stand today. And we don’t want it to stay that way. In the coming releases, we’re hoping to make all of these phases separately invokable. This will allow other higher-level tools to build more functionality on top of kubeadm, without having to redo the common tasks that can be delegated to kubeadm.
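As a purely hypothetical sketch of where this could go, separately invokable phases might look something like the commands below; the real command names and how the phases are grouped will only be decided as the work lands in kubeadm.

```sh
# Hypothetical: generate only the certificates (or skip this step entirely
# and bring your own certificates).
kubeadm phase certs all

# Hypothetical: bring up only the control plane, reusing existing certificates.
kubeadm phase controlplane all
```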
Further, we want to remove the need for tedious manual flag changes on each node in an upcoming release of kubeadm and Kubernetes. This is a problem with cloud configuration, for instance, since changes to your cloud often mean reconfiguring the kubelet on every node. Instead, there will be a ConfigMap in the API that all kubelets dynamically check and take their configuration from. With that in place, adding a specific flag to the kubelets in the cluster is as easy as updating the ConfigMap with kubectl: you only need access to the API and no longer need to touch each node.
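Conceptually, the change boils down to a single API-side edit instead of a per-node one; the ConfigMap name below is hypothetical.

```sh
# Illustrative only: edit the shared kubelet configuration in the API.
# Every kubelet watching this ConfigMap picks up the change; no per-node
# flag edits are needed.
kubectl -n kube-system edit configmap kubelet-config
```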
We’re working to align kubeadm’s releases with the Kubernetes release schedule. The launch of kubeadm 1.6 will be March 21 (one week before KubeCon in Berlin) to coincide with the latest release of Kubernetes.
Get involved
Join me and the Weaveworks team at CloudNativeCon + KubeCon Europe 2017 on March 29-30, where I’ll be speaking in depth about kubeadm. I’d love to hear your thoughts on kubeadm and installing Kubernetes on ARM devices.
If you’re curious about kubeadm, join SIG-cluster-lifecycle to learn more about the project and how you can make a contribution. We have collaborators from all over the world, and we’d love to have you on our journey toward improving Kubernetes. You can also reach out to me on Twitter. We’re always looking for constructive community feedback.