WKSctl - A New OSS Kubernetes Manager using GitOps

By Alexis Richardson
October 03, 2019

Introducing WKSctl - a new open source tool for installing and managing Kubernetes using GitOps. Read this post to learn about its features, the reasons we created it, how it works, and how to get started.


Today we are announcing the release of WKSctl - a new open source tool to install and manage Kubernetes. WKSctl provides a new way to manage application clusters based on GitOps and leverages recent work in Kubernetes, such as the Cluster API.

WKSctl is part of our commercial enterprise product, the Weave Kubernetes Platform (WKP), which also uses GitOps to provide policy management, operational automation and cross-cloud management capabilities. WKP uses EKSctl to drive EKS clusters, and it uses WKSctl to drive open source Kubernetes, in the public cloud, in private data centers, or air-gapped.

Kubernetes: What problems need solving?

We’d like application developers and operators to have an easier Kubernetes experience.

  • Clusters should be cattle - easy to start, stop and delete. Anywhere.
  • We want “application clusters” that are ready for use; that means a choice of add-ons and pre-configured extension components (e.g. ingress, monitoring).
  • Creation of application clusters should be reliable, reproducible and verifiable from configuration.
  • No dependence on complex scripts and CLI flags, which can lead to incorrect or unknown cluster state and to “snowflake clusters” that cannot be safely shut down or upgraded.

It should be easy to keep your Kubernetes cluster configuration consistent from development all the way through to production, and across different infrastructure environments.

What is WKSctl?

WKSctl is ‘GitOps for cluster configuration management’. It has three important features:

  1. Correct clusters made easy: Using a single tool, SREs and DevOps engineers can create development to production-grade clusters. In a single step, a cluster can be spun up to include add-ons such as Helm and Prometheus, all correctly configured. Clusters can also be easily replicated on demand, removing a major source of Kubernetes pain.
  2. Continuous verification: WKSctl keeps clusters in a verifiably correct state, using a GitOps control loop to enforce changes to the whole system. This simplifies runtime cluster and add-on management and cluster upgrades (including some migrations), all from a single source of truth in Git.
  3. Getting started with GitOps: Our implementation also lets you start from a Git repo, and your cluster is set up with GitOps out of the box.
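For example, bootstrapping a cluster from a Git repository might look like this (the repository URL, branch, and key path are placeholders, and the exact flag names should be checked against the wksctl documentation):

```shell
# Point wksctl at a Git repo containing cluster.yaml and machines.yaml;
# the cluster is created and then kept in sync with the repo (GitOps mode).
wksctl apply \
  --git-url git@github.com:example-org/cluster-config.git \
  --git-branch master \
  --git-deploy-key-path ./deploy-key

# Retrieve a kubeconfig for the newly created cluster.
wksctl kubeconfig --git-url git@github.com:example-org/cluster-config.git
```

From this point on, changes pushed to the repo, rather than further CLI invocations, drive the cluster's state.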

Why a new Kubernetes tool?

Automation of course. At cloud scale, customers need automation for faster application delivery and lower cost infrastructure management. Our vision is that GitOps will provide this for all layers of the stack. The future data center - edge, cloud, enterprise - is a GitOps data center that is based on Kubernetes with a new stack of tools that are lean, portable and all managed via declarative configuration.

We believe Kubernetes will be table stakes for most applications moving forward, and so we created WKSctl and WKP. These tools help those who are new to Kubernetes grow from developer experiments to production-grade environments with minimal retooling.

Snowflake clusters

At ground level today, we see many customers struggling with snowflake clusters created with an old version of Kubernetes and various add-ons implemented by a bold tech lead. Many teams are not resourced to upgrade or modify these systems. Even before a team manages to get a cluster running, the number of choices they face is overwhelming. Do they start with minikube or kind? What are the differences? When applications need to be made available to others on the team or to production, do they DIY a custom, snowflake cluster?

What options do they have?

  1. Migrate to a hosted solution (e.g. EKS). Weaveworks helps customers run on EKS with simple tools such as EKSctl - the official CLI for AWS.
  2. Go all in with IBM-Red Hat or VMware. These ‘boxed’ offerings also work well with Weave's GitOps app delivery. Note, however, that their main goal is to create value through vertical end-to-end integration, which may not leave room for customer choice, e.g. to use AWS Linux or to pay per hour.
  3. Go for a leaner option that extends upstream components and can be adapted to run wherever you like with the add-ons that you need. WKSctl aims to fill this gap for open source users, along with WKP as a commercial enterprise option.

Let’s look at some of the implications of options 2 and 3 above.

The problem with upstream Kubernetes

We believe that more and more customers will migrate to hosted Kubernetes like EKS, AKS and GKE over time. However, the practical reality today is that many enterprises also want to manage their own Kubernetes. WKSctl aims to help these users.

Many enterprises tell us that they want to use “Upstream Kubernetes”. These customers want support from a vendor, but they also want to know that it is easy for them to switch vendors.

These customers tell us that they don’t want a vendor to provide them with a proprietary distribution that forks Kubernetes from upstream, or adds a lot of custom configuration and “tweaks”. The same customers also ask for flexibility, like using their own choice(s) of Linux and provisioning tools according to their environment, rather than a vendor’s single option.

Providing “enterprise grade support” for upstream Kubernetes creates three problems:

  1. There is a need for a separate installer that can also manage and upgrade clusters.
  2. This installer must be able to add and configure extensions, such as ingress.
  3. Ideally, the installer has access to verified security patches which may not yet be in a Kubernetes distribution.

Traditionally, vendors have solved (1-3) by offering an enterprise distribution that bundles Kubernetes with management tools, add-ons and access to patches. Some of this may be open source, some may be commercial.

Here is how we solve this to give users and customers true “enterprise open source”:

For free users of open source WKSctl:

  1. WKSctl is a stand-alone installer and cluster controller, which provides enterprise runtime management and upgrades, on a single-cluster basis.
  2. As a baseline option, WKSctl works with upstream Kubernetes.
  3. WKSctl OSS can work with your choice of OS, on-metal, VM, etc.

For paying customers of WKP:

  1. WKSctl can also subscribe to a Weaveworks repo to add extensions. It’s not a “forked distribution”, but rather an additive to upstream Kubernetes.
  2. Subscribers also have access to a patch stream in case of CVEs.
  3. We certify supported combinations of OS, virtualization, on-metal, air-gapped install, etc.

In conclusion: WKSctl offers a range of ways to help you, from free to paid. Upgrades are simpler, and your choice of operating system, VM or provisioning tools is not restricted. You can easily use WKSctl in your CI/CD pipelines, adding GitOps cluster lifecycle management to your delivery toolchain, or you can make it part of your operational stack. Weaveworks offers full commercial support and additional features and services in WKP.

Overall we see WKSctl as helping customers to make infrastructure predictable, boring and low cost. Everyone can focus on building applications across a multitude of Kubernetes options.

How WKSctl works

To build WKSctl, we used our well known Flux tool (now in CNCF) to apply GitOps to whole application cluster configurations. As a result, WKSctl manages the running state of your cluster by tracking the source of truth specified in Git. Flux has helped many users to deploy and manage applications, and now you can also use it to manage Kubernetes clusters. You can continue to use Flux for applications alongside WKSctl for clusters.
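The GitOps control loop at the heart of this is simple to sketch. The snippet below is an illustration only, using plain directories to stand in for the Git source of truth and the live cluster state (wksctl's real loop uses Flux and the Kubernetes API, not files):

```shell
#!/bin/sh
# Minimal sketch of a GitOps-style reconciliation pass.
desired=$(mktemp -d)   # what Git says the cluster should look like
actual=$(mktemp -d)    # what the "cluster" currently looks like

printf 'replicas: 3\n' > "$desired/deploy.yaml"   # source of truth
printf 'replicas: 1\n' > "$actual/deploy.yaml"    # drifted live state

# One reconciliation pass: detect drift, then converge actual toward desired.
if ! diff -rq "$desired" "$actual" >/dev/null 2>&1; then
  cp "$desired"/. "$actual"/ -r
fi

diff -rq "$desired" "$actual" >/dev/null 2>&1 && echo "converged"
```

Running this loop continuously is what keeps the cluster from drifting away from its declared configuration.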

From an implementation perspective, WKSctl uses the Cluster API. A “WKS controller” component implements a Cluster API provider. Unlike other providers, WKSctl can self-bootstrap, i.e. it can create a cluster out of nothing: all it needs is SSH access to a set of VMs/machines.

The WKSctl installer can work with a number of Kubernetes choices - it does not enforce a specific distro unless the user wants it to (e.g. for security patches). WKSctl “subscribes” to Git repos to make this work.

(Figure: WKP architecture diagram)

WKSctl assumes that a user will do their own machine or VM provisioning, using one or more of the many tools available. For example, an enterprise might use Terraform and Ubuntu on metal for on-premises clusters, and Terraform and AWS for cloud applications.

WKSctl requires a user to provide a set of SSH endpoints and credentials. It can then bootstrap and deploy clusters. Using these endpoints, WKSctl will SSH into the master machine, install the specified version of Kubernetes and start the cluster. With the single-node cluster up and running, we install our wks-controller, which is a Cluster API implementation for VMs/bare metal.
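As an illustration, the machines in the cluster repo are described as Cluster API Machine objects; the sketch below shows the general shape (the addresses are placeholders, and the exact schema is defined by wksctl's bare-metal provider, so check the wksctl examples for the authoritative format):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: master-0
  labels:
    set: master          # wks-controller installs the control plane here
spec:
  providerSpec:
    value:
      apiVersion: baremetalproviderspec/v1alpha1
      kind: BareMetalMachineProviderSpec
      public:
        address: 192.168.100.10   # SSH endpoint used for bootstrap
        port: 22
---
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: worker-0
  labels:
    set: node            # joined to the cluster as a worker
spec:
  providerSpec:
    value:
      apiVersion: baremetalproviderspec/v1alpha1
      kind: BareMetalMachineProviderSpec
      public:
        address: 192.168.100.11
        port: 22
```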

Additionally, we install and configure Flux for GitOps. Flux will sync the additional machine manifests, along with any customer workloads, and the wks-controller will begin converting those machines into nodes by installing Kubernetes and joining them to the cluster.

We support the ecosystem’s standard tools

At Weaveworks we believe that customers just want Kubernetes to work, so they can build higher value systems on top. We are committed to making our tools work with as many other vendors as we reasonably can.

As a basis for this:

  1. WKSctl by default installs standard OSS Kubernetes and kubeadm
  2. Cluster configuration uses SIG Cluster Lifecycle work on Cluster API, Component Config, and Add-ons
  3. A key objective is interoperable GitOps (e.g. with EKSctl and other stacks in the future)

We’ll keep working on this through our ongoing involvement in the Kubernetes SIG Cluster Lifecycle. Contact us if you have questions.

Commercial options

WKSctl is free and open source.

You can upgrade to a commercial subscription for:

  • General 24/7 support for Kubernetes, WKSctl, and certified add-ons
  • Timely access to CVE security patches
  • A curated Weaveworks Kubernetes Distribution
  • Management of operating system patches for certified OS
  • Air-gapped installs

Our commercial product, WKP, includes the Weave GitOps Manager. This provides integrated support for application delivery and verification alongside advanced policy management.

As you scale to apps and clusters managed by more teams, making more frequent updates, you need policy and governance:

  • WKP applies permissions at the GitHub PR gate - where GitOps starts.
  • Advanced config as code options (we've bundled JKcfg with Flux).
  • Cluster dashboard with GitOps visibility.

Conclusion

Installing and managing Kubernetes clusters does not have to be difficult or expensive. We want to see consistent automated cluster and add-on management, with all entry costs driven down to zero. WKSctl is a cluster installer and manager. Unlike traditional installers, WKSctl implements a control loop, based on a source of truth in git, to enforce correct and consistent cluster deployments every time.

A complete application-ready environment is created correctly, directly from git, that includes pre-configured add-ons like Helm and Prometheus. WKSctl works with upstream Kubernetes and your choice of Linux and provisioning tools. Because the solution is 100% declarative, it lends itself well to fleet scale automation.

For commercial options, contact us for a demo of both WKSctl and the Weave Kubernetes Platform.

