Weave Kubernetes Platform with GitOps Policy Management
The new release of the Weave Kubernetes Platform (WKP) makes it easy to build and deploy clusters and all their components using GitOps. In the first post we described the GitOps Manager's capabilities and how it's at the centre of the operational model, enabling you to spin-up clusters anywhere - in public clouds, on-premise or on other types of nodes like OpenStack. In this post we'll examine the main features and why they're beneficial, looking at cluster add-ons and automating policy so that we can determine who can do what across multiple teams.
How WKP helps manage Kubernetes complexity
The reasons to implement WKP in your organization today include:
- Speed and certainty: A reduction in the time to create, update and manage production-ready application clusters, including all of the correct add-ons needed for a developer- and ops-ready Kubernetes stack.
- Continuous verification: Validate application and cluster changes, and be alerted to inconsistent cluster states through a set of pre-configured, instant dashboards.
- Reliable deployment: An automated, less error-prone method for developers to define application deployments and cluster configuration that avoids manually editing YAML files.
- Security: Better security through GitOps-based policy management with real-time checks to ensure that only the right people can make specific changes to clusters and applications.
- Automated ops: Less operations overhead with automated cluster lifecycle management: upgrades, security patches, and automated upgrades to any cluster extensions.
Reproducible, correct application clusters
The heart of WKP is its ability to create a base configuration of the Kubernetes stack that can be managed and maintained in Git. This includes managing applications as well as clusters and add-ons.
With the configuration stored in Git, team members can create an identical cluster with a preconfigured development stack that can be applied to whatever environment they need, for example, Development, QA/Staging, or Production.
Since Git maintains your cluster configuration (including cluster add-on configurations such as monitoring) as well as your application workloads, GitOps can and should be used to create clusters, initiate cluster patches or minor version upgrades, add or remove cluster nodes, and perform other management operations. With GitOps at the centre of the Weave Kubernetes Platform, it’s simple to create reproducible, production-ready Kubernetes platforms for both application development and production.
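As a minimal sketch of this workflow (the repo layout, file names, and API group below are illustrative, not WKP's actual schema), a cluster change is just another commit to the configuration repo, which a GitOps operator watching the repo would then apply:

```shell
# Create a throwaway Git repo standing in for a WKP config repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ops@example.com"
git config user.name "Ops"

# A cluster setting lives in a YAML file under version control.
mkdir -p cluster
cat > cluster/nodes.yaml <<'EOF'
apiVersion: cluster.example/v1   # illustrative API group
kind: NodePool
metadata:
  name: workers
spec:
  replicas: 3
EOF
git add cluster/nodes.yaml
git commit -qm "Add worker node pool (3 replicas)"

# Scaling the cluster is just another commit; a GitOps operator
# watching the repo would notice the change and apply it.
sed -i 's/replicas: 3/replicas: 5/' cluster/nodes.yaml
git commit -qam "Scale workers to 5 replicas"
git log --oneline
```

Because every change is a commit, the Git history doubles as an audit log, and rolling back is a `git revert`.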
Supporting user choice of Kubernetes
WKP works with both open source Kubernetes and cloud-hosted Kubernetes (e.g. EKS). The system is extensible: if you are using a particular Kubernetes distribution, please contact us for details on integration.
GitOps policy management: for user roles and permissions
At the core of reproducible, correct cluster configuration is Weave GitOps Manager, delivering policy management. Policies and rules can be set up by Ops/SRE or DevOps teams to determine who can commit changes to the base Kubernetes configuration.
The rules themselves are also kept in Git, and roles govern who can change what.
This is what happens during a policy verification:
- A pull request is made in Git.
- The Weaveworks GitOps Policy Manager verifies that the changes in the PR are valid and that the user is permitted to make them.
- If the PR passes the GitOps Policy Manager verification, it can be merged into the configuration repo and is ready to be applied to your cluster.
- If the PR fails, errors are reported to the user and the PR is blocked from being merged, keeping your cluster safe from potentially damaging changes.
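Conceptually, the gate behaves like a required pre-merge status check. The toy script below is only a stand-in for that idea (the real GitOps Policy Manager evaluates role-based policy rules, not the hard-coded path patterns shown here):

```shell
# Toy pre-merge policy check: only files under apps/ may change;
# cluster/ is treated as protected. This is illustrative logic only.
changed_files="apps/frontend/deploy.yaml
cluster/nodes.yaml"

violations=0
for f in $changed_files; do
  case "$f" in
    cluster/*) echo "DENY: $f is protected"; violations=$((violations+1)) ;;
    *)         echo "ALLOW: $f" ;;
  esac
done

if [ "$violations" -gt 0 ]; then
  echo "Policy check failed: PR cannot be merged"
fi
```

A failing check leaves the PR unmergeable, so a bad change never reaches the cluster.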
View a demo of the GitOps policy manager:
Install policy-based clusters anywhere
The different infrastructure options for installing Kubernetes include the following:
Customer-managed clusters are typically situations where you want to install Kubernetes on-premise, onto pre-provisioned OpenStack nodes, or in a public cloud such as GCE, AWS or Azure, without using any of the public cloud's managed Kubernetes services.
Rest assured that with WKP you are installing an upstream version of Kubernetes that never diverges from the mainline and that has been integration tested for stability. To manage these clusters, the command line tool WKSctl is provided.
For those who wish to take advantage of a managed Kubernetes service like EKS, you can use eksctl, which lets you create a cluster in EKS with a single command. It sets up an AWS Identity and Access Management role for the control plane, creates the VPC architecture, brings up instances, and deploys the config map so nodes can join the cluster. Support for additional managed Kubernetes services such as AKS and GKE is planned for an upcoming release.
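eksctl can also take a declarative cluster definition, which fits naturally into a Git repo. A minimal ClusterConfig (the cluster and node group names below are placeholders) looks something like this:

```yaml
# Minimal eksctl cluster definition; applied with:
#   eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # placeholder cluster name
  region: us-west-2
nodeGroups:
  - name: ng-1              # placeholder node group name
    instanceType: m5.large
    desiredCapacity: 3
```

Keeping this file in Git means the EKS cluster itself is created and recreated from version-controlled configuration, consistent with the GitOps model described above.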
A typical cluster installation workflow
#1 Cluster installation
A command-line interface configures the cluster for repeatability, and host environment dependencies are installed.
#2 Cluster configuration
Clusters are configured using standard YAML files stored in Git. Configuration changes, for example adding cluster extensions such as Flux for continuous delivery or Prometheus for monitoring, are made as GitOps overrides via pull requests in Git and applied automatically.
#3 Service discovery configuration
Service discovery components are set up: DNS, monitoring, logging and other options.
#4 Multi-tenancy and user authorization
Cluster permissions and namespace designations are restricted by role, using Kubernetes-native RBAC configuration. By default, two roles are created: Cluster Operators, who are only permitted to update cluster components, and Application Developers/SREs, who can only manage workloads outside of the cluster component namespaces.
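As a sketch of how this split maps onto Kubernetes-native RBAC (the namespace, group, and role names below are illustrative, not WKP's built-in definitions), an application-developer role scoped to a workload namespace might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-apps            # illustrative workload namespace
  name: app-developer
rules:
  - apiGroups: ["", "apps"]       # core API group plus apps (Deployments)
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-apps
  name: app-developers
subjects:
  - kind: Group
    name: app-developers          # illustrative group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-developer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced, members of the bound group can manage workloads in `team-apps` but cannot touch cluster components living in other namespaces.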
#5 Apply security patches
Security updates are provided and can be installed automatically in place. WKP can also upgrade underlying images from your OS security team, without bringing down the cluster.
#6 Kubernetes upgrades
Upgrades are provided and installed in place by individuals holding the default Cluster Operators role.
#7 Dashboard configuration
Preconfigured dashboards checked into Git provide the health and status of the newly spun-up cluster. Dashboards also surface changes in state and other drift alerts.
Cluster to machine installation workflow
WKP Automation: instant dashboards and drift alerts
As a user running WKP, you gain a single view onto the health and state of your cluster and its workloads. Dashboards can also be configured to send alerts when either a cluster state or a workload state has changed.
See the video below for a walkthrough of how dashboards and alerts work in WKP:
Use GitOps to automate add-ons
Choose from a set of extensions that the Weaveworks team has integrated with an upstream version of Kubernetes. Update your stack with the tools and components you need without vendor lock-in and without integration headaches or other unnecessary overhead encountered when you go the DIY route. All extensions in WKP are selected from the CNCF ecosystem of cloud native tools and have been fully vetted, tested, and integrated by the Weaveworks team when you use WKP to install them.
Supported cluster components
WKP supports cluster components so you can extend and upgrade your clusters using GitOps. We want you to have supported choices for networking, deployment and app delivery, and security, plus logging, alerting and observability tools and solutions.
For example, you can make GitOps part of a WKP continuous delivery pipeline using your choice of CI tools, image repos and Git implementation. Using WKP for this, you can kick start your development team so that they can push code to production in a fraction of the time.
Since all add-ons are managed with GitOps, removing and updating tools and other components by rolling backward or forward is a simple click away.
What it does
- Built specifically to monitor applications running in containers at scale in Kubernetes.
- Custom dashboards for Kubernetes monitoring with Prometheus.
- NGINX for on-premise installations, and ELB for EC2.
- Dashboard for cluster health, and alerts on configuration drift.
- Open Policy Agent: GitOps policy rules can work with the Open Policy Agent for fine-grained control over your entire stack.
- GitOps policy engine, and the config-as-code generator.
- Flux for CD: GitOps operator for application deployments to Kubernetes.
- Template-based package manager for application deployments.
- Third-party tool integration: Can be easily extended to include your own tools such as logging, and other tooling choices.
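To illustrate the kind of fine-grained control that Open Policy Agent enables (this rule is a generic illustration, not one of WKP's shipped policies), a small Rego admission rule might reject Deployments that pull the mutable `:latest` image tag:

```rego
package kubernetes.admission

# Deny any Deployment container that uses the mutable :latest tag.
deny[msg] {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  endswith(container.image, ":latest")
  msg := sprintf("container %q uses the :latest image tag", [container.name])
}
```

Rules like this live in Git alongside the rest of the cluster configuration, so policy changes go through the same pull-request review as any other change.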
As we’ve shown, the Weave Kubernetes Platform (WKP) provides all the capabilities you need for a complete operational model that allows you to spin up clusters anywhere: in public clouds, on-premise, or on other types of nodes like OpenStack.
If you’d like to talk through how we can help you with your production Kubernetes, contact us for a demo of the Weave Kubernetes Platform.