Kubernetes is revolutionizing how organizations run applications, bringing speed, flexibility and reliability to development and operations teams. By 2022, 75% of global organizations are expected to be running containerized applications in production. That’s a revolution, but how do you select the right technologies so you can benefit safely, and which capabilities really matter?

Weaveworks has been working with customers to implement their Kubernetes platforms for the past five years. We’ve worked with enterprises of all shapes and sizes, across sectors such as finance, e-commerce and travel. Using those experiences, we’ve identified 15 key capabilities that you need to consider when using Kubernetes.

We also compare how these capabilities are reflected in Rancher, Red Hat OpenShift and Weave Kubernetes Platform. 

The GitOps difference

A successful platform is flexible, reproducible and quickly adopted by your team. It must include the right technologies to run your workloads, and it must run where you need it, whether that’s on-premise or in a public cloud. Solving these kinds of problems goes beyond a cluster installer; it takes a complete application platform, and teams need a management framework built on familiar tooling like Git. We call this approach GitOps because it brings developer and operator tooling together into true DevOps.

A stable, fully integrated stack modeled and managed with GitOps avoids one-off clusters and other difficult-to-replicate cluster configurations. With GitOps, teams can deliver predictable platforms across different environments, whether on-premise or in the cloud, and can scale to cluster fleets running many applications across multiple environments.

Configuration models kept in Git make it easy to manage the entire cluster lifecycle and all maintenance via pull request. Regulatory and compliance needs are easily met through Git-based rules and policies that control who can make changes to the configuration.
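As an illustration, here is a minimal sketch of how a GitOps agent such as Flux can be pointed at a configuration repository kept in Git. The repository URL, paths and names are hypothetical, and API versions vary between Flux releases:

  apiVersion: source.toolkit.fluxcd.io/v1
  kind: GitRepository
  metadata:
    name: platform-config
    namespace: flux-system
  spec:
    interval: 1m
    url: https://github.com/example-org/platform-config  # hypothetical configuration repository
    ref:
      branch: main
  ---
  apiVersion: kustomize.toolkit.fluxcd.io/v1
  kind: Kustomization
  metadata:
    name: cluster-config
    namespace: flux-system
  spec:
    interval: 10m
    sourceRef:
      kind: GitRepository
      name: platform-config
    path: ./clusters/prod    # hypothetical path holding this cluster's desired state
    prune: true              # remove objects from the cluster when they are deleted from Git

Every change then flows through a pull request against this repository, and the agent continuously reconciles the cluster against whatever is merged.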

With our GitOps philosophy in mind, we’ve selected the 15 capabilities that are most important for enterprise adoption of Kubernetes. These are inherently opinionated choices, built on our experiences working with Kubernetes in production.

Kubernetes Solutions and Vendors

Although the landscape is large and growing, we selected these platforms because of their maturity in the market, as well as the similarities in how their solutions are architected and delivered to the enterprise.

Rancher 

Rancher describes itself as a complete container management platform that provides an easy on-ramp to Kubernetes. It positions itself as pure open source software and includes a mature user interface, policy and RBAC controls, and a full selection of add-on components from which to build out your platform.

Rancher is focused on the mid-market, so ease of use is one of its most important features. Although its graphical user interface helps teams get up to speed quickly, its underlying architecture does not scale well across a large organization. It also lacks some enterprise features, such as built-in auditing for compliance and other regulatory requirements. And although Rancher presents itself as an API-first solution, its architecture is not based on declarative constructs and cannot be managed with GitOps for consistent configuration management.

Red Hat OpenShift

OpenShift is an integrated container platform aimed at large enterprise users who want a single solution. As a solution from a major enterprise vendor, OpenShift aims to provide all the elements enterprises need; for example, its security controls are extensive, as regulated companies require. The entire system works around an operator framework that makes deployments to the cluster a mostly hands-off affair.

As an integrated platform, OpenShift is inherently complex: it can be complicated to install, and its integrations limit choices to those available through the framework. This lack of flexibility means OpenShift users can’t easily take advantage of other CNCF technologies, and upgrades are reportedly complicated. As an enterprise solution it can also be costly.

Weave Kubernetes Platform (WKP)

WKP is a production-ready Kubernetes platform that builds a complete application platform and provides GitOps management workflows for a modern approach to operations. GitOps forms both the underlying architecture and the developer experience of WKP, simplifying the configuration and management of Kubernetes. WKP is a pure API solution making use of the open Cluster API (CAPI) standard, which allows it to work across multiple cloud environments and on-premise. GitOps ensures that the entire platform, with all of its add-ons, is kept in Git with full version control, capitalizing on all the benefits this provides, from security and auditability through to built-in disaster recovery and reduced MTTR (Mean Time To Recovery).

As WKP is designed for scale and operability, its weakness is that it lacks a graphical user interface for all functions; the GitOps approach means it depends on the command line and Git for many of them. In addition, WKP was designed for enterprise deployments where add-ons are built and operated by a platform team, so it lacks a catalogue from which end users can install their own components.

Key Enterprise Requirements

What are the key considerations when evaluating a Kubernetes platform for enterprise?

1. Infrastructure provisioning 

Out of the box, Kubernetes doesn’t provide a simple way to deploy highly available clusters. This leaves your team either to forge a DIY approach or to go with a more closed, black-box solution.

For enterprise users, we see a lot of value in putting the creation of machines, VMs and other infrastructure under GitOps. This simplifies the installation process and enables automated lifecycle management by keeping all configuration centrally controlled.
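For example, with Cluster API the machines themselves become declarative objects that can live in Git alongside everything else. The following is an abridged sketch; the names and the AWS provider are placeholders:

  apiVersion: cluster.x-k8s.io/v1beta1
  kind: MachineDeployment
  metadata:
    name: prod-workers
  spec:
    clusterName: prod
    replicas: 3                  # scale the worker pool by changing this value in Git
    template:
      spec:
        clusterName: prod
        version: v1.27.3
        bootstrap:
          configRef:
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            kind: KubeadmConfigTemplate
            name: prod-workers
        infrastructureRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: AWSMachineTemplate   # placeholder provider; equivalents exist for vSphere, Azure, etc.
          name: prod-workers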

2. Cluster installation on-premise and across clouds

Your applications need to run in the cloud and on-premise, which means your Kubernetes platform needs to be available in both settings, either as a capability that an enterprise can install itself or as something that works with the public clouds’ managed Kubernetes offerings.

We’ve found that using configuration management to define the Kubernetes platform is critical to reducing operational overhead and speeding up feature delivery. For enterprise deployments it’s also important to consider high availability and disaster recovery that are repeatable across multiple backends.
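One way this shows up in practice is the Cluster API model mentioned above, where the same cluster definition references a pluggable infrastructure backend. The names below are illustrative:

  apiVersion: cluster.x-k8s.io/v1beta1
  kind: Cluster
  metadata:
    name: prod
  spec:
    clusterNetwork:
      pods:
        cidrBlocks: ["192.168.0.0/16"]
    controlPlaneRef:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlane
      name: prod-control-plane
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSCluster          # swap for VSphereCluster, AzureCluster, etc. without changing the rest
      name: prod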

3. Choice of Operating System

Most organizations have a variety of Linux distributions in use across teams. A platform should therefore be able to work with different distributions so that teams can choose the best OS for their use case.

4. Curated cluster components 

Most development and platform teams have a set of tools that they want to use in their day-to-day activities. Whether that’s a particular logging or tracing tool, a continuous integration system or dashboards, it is important to offer a choice of vetted tools that can be easily integrated with the platform without compromising security or other regulatory requirements. Ideally, your platform includes a system for deploying and managing the standard cluster components that should be on every cluster.
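With a GitOps toolchain, a curated add-on can be expressed declaratively, for example as a Flux HelmRelease. This is only a sketch: the chart, version range and namespaces are illustrative, and the API version depends on your Flux release:

  apiVersion: helm.toolkit.fluxcd.io/v2
  kind: HelmRelease
  metadata:
    name: cert-manager
    namespace: cert-manager
  spec:
    interval: 30m
    chart:
      spec:
        chart: cert-manager
        version: "1.x"             # vetted version range approved by the platform team
        sourceRef:
          kind: HelmRepository
          name: jetstack           # assumes a HelmRepository object pointing at the Jetstack charts
          namespace: flux-system
    values:
      installCRDs: true            # chart value; adjust to whatever your vetted chart expects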

5. Platform upgrades and updates

New versions of Kubernetes are released every three months, and enterprise customers require seamless upgrades that maintain security and reliability. Kubernetes and its ecosystem of projects give you flexibility and technology choices, but the downside is that it’s easy to end up with unmaintainable “snowflake” clusters.

The benefit of GitOps is that users can upgrade easily, and since all elements are under configuration management, it’s simple to roll back to the previous known-good release.
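As a sketch of what that looks like with Cluster API, a control-plane upgrade becomes a one-line change in Git, and rolling back is a revert of the same commit. Names and versions below are placeholders:

  apiVersion: controlplane.cluster.x-k8s.io/v1beta1
  kind: KubeadmControlPlane
  metadata:
    name: prod-control-plane
  spec:
    replicas: 3
    version: v1.27.3               # bump this field in a pull request to trigger a rolling upgrade
    machineTemplate:
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: prod-control-plane
    kubeadmConfigSpec: {}          # bootstrap settings omitted for brevity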

6. Composable models

One of the biggest challenges for enterprises transitioning to cloud native development is delivering Kubernetes platforms and environments wherever they are needed. Deploying and managing a large number of applications and clusters adds complexity and increases both operational overhead and the chances of errors.

We’ve found it’s important to simplify configuration and management of Kubernetes platforms with a model-based approach that defines the infrastructure as well as all required configuration for any cluster components. Each model lets a team easily deploy new clusters or applications using predefined, standard configurations. And since the model itself is declarative and kept in Git, teams are able to deliver and manage reliable, predictable platforms across many different environments, whether on-premise, in the cloud or as cluster fleets across multiple clouds.
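A common way to express such a model is a shared base plus thin per-environment overlays, for example with Kustomize. The directory names here are hypothetical:

  # clusters/prod/kustomization.yaml
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  resources:
    - ../../base                   # the shared model: cluster definition, add-ons, policies
  patches:
    - path: machine-size.yaml      # per-environment overrides, e.g. replica counts or machine types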

7. Security and policy

Large organizations need efficient security and policies that can be easily enforced: ideally a secure setup out of the box, with fine-grained cluster and workload change control. Extensible RBAC roles and permissions are important so that each user has clear capabilities and there is clarity over which workloads and clusters they are allowed to change.

With our GitOps approach, we keep both policies and roles under Git so they can be tracked. Each change to the cluster is checked into Git, and the policy manager checks whether the change is acceptable, providing and enforcing feedback in real time. As a best practice, policy should be controlled from a central place with full audit trails.
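At the Kubernetes level, the roles themselves are plain YAML that can be reviewed and versioned like any other change. The team and namespace names in this sketch are made up:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: deployer
    namespace: team-a
  rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "watch", "update", "patch"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: team-a-deployers
    namespace: team-a
  subjects:
    - kind: Group
      name: team-a-developers     # group supplied by your identity provider
      apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: deployer
    apiGroup: rbac.authorization.k8s.io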

8. Cluster monitoring and alerting

A Kubernetes platform that is critical for operating applications must be monitored 24/7. The benefit of a built-in monitoring platform is that it’s immediately available and tailored to the platform’s requirements, with the appropriate dashboards and alerts. Configuring dashboards and metrics for clusters can be complex, and getting a head start with predefined templates managed in Git means that teams are not working from scratch. Preconfigured dashboards also deliver a consistent experience for managing clusters across the enterprise.
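For example, when Prometheus Operator is part of the monitoring stack, an alert is just another manifest that can be templated and kept in Git. The rule below is a sketch and assumes kube-state-metrics is installed:

  apiVersion: monitoring.coreos.com/v1
  kind: PrometheusRule
  metadata:
    name: cluster-alerts
    namespace: monitoring
  spec:
    groups:
      - name: node-health
        rules:
          - alert: NodeNotReady
            expr: kube_node_status_condition{condition="Ready",status="true"} == 0
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "Node {{ $labels.node }} has not been Ready for 5 minutes"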

9. Networking and ingress

Network integration and ingress are crucial components of running Kubernetes at scale. A Software Defined Network (SDN) is typically added to the cluster for internal pod networking and also to define policies.  Enterprise networks are complex environments so any enterprise Kubernetes should support a variety of options to reflect those requirements. Different SDNs can have varying capabilities, such as policy support, making it imperative that an SDN can be easily swapped when necessary. The same is true of an ingress endpoint: it needs to be flexible in terms of being able to connect to different API gateways, and load balancers.
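Network policy itself is declarative and portable across conforming SDNs, which is part of why swappability matters. The application labels and namespace in this sketch are illustrative:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend-to-api
    namespace: shop
  spec:
    podSelector:
      matchLabels:
        app: api
    policyTypes: ["Ingress"]
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app: frontend     # only frontend pods may reach the API pods
        ports:
          - protocol: TCP
            port: 8080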

10. GitOps multi-cluster fleet management

Enterprises need to deploy many clusters in different environments and for different teams. The benefit of using full configuration management is the ability to treat clusters ‘as cattle not pets’. This allows for faster development cycles with more features delivered to your end users. 

11. Multi-tenancy: teams and tenants

Multi-tenancy means multiple teams operating in the same cluster with partitioning and role-based access control. This enables teams to share resources without breaking security boundaries and reduces costs. At the top of most organizations’ list is the ability to manage namespaces and apply the appropriate security controls. Even more control can be gained by managing namespaces, and multi-tenancy in general, with GitOps workflows and team workspaces kept in Git.
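A minimal team workspace kept in Git might look like the following sketch, pairing a namespace with a resource quota; the names and limits are arbitrary, and the RBAC binding from the security section would sit alongside it:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: team-a
    labels:
      tenant: team-a
  ---
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: team-a-quota
    namespace: team-a
  spec:
    hard:
      requests.cpu: "8"          # caps what the tenant can request from shared capacity
      requests.memory: 16Gi
      pods: "50"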

12. Built-in GitOps application deployment pipelines

Kubernetes platforms that provide continuous delivery components lower the barrier to entry and enable speed. With continuous delivery in place, your team can deploy changes throughout the day instead of monthly or quarterly, and development teams can take a change all the way from source code to production. Even more importantly, with GitOps they can revert and back out of a change just as easily.
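In a GitOps pipeline the deployable unit is simply the manifest in Git: promoting a release is a pull request that bumps the image tag, and rolling back is reverting that commit. The application, registry and tag below are hypothetical:

  # apps/checkout/deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: checkout
    namespace: shop
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: checkout
    template:
      metadata:
        labels:
          app: checkout
      spec:
        containers:
          - name: checkout
            image: registry.example.com/shop/checkout:1.4.2   # bump via pull request; revert the commit to roll back
            ports:
              - containerPort: 8080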

13. Application observability 

To maintain deployment velocity, you will need tools that provide instantaneous feedback and control so that when something goes wrong, you can easily roll back. Visibility into the cluster after a deployment, together with that feedback and the ability to roll back seamlessly, is also crucial for teams developing cloud native applications.

14. Progressive deployments

When it comes to experimentation and delivering features to customers in a timely manner, it’s imperative that your platform supports progressive delivery strategies such as blue/green and canary deployments. Rolling out these deployments and gaining their benefits requires a framework that lets you quickly measure the results and react to them by either rolling forward or rolling back.
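A tool such as Flagger expresses this as a declarative canary analysis. The sketch below is illustrative: the target, port and thresholds are placeholders, and the metric assumes a service mesh or ingress controller that reports request success rates:

  apiVersion: flagger.app/v1beta1
  kind: Canary
  metadata:
    name: checkout
    namespace: shop
  spec:
    targetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: checkout
    service:
      port: 8080
    analysis:
      interval: 1m               # how often traffic is shifted and metrics are checked
      threshold: 5               # failed checks before the release is rolled back
      maxWeight: 50
      stepWeight: 10
      metrics:
        - name: request-success-rate
          thresholdRange:
            min: 99              # roll back if the success rate drops below 99%
          interval: 1m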

15. Production Grade Service Level Agreement

Most organizations run applications 24/7, so a bulletproof SLA is a must. IT needs to ensure that the Kubernetes platform is available to developers as well as end users in order to support the business requirements.

Enterprise Kubernetes capabilities scorecard

Feature                          WKP    Rancher    Red Hat
Infrastructure provisioning      ◐      ◔          ◐
Cluster installation             ●      ●          ●
Choice of OS                     ◕      ●          ⊗
Curated cluster components       ◐      ◕          ◕
Platform upgrades and updates


To view the entire scorecard and a detailed feature comparison table for all platforms, please register to download. 
