Is the Kubernetes adoption rate the fastest in the history of open source software? Quite possibly. According to the CNCF, Kubernetes is now the second largest open source project in the world, just behind Linux.
Since the introduction of Kubernetes, it's safe to say that almost all of the other orchestrators have either become irrelevant or taken a back seat to Kubernetes. Just over four years later, every major public cloud provider has a managed Kubernetes service or is in the process of developing one.
As the first Cloud Native Computing Foundation (CNCF) project, Kubernetes became popular for a few key reasons:
- Kubernetes offers portability, faster deployment times and scalability that allow companies to rapidly grow without having to re-architect their infrastructure. And because Kubernetes has fundamentally changed the way development is done, teams can also scale much faster than they could in the past.
- Because it’s open source, you can take advantage of the vast ecosystem of open source tools designed specifically to work with Kubernetes.
- It was developed and is maintained by Google, which gives it instant credibility.
There are of course many other factors that led to this epic uptake of Kubernetes, but these three are at the top of the list for most organizations looking to make the migration.
The cloud native landscape is a busy one. While the list of new technologies can be overwhelming, developers and operators also have more choice than ever before in which technologies they incorporate into their development pipelines and infrastructure. Many companies have been doing just that: the latest CNCF survey shows use of these cloud native tools has grown by a whopping 200%.
Below I discuss five key projects that will help you complete the Kubernetes feature set and scale up your business.
#1 Prometheus - Monitoring
Prometheus was the second project to reach graduated status in the CNCF. Designed specifically to monitor dynamic environments like Kubernetes, it has become a de facto standard for monitoring applications and infrastructure running in Kubernetes. Key features include:
- Kubernetes integration—supports service discovery and monitoring of dynamically scheduled services.
- Pull based metrics—a pull-based monitoring system means that your services don’t have to know where your monitoring system is located.
- Flexible multi-dimensional data model—a labels-based time-series database lets you diagnose a problem when it occurred without needing to independently recreate the issue outside of the system.
- Built-in alert manager—alerts and notifies via a number of methods based on rules that you specify.
- Supports whitebox and blackbox monitoring—provides extensive instrumentation client libraries and exporters to support both whitebox and blackbox monitoring.
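To illustrate the Kubernetes integration and pull model, here is a sketch of a Prometheus scrape configuration that discovers pods through the Kubernetes API (the job name and the `prometheus.io/scrape` annotation convention are common practice, but adapt them to your own setup):

```yaml
# prometheus.yml (fragment): discover and scrape annotated pods in the cluster
scrape_configs:
  - job_name: kubernetes-pods          # illustrative job name
    kubernetes_sd_configs:
      - role: pod                      # discover every pod via the Kubernetes API
    relabel_configs:
      # only scrape pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # carry the pod's namespace and name through as metric labels
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```

Because discovery is dynamic, pods that come and go under the Kubernetes scheduler are picked up automatically, and the services themselves never need to know where Prometheus runs.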
Read more about Prometheus, how to integrate it with your platform, and whether Prometheus as a service might be the right solution for you in “Monitoring Kubernetes with Prometheus - What you need to know”.
#2 Istio - Service Mesh
Istio is a service mesh that provides some of the missing components needed to successfully run Kubernetes in production, such as the ability to easily debug microservices and to apply advanced deployment strategies like canary releases. Istio manages and routes encrypted network traffic, load balances across microservices, enforces access policies, verifies service identity, provides tracing, and aggregates service-to-service telemetry; it can also be installed and configured with Helm.
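As an example of the canary strategy mentioned above, Istio expresses traffic splits declaratively. The sketch below assumes a hypothetical `reviews` service with two versions defined as subsets in a DestinationRule; it routes 10% of traffic to the canary:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews            # hypothetical service name
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1     # stable version keeps 90% of traffic
          weight: 90
        - destination:
            host: reviews
            subset: v2     # canary version receives 10%
          weight: 10
```

Shifting more traffic to v2 is then just a matter of editing the weights, which makes progressive rollouts easy to automate.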
Istio is neither the only service mesh nor the first to market; Linkerd, for example, predates it, and Istio itself builds on the Envoy proxy for its data plane. Many of the public cloud providers are in the process of integrating a service mesh into their managed Kubernetes solutions.
Since Istio is completely declarative, it also works well in a GitOps workflow. You can read more about that in our blog post and GitOps with Istio tutorial, GitOps for Istio - Manage Istio Config like Code.
#3 Helm - Package Manager for Continuous Deployments
One of the goals of Helm is repeatable deployments without the overhead and complication of keeping dependencies up to date and consistent. Helm is a package manager for Kubernetes that works much like other package managers such as apt, yum or npm. Helm's 'charts' define a package of Kubernetes resources along with any dependencies your app needs.

A developer calls a specific chart from the command line; Helm generates the YAML manifests for the Kubernetes deployment and applies them to the cluster. Since Helm is open source, many community charts are available with standard configurations for common application services. Open source charts can be downloaded from the Kubeapps Hub and amended for your own organization.

The advantages of Helm are that it makes deploying complex applications more portable, supports automatic rollbacks, and follows a pattern that is already familiar to developers. The drawbacks are that Helm is complex to set up, and keeping secrets secure across your pipeline can be difficult to configure.
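As a sketch of what a chart looks like, the fragment below shows the metadata and default values for a hypothetical `myapp` chart (both file names are Helm conventions; the chart and image names are made up):

```yaml
# Chart.yaml: metadata describing the chart itself
apiVersion: v1
name: myapp
version: 0.1.0
description: A Helm chart packaging myapp's Kubernetes resources
---
# values.yaml: defaults the templates reference, which a developer
# can override at install time, e.g. `helm install --set replicaCount=3 ./myapp`
replicaCount: 2
image:
  repository: mycompany/myapp
  tag: 1.0.0
```

The templates in the chart's `templates/` directory interpolate these values into standard Kubernetes manifests, which is what makes the same chart reusable across environments.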
Find out more about Helm and other CI/CD tools for Kubernetes in “CICD for Kubernetes”.
#4 Weave Flux - GitOps and Continuous Deployments
GitOps allows developers to manage both infrastructure provisioning and software deployments and rollbacks through pull requests. With GitOps, developers use Git as the source of truth for the desired state of their entire application. When the source of truth differs from what’s running in the cluster, the cluster gets automatically synchronized with what’s kept in Git.
Weave Flux is an open source tool that ensures the state of a cluster matches the declarative configuration kept in Git (the source of truth). Flux implements a Kubernetes operator that is deployed to the cluster. When the operator detects that the cluster state is out of sync with what's in Git, it triggers a deployment to bring the cluster back in line.
Flux also monitors your image repositories; when it detects a new image, it updates the manifests in Git and then rolls the change out to the cluster.
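This automation is driven by annotations on the manifests Flux manages. A hedged sketch follows: the Deployment and image names are hypothetical, and the `flux.weave.works/*` annotation keys are the ones Flux watched at the time of writing:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                                # hypothetical workload
  annotations:
    flux.weave.works/automated: "true"       # let Flux deploy new images automatically
    flux.weave.works/tag.myapp: semver:~1.0  # only follow 1.x releases for this container
spec:
  replicas: 2
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
        - name: myapp
          image: mycompany/myapp:1.0.0       # Flux updates this tag in Git, then syncs
```

Because Flux commits the tag change to Git before applying it, the repository history stays a complete record of what ran in the cluster and when.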
The benefits are:
- Your CI system does not maintain credentials to the cluster or to the image registries.
- Git maintains an audit log that can be used to meet SOC 2 compliance.
- Mean Time to Recovery is reduced: because the desired state is kept in Git, you can quickly recover from a disaster if the cluster melts down.
Weaveworks has put together some resources called GitOps: What You Need to Know if you want to learn more.
#5 OpenFaaS Operator - Serverless
Serverless functions enable developers to create self-contained bits of code and deploy them to the cloud without needing to maintain any infrastructure; instead, the cloud provider dynamically allocates resources to the function as needed. All of the major cloud providers support serverless functions, but not all of them provide a framework for running those functions in Kubernetes.
The OpenFaaS Operator is a custom implementation that allows you to build serverless functions that run in Kubernetes. It also exports metrics, so any functions deployed to Kubernetes can be observed with Prometheus.
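With the operator installed, a function becomes just another Kubernetes object you can manage with kubectl. A sketch of a Function custom resource is below; the function name and image are hypothetical, and the API version assumes the `openfaas.com/v1alpha2` group the operator registered at the time of writing:

```yaml
apiVersion: openfaas.com/v1alpha2
kind: Function
metadata:
  name: hello                      # hypothetical function name
  namespace: openfaas-fn           # namespace the operator watches for functions
spec:
  name: hello
  image: mycompany/hello:latest    # container image built from the function's handler
  labels:
    com.openfaas.scale.min: "1"    # scaling bounds handled by OpenFaaS
    com.openfaas.scale.max: "5"
```

Applying this manifest causes the operator to create the underlying Deployment and Service, so the function's lifecycle is managed the same way as any other workload in the cluster.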
Helm charts can be used in combination with Weave Cloud to build continuous deployment pipelines. You can also take advantage of Weave Cloud's built-in observability dashboards to monitor your OpenFaaS workloads.
Try it out with this tutorial, Getting Started with the OpenFaaS Kubernetes Operator on EKS.
In this post we discussed some key projects and technologies that help complete the Kubernetes feature set: Prometheus for monitoring, Istio as a service mesh, Helm as a package manager, Weave Flux or Weave Cloud for continuous delivery, and the OpenFaaS Operator for serverless.
Weaveworks now offers Production Grade Kubernetes Support for enterprises. For the past three years, Kubernetes has been powering Weave Cloud, our operations-as-a-service offering, so we couldn't be more excited to share our knowledge and help teams embrace the benefits of cloud native tooling.
Kubernetes enables working as a high velocity team, which means you can accelerate feature development and reduce operational complexity. But the gap between theory and practice can be very wide, which is why we've focused on creating GitOps workflows, building from our own experience of running Kubernetes in production. Our approach combines developer-centric tooling (e.g. Git) with tested practices to help you install, set up, operate and upgrade Kubernetes. Contact us for more details.