Progressive Delivery across multiple clusters & cloud using Weave Kubernetes Platform, Flagger & Linkerd

By Weaveworks
April 15, 2021

In this talk, Jason Morgan of Buoyant and Paul Curtis of Weaveworks introduce the concept of progressive delivery across multiple clusters and environments. With technologies such as Linkerd and the Weave Kubernetes Platform, teams can shift traffic to new versions on Kubernetes progressively, in a controlled and secure way. Best-in-class GitOps workflows ensure developer-friendly operations for Kubernetes while providing high availability and disaster recovery across multiple backends.


As software delivery teams look for ways to speed up deployments while keeping them safe and reliable in production, they are turning to progressive delivery as a key enabler. Progressive delivery lets teams minimize the blast radius of failures and issues during a deployment. It gives them real-time feedback on the quality of their releases and the confidence to deploy faster. In this post, we look at an example of how you can use modern tooling like Flagger and Linkerd to orchestrate progressive delivery strategies such as canary releasing.

About Flagger

Flagger is part of the Flux family of GitOps projects. It is purpose-built to execute progressive delivery of applications running on Kubernetes. Flagger can implement canary releases, blue-green deployments, and A/B testing. It works with service meshes such as Linkerd and Istio.

In this demo, we use Flagger in combination with Linkerd - both open source CNCF projects - to implement an automated canary release.

About Linkerd

Linkerd is a lightweight service mesh that is secure and production-ready right from the moment it is installed. Linkerd inserts proxies between application services and these proxies control the execution of network requests. Linkerd provides observability metrics like success rate, latency, and throughput. It builds reliability with its easy-to-configure rules for retries, timeouts, and load balancing. It bolsters security by managing the creation and rotation of mTLS certificates.

There are two key parts to Linkerd in this demo - the Linkerd control plane and Linkerd Viz. The control plane is, not surprisingly, where you operate Linkerd from, while Viz is the dashboard UI, which links out to Grafana for further detail.

The setup

As part of this demo, two clusters are involved in the deployment - a Dev and a Prod cluster. The Dev cluster (operated by Jason Morgan) runs K3s on a local machine, while the Prod cluster (operated by Paul Curtis) is a larger cluster running on AWS EKS and managed by the Weave Kubernetes Platform (WKP). WKP is designed to manage applications and operations by following GitOps principles. In this demo we look at Multitenant Team Workspaces, a feature that enables GitOps across multiple namespaces on the same cluster. Team Workspaces simplify the management and portability of applications, allowing teams to operate across separate environments on a single cluster with security controls that give each team control of its own tenant.


Morgan starts by installing Flux and Linkerd on the Dev cluster. He's already got a simple application - an NGINX frontend in front of the 'podinfo' container - and plans to deploy an update to the application that simply changes the color from blue to green.
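The post doesn't show the install steps in detail, but with a default setup they look roughly like the sketch below (the GitHub owner, repository, and path flags are placeholders, not values from the demo):

```shell
# Install Flux and point it at the Git repository holding the cluster config
flux bootstrap github --owner=<your-org> --repository=<fleet-repo> --path=clusters/dev

# Install the Linkerd control plane, then the Viz dashboard extension
linkerd install | kubectl apply -f -
linkerd check                              # wait until the control plane is ready
linkerd viz install | kubectl apply -f -
```

These commands assume the `flux`, `linkerd`, and `kubectl` CLIs are installed and pointed at the Dev cluster.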

Flagger's role in Progressive delivery (Canary releasing)

Morgan already has a Git repository with instructions for how to carry out the canary release. The file includes details such as the time interval, threshold, stepWeight, and maxWeight. These parameters can be easily tweaked to change the rules that govern the canary release - all it takes is a simple edit to the file in Git. Morgan initiates the deployment by reviewing his changes and creating a pull request.
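The post doesn't reproduce the file, but a minimal Flagger Canary resource for a Linkerd-meshed workload looks roughly like this (names, namespace, and values are illustrative, following the Flagger docs, not copied from the demo):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # The deployment Flagger watches for new versions
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 30s      # how often traffic is shifted and metrics are checked
    threshold: 5       # failed checks before the canary is rolled back
    stepWeight: 10     # traffic percentage added at each step
    maxWeight: 50      # stop shifting once the canary receives this much
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
```

Editing `stepWeight` or `interval` in Git is all it takes to make the rollout more or less aggressive.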

Flagger notices the change and the related instructions for the canary release. Based on these, Flagger automatically creates a new deployment called 'podinfo-primary'. From the Prod side, it's a simple 'Merge' to approve the suggested changes; there as well, Flagger orchestrates the canary release process.

Flagger enables you to have different deployment strategies for Dev and Prod. For example, you can run a more rigorous list of checks for a deployment on a Dev cluster than on a Prod cluster. Beyond canary, Flagger also supports blue-green and A/B deployments. Flagger automatically creates the new workloads and terminates the old ones, which greatly simplifies an otherwise complex deployment process.
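Per the Flagger docs, switching the same Canary resource from a canary to a blue-green style rollout is mostly a matter of replacing the weight stepping with a fixed number of iterations (values here are illustrative):

```yaml
  analysis:
    interval: 30s
    threshold: 5
    iterations: 10   # run the checks 10 times against the new version,
                     # then promote it in a single traffic switch instead
                     # of gradually shifting weights
```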

Linkerd, Prometheus, and Grafana - The monitoring trio

Linkerd Viz (in the image below) shows the various parts of the system and how they talk to each other. As you can see, Prometheus is the central point for all vital metrics.


From the Linkerd Viz interface you can jump to the Grafana dashboard for any metric you see. This is extremely convenient and helps you dive deeper into any metric.

While the deployment is happening, Linkerd lets you look at the traffic split between the two instances in real time - you just use the 'ts' (traffic split) command. You can also view this in a Grafana dashboard. Further, Prometheus lets you set up any custom metrics you may need.
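Assuming the Viz extension is installed and the demo's 'test' namespace, watching the split from the CLI looks like this:

```shell
# Show the live traffic split between the primary and canary services
linkerd viz stat ts -n test

# Watch the per-deployment golden metrics (success rate, RPS, latency)
linkerd viz stat deploy -n test
```

Both commands require a cluster with Linkerd and its Viz extension running.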

Flagger queries the Prometheus instance that ships with Linkerd Viz for the metrics it needs, and checks them against the canary rules during the deployment. By combining Linkerd, Prometheus, and Grafana you can gain deep visibility and greater control over your progressive delivery.

The role of GitOps in multicloud deployments

GitOps is crucial to execute this type of progressive delivery with ease. It acts as the bridge between Dev and Prod. The key rule to follow with GitOps is to have everything versioned in Git. This includes the instructions for the canary release.

In the demo, the Dev cluster runs on a local machine, whereas the Prod cluster runs on AWS EKS. Morgan initiates a change from a Git repository as a pull request, and Curtis approves the change with a merge. The important thing to notice is that the developer doesn't need to talk to the cluster directly, but can initiate the deployment right from Git.

This process would be very difficult to carry out manually: it would involve constant re-balancing of traffic and multiple manual deployments. What Flagger, with the power of GitOps, is able to do is completely automate the process based on a set of rules. The Dev team can sit back, monitor the progressive delivery happening in real time, and tweak things as needed - the deployment itself runs on autopilot.

Rolling back is equally easy: you simply revert the change in Git. Along with this ease of use, GitOps also gives you built-in versioning, change history, security, and an audit trail.
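The mechanics are plain Git. This self-contained sketch (a throwaway repo with a made-up config file, not the demo's actual repository) shows how reverting the offending commit restores the previous state, which Flux would then reconcile onto the cluster:

```shell
set -e
# Create a throwaway repo standing in for the GitOps config repository
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo

echo "replicas: 2" > canary.yaml
git add canary.yaml && git commit -qm "initial release"

echo "replicas: 3" > canary.yaml
git commit -qam "bad release"

# The GitOps rollback: an inverse commit, not a manual cluster change
git revert --no-edit HEAD >/dev/null
cat canary.yaml   # prints "replicas: 2" again
```

After the push, Flux sees the new commit and drives the cluster back to the previous state; no one touches the cluster by hand.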

Watch the demo

This blog post is a teaser, and calls out the key takeaways from the demo video shown below. To see it all in action, we encourage you to watch the entire demo.

If you are eager to get started on advanced deployment strategies with Weave Kubernetes Platform and GitOps and your service mesh of choice, contact us for a demo!

Related posts

Meet the Weaveworks Team at GitOpsCon Europe (Virtual Event)

KubeCon Chicago 2023 Recap: Cloud-native Scale & Growth with GitOps

GitOps Goes Mainstream - Flux CD Boasts Largest Ecosystem
