With the recent release of Linkerd 2.4, we are pleased to announce that you can now use Flagger with Linkerd to automate canary releases and other progressive delivery strategies for your Kubernetes workloads.

Linkerd implements the Service Mesh Interface (SMI) Traffic Split API, which allows Flagger to control the traffic split between two versions of the same application. For apps running on the service mesh, you can configure Flagger with a custom resource to automate the analysis and promotion of a canary deployment.
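To illustrate, this is roughly the kind of TrafficSplit resource that Flagger creates and adjusts on your behalf during an analysis (the service names and weights below are placeholders, and the API version reflects the SMI spec at the time of writing):

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: podinfo
  namespace: test
spec:
  # The apex service that clients address.
  service: podinfo
  backends:
  # The stable version keeps most of the traffic.
  - service: podinfo-primary
    weight: 90
  # The new version under analysis receives a small share,
  # which Flagger increases step by step.
  - service: podinfo-canary
    weight: 10
```

You never edit this resource by hand; Flagger shifts the weights as the canary analysis progresses and deletes the canary backend once the rollout finishes.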

[Image: flagger-linkerd-traffic-split.png — Flagger driving a Linkerd traffic split]

When you deploy a new version of an app, Flagger gradually shifts traffic to the canary while measuring the request success rate and the average response duration. Based on an analysis of these Linkerd-provided metrics, the canary deployment is either promoted or rolled back. With custom Prometheus metrics, acceptance tests, and load tests, you can extend the canary analysis to further harden the validation of your release process.

Linkerd and Flagger provide a way to limit the blast radius of a failed deployment by automating a rollback the moment a problem is detected. Neither the canary deployment pattern nor traffic splitting is a new concept, but what Kubernetes and service meshes offer is a safer app delivery process through a declarative definition.

Specify Canary Deployment Parameters in YAML

Flagger adds a new kind to the Kubernetes API called Canary. A Canary resource defines, in a Kubernetes YAML manifest, how an application is exposed on the mesh, how the canary analysis runs, and for how long.

The following is an example of a canary deployment where the analysis runs periodically until it reaches the maximum traffic weight or the failed checks threshold.

[Image: flagger-yaml-definition.png — example Canary resource definition]

If it succeeds, the above analysis will run for 25 minutes, validating the HTTP metrics every minute (a success rate of at least 99% and a maximum latency of 500ms).
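Since the definition above appears as an image, here is a sketch of what such a Canary resource might look like, assuming the Flagger v1alpha3 API of this release (the app name, namespace, and port are placeholders; with a one-minute interval, a 2% step weight, and a 50% maximum weight, the analysis takes 25 iterations, i.e. 25 minutes):

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # The deployment that Flagger manages.
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # How the app is exposed on the mesh.
  service:
    port: 9898
  canaryAnalysis:
    # Run the checks every minute.
    interval: 1m
    # Roll back after this many failed checks.
    threshold: 10
    # Shift traffic in 2% steps up to a 50% maximum.
    maxWeight: 50
    stepWeight: 2
    # Linkerd-provided metrics used to validate the canary.
    metrics:
    - name: request-success-rate
      threshold: 99
      interval: 1m
    - name: request-duration
      threshold: 500
      interval: 1m
```

If any metric check fails more often than the threshold allows, Flagger scales the canary to zero and routes all traffic back to the primary.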

When the canary analysis starts, Flagger calls the pre-rollout hooks before routing traffic to the canary. If a Helm test fails, Flagger retries it until the analysis threshold is reached, at which point the canary is rolled back.
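A pre-rollout hook running a Helm test can be declared as a webhook in the analysis section; in this sketch the helm-tester URL and the test command are illustrative:

```yaml
  canaryAnalysis:
    webhooks:
    # Runs before any traffic is routed to the canary.
    - name: smoke-test
      type: pre-rollout
      # Hypothetical in-cluster helm-tester endpoint.
      url: http://flagger-helmtester.kube-system/
      timeout: 3m
      metadata:
        type: "helm"
        cmd: "test podinfo --cleanup"
```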

For applications exposed outside of the mesh, you can use Linkerd with NGINX or with SuperGloo. Flagger will then use the ingress controller’s routing capabilities to drive the canary traffic and Linkerd metrics to measure the impact.
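For the ingress case, the Canary resource points Flagger at the ingress object instead of relying on mesh routing alone. A rough sketch for NGINX, assuming the `ingressRef` field of Flagger's NGINX support at the time (the ingress name is a placeholder):

```yaml
spec:
  # Use the ingress controller for traffic routing.
  provider: nginx
  # The ingress that exposes the app outside the mesh.
  ingressRef:
    apiVersion: extensions/v1beta1
    kind: Ingress
    name: podinfo
```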

Linkerd and Flagger Tutorial

A tutorial on canary deployments with Linkerd and Flagger can be found at docs.flagger.app.

With Flagger you can automate application analysis for the following deployment strategies:

  • Canary (progressive traffic shifting)
  • A/B Testing (HTTP headers and cookies traffic routing)
  • Blue/Green (traffic switch)

Since the canary deployment is declarative, you can define your delivery process with Kubernetes objects and operate on them with Git. With GitOps, the desired state of both your infrastructure and your workloads is kept in a repository, and any change to the system must be committed to source control before being applied to the cluster.

In a future article we'll explore how you can use GitOps to manage the progressive delivery for applications running on Kubernetes and Linkerd.

Listen to Stefan Prodan demo this on July 31st at the Linkerd Online Community Meetup.