Automated Canary Management on Kubernetes with Flagger, Istio and GitOps Pipelines

By Stefan Prodan
May 17, 2019

Flagger automates traffic routing between canary deployments, reducing the risk of app downtime. This allows your team to confidently test and roll out innovative new features more frequently. This tutorial walks you through setting up Istio on a Kubernetes cluster and automating canary and A/B deployments with GitOps pipelines.


Flagger is a Kubernetes operator that automates traffic routing for advanced deployment strategies like canary releases and A/B testing. Since Flagger manages the traffic routing between canary deployments, the risk of app downtime is reduced or completely eliminated. This allows your team to confidently test and roll out innovative new features more frequently.

Flagger can be used with a service mesh like Istio or AWS App Mesh. But if you don't want to add a service mesh to your infrastructure at this time, you can also use an ingress controller like NGINX to manage traffic.

This tutorial walks you through setting up Istio on a Kubernetes cluster and automating canary deployments with GitOps pipelines.

GitOps Pipeline for Canary Deployments with Flagger

The diagram below shows the overall architecture and how the components fit together:

  • Istio service mesh: 
    • manages the traffic flows between microservices, enforcing access policies and aggregating telemetry data
  • Prometheus monitoring system: 
    • time series database that collects and stores the service mesh metrics
  • Flux GitOps operator:
    • syncs YAMLs and Helm charts between git and clusters
    • scans container registries and deploys new images
  • Helm Operator CRD controller:
    • automates Helm chart releases
  • Flagger progressive delivery operator:
    • automates the promotion of canary deployments using Istio routing for traffic shifting and Prometheus metrics for canary analysis
  • Weave Cloud:
    • comes with Weave Flux, Prometheus and Flagger all bundled together



You'll need a Kubernetes cluster v1.11 or newer with LoadBalancer support, `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` admission controllers enabled. For testing purposes you can use Minikube with two CPUs and 4GB of memory.
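For example, you could start a local test cluster with the following command (the Kubernetes version flag is illustrative):

minikube start --cpus 2 --memory 4096 --kubernetes-version v1.14.1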

Install the Flux CLI, Helm and Tiller:

brew install fluxctl kubernetes-helm

kubectl -n kube-system create sa tiller

kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller

helm init --service-account tiller --wait

Fork the gitops-istio repository on GitHub, then clone your fork:

git clone https://github.com/<YOUR-USERNAME>/gitops-istio
cd gitops-istio

1. Bootstrap the Cluster

Install Weave Flux and its Helm Operator by specifying your fork URL:
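A minimal sketch using the Weaveworks Flux Helm chart of that era (the chart location and flag names assume the 2019 chart defaults):

helm repo add weaveworks https://weaveworks.github.io/flux

helm upgrade -i flux weaveworks/flux \
--namespace flux \
--set helmOperator.create=true \
--set helmOperator.createCRD=true \
--set git.url=git@github.com:<YOUR-USERNAME>/gitops-istio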


At startup, Flux generates an SSH key and logs the public key.
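You can print the public key with fluxctl (assuming Flux runs in the flux namespace):

fluxctl identity --k8s-fwd-ns flux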

In order to sync your cluster state with git, you need to copy the public key and create a deploy key with write access on your GitHub repository. On GitHub, go to Settings > Deploy keys, click on Add deploy key, check Allow write access, paste the Flux public key and click Add key.

When Flux has write access to your repository it does the following:

  • creates the istio-system and prod namespaces
  • creates the Istio CRDs
  • installs Istio Helm Release
  • installs Flagger Helm Release
  • installs Flagger's Grafana Helm Release
  • creates the load tester deployment
  • creates the frontend deployment and canary
  • creates the backend deployment and canary
  • creates the Istio public gateway


The Flux Helm operator extends Weave Flux by automating Helm chart releases. A chart release is described through a Kubernetes custom resource named HelmRelease. The Flux daemon synchronizes these resources from git to the cluster, and the Flux Helm operator makes sure Helm charts are released as specified in the resources.

Istio Helm Release example:

apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: istio
  namespace: istio-system
spec:
  releaseName: istio
  chart:
    repository: https://storage.googleapis.com/istio-release/releases/1.1.4/charts
    name: istio
    version: 1.1.4
  values:
    sidecarInjectorWebhook:
      enabled: true
    gateways:
      enabled: true
      istio-ingressgateway:
        type: LoadBalancer
    pilot:
      enabled: true
    mixer:
      policy:
        enabled: false
      telemetry:
        enabled: true
    prometheus:
      enabled: true
      scrapeInterval: 5s

Note that the Istio CRDs have been extracted from the istio-init chart and placed inside the istio-system dir. You can update the CRDs by running the update script in the ./scripts dir.

If you make changes to the Istio configuration and push those to git, Flux will upgrade the Helm release. It can take up to 3 minutes for Flux to sync and apply the changes or you can use fluxctl sync to trigger a git sync.
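For example (again assuming Flux runs in the flux namespace):

fluxctl sync --k8s-fwd-ns flux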

2. Workloads bootstrap

When Flux syncs the Git repository with your cluster, it creates the frontend/backend deployment, HPA and a canary object. Flagger uses the canary definition to create a series of objects: Kubernetes deployments, ClusterIP services and Istio virtual services. These objects expose the application on the mesh and drive the canary analysis and promotion.

# applied by Flux
deployment/frontend  hpa/frontend  canary/frontend

# generated by Flagger
deployment/frontend-primary  hpa/frontend-primary
service/frontend  service/frontend-canary  service/frontend-primary
virtualservice/frontend
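For reference, a trimmed Canary sketch is shown below; the service port and analysis values are illustrative, the full definitions live in prod/frontend/canary.yaml and prod/backend/canary.yaml:

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: frontend
  namespace: prod
spec:
  # deployment that Flagger clones into frontend-primary
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  # HPA that is cloned for the primary workload
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: frontend
  # ClusterIP services and Istio virtual services are generated for this port
  service:
    port: 9898
  canaryAnalysis:
    interval: 1m
    threshold: 10
    maxWeight: 50
    stepWeight: 5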

Check if Flagger has successfully initialized the canaries:

kubectl -n prod get canaries

NAME       STATUS        WEIGHT   LASTTRANSITIONTIME
backend    Initialized   0        2019-04-30T18:53:18Z
frontend   Initialized   0        2019-04-30T17:50:50Z

When the frontend-primary deployment comes online, Flagger will route all traffic to the primary pods and scale the frontend deployment to zero.

3. Automate canary promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP requests success rate, requests average duration and pod health. Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack.
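These KPIs map to Flagger's built-in Prometheus checks. A sketch of the metrics section of a canary analysis (the threshold values are illustrative):

    metrics:
      # minimum percentage of successful requests
      - name: request-success-rate
        threshold: 99
        interval: 1m
      # maximum P99 request duration in milliseconds
      - name: request-duration
        threshold: 500
        interval: 30s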

A canary deployment is triggered by changes in any of the following objects:

  • Deployment PodSpec (container image, command, ports, env, etc)
  • ConfigMaps mounted as volumes or mapped to environment variables
  • Secrets mounted as volumes or mapped to environment variables

For workloads that are not receiving constant traffic, Flagger can be configured with a webhook that, when called, starts a load test for the target workload. The backend load test webhook configuration can be found at `prod/backend/canary.yaml`.
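A sketch of such a webhook, assuming the load tester runs as flagger-loadtester in the prod namespace and the backend listens on port 9898 (hey is the load generator bundled with Flagger's tester):

    webhooks:
      - name: load-test
        url: http://flagger-loadtester.prod/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://backend-canary.prod:9898/"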


Trigger a canary deployment for the backend app by updating the container image:

$ fluxctl release --workload=prod:deployment/backend \
--update-image=<BACKEND-IMAGE>:1.4.1

Submitting release ...
prod:deployment/backend  success  backend: -> 1.4.1
Commit pushed: ccb4ae7
Commit applied: ccb4ae7

Flagger detects that the deployment revision changed and starts a new rollout:

$ kubectl -n prod describe canary backend

New revision detected! Scaling up backend.prod
Starting canary analysis for backend.prod
Advance canary weight 5
Advance canary weight 10
Advance canary weight 15
Advance canary weight 20
Advance canary weight 25
Advance canary weight 30
Advance canary weight 35
Advance canary weight 40
Advance canary weight 45
Advance canary weight 50
Copying backend.prod template spec to backend-primary.prod
Promotion completed! Scaling down backend.prod

During the analysis the canary’s progress can be monitored with Grafana. You can access Grafana using port forwarding:

kubectl -n istio-system port-forward svc/flagger-grafana 3000:80

The Istio dashboard URL is http://localhost:3000/d/flagger-istio/istio-canary?refresh=10s&orgId=1&var-namespace=prod&var-primary=backend-primary&var-canary=backend


Note that if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.

Automated A/B testing

Besides weighted routing, Flagger can be configured to route traffic to the canary based on HTTP match conditions. In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users. This is particularly useful for frontend applications that require session affinity.

You can enable A/B testing by specifying the HTTP match conditions and the number of iterations:


  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed metric checks before rollback
    threshold: 10
    # total number of iterations
    iterations: 12
    # canary match condition
    match:
      - headers:
          user-agent:
            regex: "^(?!.*Chrome)(?=.*\bSafari\b).*$"
      - headers:
          cookie:
            regex: "^(.*?;)?(type=insider)(;.*)?$"
The above configuration will run an analysis for two minutes (12 iterations x 10s interval), targeting Safari users and those that have an insider cookie. The frontend configuration can be found at prod/frontend/canary.yaml. Trigger a deployment by updating the frontend container image:

$ fluxctl release --workload=prod:deployment/frontend \
--update-image=<FRONTEND-IMAGE>:<NEW-TAG>

Flagger detects that the deployment revision changed and starts the A/B testing:

$ kubectl -n istio-system logs deploy/flagger -f | jq .msg

New revision detected! Scaling up frontend.prod
Waiting for rollout to finish: 0 of 1 updated replicas are available
Advance canary iteration 1/10
Advance canary iteration 2/10
Advance canary iteration 3/10
Advance canary iteration 4/10
Advance canary iteration 5/10
Advance canary iteration 6/10
Advance canary iteration 7/10
Advance canary iteration 8/10
Advance canary iteration 9/10
Advance canary iteration 10/10
Copying frontend.prod template spec to frontend-primary.prod
Waiting for rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down frontend.prod

You can monitor all canaries with:

$ watch kubectl get canaries --all-namespaces

NAMESPACE   NAME       STATUS        WEIGHT   LASTTRANSITIONTIME
prod        frontend   Progressing   100      2019-04-30T18:15:07Z
prod        backend    Succeeded     0        2019-04-30T17:05:07Z

Automated rollback

During the canary analysis you can generate HTTP 500 errors and high latency to test Flagger's rollback.

Generate HTTP 500 errors:

watch curl -b 'type=insider' http://<INGRESS-IP>/status/500

Generate latency:

watch curl -b 'type=insider' http://<INGRESS-IP>/delay/1
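If you don't have the address at hand, the ingress IP can be read from the Istio gateway service (assuming the default istio-ingressgateway service name):

kubectl -n istio-system get svc istio-ingressgateway \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}'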

When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed.

kubectl -n prod describe canary/frontend

Status:
  Failed Checks:         2
  Phase:                 Failed
Events:
  Type     Reason  Age   From     Message
  ----     ------  ----  ----     -------
  Normal   Synced  3m    flagger  Starting canary deployment for frontend.prod
  Normal   Synced  3m    flagger  Advance canary iteration 1/10
  Normal   Synced  3m    flagger  Advance canary iteration 2/10
  Normal   Synced  3m    flagger  Advance canary iteration 3/10
  Normal   Synced  3m    flagger  Halt advancement success rate 69.17% < 99%
  Normal   Synced  2m    flagger  Halt advancement success rate 61.39% < 99%
  Warning  Synced  2m    flagger  Rolling back failed checks threshold reached 2
  Warning  Synced  1m    flagger  Canary failed! Scaling down frontend.prod

Alerting

Flagger can be configured to send Slack notifications. You can enable alerting by adding the Slack settings to Flagger's Helm Release:

apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: flagger
  namespace: istio-system
  annotations:
    flux.weave.works/automated: "false"
    flux.weave.works/tag.chart-image: semver:~0
spec:
  releaseName: flagger
  chart:
    repository: https://flagger.app
    name: flagger
    version: 0.12.0
  values:
    slack:
      url: https://hooks.slack.com/services/YOUR/HOOK/URL
      user: flagger
      channel: general

Once configured with a Slack incoming webhook, Flagger will post messages when a canary deployment has been initialized, when a new revision has been detected and if the canary analysis failed or succeeded.


A canary deployment will be rolled back if the progress deadline is exceeded or if the analysis reaches the maximum number of failed checks:


Besides Slack, you can use Alertmanager to trigger alerts when a canary deployment fails:

 - alert: canary_rollback
   expr: flagger_canary_status > 1
   for: 1m
   labels:
     severity: warning
   annotations:
     summary: "Canary failed"
     description: "Workload {{ $labels.name }} namespace {{ $labels.namespace }}"

Need help?

For the past 4 years, Weaveworks has been running Kubernetes at scale. We're happy to share our knowledge and help teams embrace the benefits of on-premise installations of upstream Kubernetes with the Weaveworks Kubernetes Subscription. Contact us for more details on our Kubernetes support packages.

Related posts

Kubernetes Deployment Strategies

Reduce Continuous Deployment Complexity with Flagger and Istio

GitOps Workflows for Istio Canary Deployments

Want to set up your own GitOps pipeline? Download A Practical Guide to GitOps