Weaveworks Developer Experience engineer Stefan Prodan (@stefanprodan) recently posted a tutorial on the Google Cloud Platform blog describing how to use Flagger and Istio to automate canary-style deployments to Kubernetes.

To accelerate software development, automated Continuous Delivery is the end goal for many organizations making the transition to the cloud and running applications on Kubernetes. But this goal can be difficult to achieve because of deployment complexity and other sources of friction, such as maintaining system availability and reliability.

Continuous delivery challenges

For example, you may be running multiple clusters across several different branches and repositories, which can make managing continuous deployments cumbersome and complex. Another challenge is the cultural change that needs to take place within your development team. Teams may need to rethink how they work together so that deployment pipelines are as automated as possible. This means QA and test automation teams need to work even more closely with developers to determine what can be automated and what still requires manual testing. At the same time, cluster operators and DevOps engineers need systems in place that inform and alert developers about the cluster's availability and other states.

For an in-depth discussion of the cultural challenges of implementing Continuous Delivery, download the eBook, Building Continuous Delivery Pipelines.

How does Flagger help reduce continuous delivery friction?

Flagger is an open source Kubernetes operator that aims to untangle this complexity. It automates the promotion of canary deployments by taking advantage of Istio's traffic shifting and Prometheus metrics to analyze and provide feedback on an application's behaviour during a controlled rollout.
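As a rough sketch of what this looks like in practice, Flagger is driven by a Canary custom resource that names the Deployment to control and the analysis to run during the rollout. The field names below follow Flagger's v1beta1 API; the `podinfo` app, namespace, port, and thresholds are placeholder values, not part of the tutorial itself:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # The workload Flagger takes over during the canary rollout
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    # Check the metrics every minute; abort after 5 failed checks
    interval: 1m
    threshold: 5
    # Shift traffic to the canary in 10% steps, up to 50%
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
```

At each step Flagger adjusts the Istio VirtualService weights, queries Prometheus for the declared metrics, and either promotes the canary or rolls it back when the checks fail.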

Try out this tutorial and learn how to:

  1. Create a GKE cluster on Google Cloud Platform.
  2. Set up an Istio ingress gateway.
  3. Install Flagger.
  4. Deploy a web application with Flagger.
  5. Automate your canary promotions and deployments.
  6. Analyze the deployment in Grafana.
  7. Perform a rollback.

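To give a feel for the first of these steps, the setup can be sketched with a few commands. This assumes `gcloud` and `helm` are installed and Istio with Prometheus is already running in the cluster; the cluster name, zone, and node count are illustrative placeholders, not values from the tutorial:

```shell
# Step 1: create a GKE cluster (name and zone are placeholders)
gcloud container clusters create istio-canary \
  --zone us-central1-a --num-nodes 2

# Step 3: install Flagger into the Istio system namespace via Helm,
# pointing it at Istio as the mesh provider and at Prometheus for metrics
helm repo add flagger https://flagger.app
helm upgrade -i flagger flagger/flagger \
  --namespace istio-system \
  --set meshProvider=istio \
  --set metricsServer=http://prometheus.istio-system:9090
```

The full tutorial walks through each step in detail, including the Istio gateway and Grafana analysis.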
Final Thoughts

By running a service mesh like Istio on top of Kubernetes, you get automatic metrics, logs and traces; however, the deployment of workloads still relies on external tooling. Flagger aims to change that by extending Istio with progressive delivery capabilities.

Flagger is compatible with any CI/CD solution made for Kubernetes. Canary analysis can be easily extended with webhooks for running system integration/acceptance tests, load tests, or any other custom validation. Since Flagger is declarative and reacts to Kubernetes events, it can be used in GitOps pipelines together with Weave Flux.
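For instance, the analysis section of a Canary resource can declare a webhook that generates load against the canary while its metrics are being evaluated. The snippet below is a sketch based on Flagger's load-tester add-on; the `flagger-loadtester` service, the `hey` command, and the target URL are illustrative assumptions:

```yaml
analysis:
  webhooks:
    - name: load-test
      # Flagger calls this endpoint at each analysis interval
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        # Drive traffic at the canary so there are metrics to judge
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```

The same hook mechanism can trigger acceptance tests or any custom validation script; a non-zero result fails the check and counts toward the rollback threshold.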

Flagger is sponsored by Weaveworks and powers the canary deployments in Weave Cloud. The project is being tested on GKE, EKS and bare metal clusters provisioned with kubeadm.

If you have any suggestions for improving Flagger, please submit an issue or PR on GitHub at stefanprodan/flagger. Contributions are more than welcome!