Progressive Delivery: Towards Continuous Resilience with Flagger & Weave GitOps
In progressive delivery, traffic is routed through various methods to ensure a smooth transition between application releases. In this post, learn how Flagger and Weave GitOps automate progressive delivery.
When rolling out a new version of your application, traffic routing is one of the trickiest challenges. Progressive delivery is a way to get the best of both worlds: stable releases and higher release velocity. The gist of progressive delivery is exposing a subset of users to the new version during an evaluation period while keeping everyone else on the previous version. This ensures that any issues arising from the latest version of your application have a small impact radius. In this post, we dive into how traffic is routed in progressive delivery, especially with the help of Flagger (now part of Weave GitOps), which can automate the entire process.
What is Progressive Delivery?
Progressive Delivery is an evolved form of Continuous Delivery where new versions/features are gradually rolled out to limited users to mitigate any potentially damaging effect. Developers and DevOps teams can incrementally roll out and deliver new features with fine-grained controls to a small group of users to minimize the risks of pushing new features to the production environment. If the newly released feature proves to be stable and performant with the small audience, it can then be expanded and released to all users.
Organizations that adopt progressive delivery approaches are able to mitigate risk and deliver code safely, improve their release frequency, and decrease development costs - ultimately boosting software delivery capabilities.
The Key Principles of Progressive Delivery
Release progression: Based on your needs, you can control the number of users exposed to the latest release. Release progressions can be executed in multiple ways, such as canary launches, percentage rollouts, targeted rollouts, ring deployments, and so forth.
Progressive delegation: This means gradually transferring control of a feature to the person most accountable for its results. For example, during the initial phases, developers control a feature since they are the ones building it, but during the release cycle, DevOps engineers or project managers take over the feature to supervise a successful release.
There are multiple types of progressive delivery methods used by organizations, including:
- Canary releases
- A/B testing
- Blue-green deployments
- Service meshing / ingress controllers
- Feature flags
- Chaos engineering
- Traffic shadowing
- Release analysis
To learn more about Kubernetes deployment strategies, check out the blog “Kubernetes Deployment Strategies.”
The Challenges with Progressive Delivery
Progressive delivery is a great way to mitigate failures in production from affecting the user experience. However, it comes with its own challenges such as the complexity of maintaining different versions of the same service, controlling the traffic flow between these versions, monitoring the reliability of new services, and triaging and resolving issues as they come up. One of the most complicated challenges among these is traffic routing. Typically, teams manually manage the traffic flow between services. However, this is a nightmare as it requires a lot of tweaking to achieve the right balance. Thankfully, there are solutions that make traffic management easier and more automated for progressive delivery.
The Importance of Traffic Management for Progressive Delivery
Using traffic management, you can incorporate smart routing rules for an application. These routing rules redirect traffic to multiple versions of the software, enabling progressive delivery. This way, the latest version is restricted to only a limited number of users. For example, you can route 5% of traffic to the new version of a service and, as the service proves reliable, increase that share from 5% to 10% and so on until it reaches 100%.
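As an illustration, the kind of weighted routing rule described above might look like the following Istio VirtualService sketch (the service and host names here are placeholders, not from this post):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    # 95% of requests keep going to the stable version
    - destination:
        host: my-app-primary
      weight: 95
    # 5% of requests are routed to the new version
    - destination:
        host: my-app-canary
      weight: 5
```

Increasing the rollout then becomes a matter of adjusting the two weights until the new version receives 100% of the traffic.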
A Few Techniques for Traffic Management
Traffic mirroring: All traffic is reproduced simultaneously, and a copy of the live traffic is sent to the new version. Because real users are still served only by the stable version, this method carries a low risk.
Weighted traffic splitting: A fixed percentage of traffic is routed to the new version while the rest stays on the previous one. For example, 30% of the traffic is routed to the latest version while the remaining 70% goes to the stable version.
Header-based routing: Traffic is routed based on HTTP header information, using anything from standard headers to custom headers. For example, requests carrying a unique header are sent to the latest version.
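A header-based rule can be expressed declaratively as well. This hypothetical Istio VirtualService routes requests that carry a specific header to the new version and everything else to the stable one (the header name and service names are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  # Requests carrying the x-beta-user header go to the new version
  - match:
    - headers:
        x-beta-user:
          exact: "true"
    route:
    - destination:
        host: my-app-canary
  # All other traffic stays on the stable version
  - route:
    - destination:
        host: my-app-primary
```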
How Flagger Eases Progressive Delivery
Flagger, now part of Weave GitOps, is a progressive delivery tool that helps implement complex and controlled delivery approaches like canary releases, blue-green deployments, and more. It integrates with service mesh tools to control the flow of traffic to services and is able to do this by following policies as defined in YAML config files.
You can configure Flagger to automate the delivery process for Kubernetes workloads through Canary, a custom resource that targets a DaemonSet or Deployment. You can use the Canary resource to manage application releases for scalable apps running on Kubernetes.
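A minimal Canary resource, modeled on the examples in the Flagger documentation, looks roughly like this (names, namespace, and port are placeholders):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
  namespace: prod
spec:
  # The workload whose rollout Flagger manages
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  # The Kubernetes service exposed to users
  service:
    port: 8080
  analysis:
    # Check the canary every minute
    interval: 1m
    # Roll back after 5 failed checks
    threshold: 5
    # Shift traffic toward the canary in 5% steps, up to 50%
    stepWeight: 5
    maxWeight: 50
```

From this definition, Flagger derives the primary and canary deployments and services and advances (or rolls back) the release automatically.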
Flagger enables automated application analysis, which can include metric queries against CloudWatch, Dynatrace, Prometheus, Graphite, InfluxDB, New Relic, Google Cloud Monitoring (Stackdriver), and Datadog.
Through Flagger you can progressively shift traffic to the newer application version while tracking the golden signals of HTTP/gRPC requests: success rate and latency. This minimizes the risks of rolling a new application version out to production.
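These golden-signal checks are declared under the analysis section of the Canary resource. The fragment below uses Flagger's built-in request-success-rate and request-duration checks; the threshold values are illustrative:

```yaml
  analysis:
    metrics:
    # Built-in check: percentage of successful requests must stay above 99%
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m
    # Built-in check: request duration (milliseconds) must stay below 500
    - name: request-duration
      thresholdRange:
        max: 500
      interval: 1m
```

If either check falls outside its threshold range more often than the configured failure threshold, Flagger aborts the rollout and reroutes traffic back to the stable version.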
Flexible Traffic Routing
You can use service meshes like Linkerd, OSM, Istio, AWS App Mesh, or Kuma to switch across app versions and route traffic. You can also use an Ingress controller like NGINX, Contour, Traefik, Skipper, or Gloo if a service mesh does not suit your requirements.
You can enhance your application analysis with additional metrics and webhooks for performing acceptance tests, load tests, or just about any custom validation in addition to the pre-existing metrics checks.
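Webhooks are also declared in the analysis section. This sketch, adapted from Flagger's load-testing pattern, generates traffic against the canary while it is being analyzed (the URLs and the hey command are illustrative):

```yaml
  analysis:
    webhooks:
    # Run a load test so the canary receives enough traffic to evaluate
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://my-app-canary.prod:8080/"
```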
3 Application Deployment Techniques
Weave GitOps uses the following deployment techniques to perform automated application analysis, testing, promotion, and rollback:
Canary Releases (Progressive Traffic Shifting)
With canary releasing, new features are introduced to a limited number of users, shielding the majority from any potential bugs. Because a production environment often differs from a development or staging environment, this minimizes the blast radius of the latest release and lets developers validate and rectify all changes against real traffic before they are released to all users.
You can implement canary deployments with Istio. Learn more about it in GitOps Workflows with Istio.
A/B Testing (HTTP Headers and Cookies Traffic Routing)
In this method, you expose two variants of the user interface to separate groups of users and analyze which one performs better. This is useful for determining the impact of tweaking a feature, along with anything else that may affect your conversion rate.
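Flagger drives A/B testing with HTTP match conditions instead of traffic weights: it runs a fixed number of analysis iterations while only requests matching the given headers or cookies reach the new version. This fragment follows the pattern shown in the Flagger docs (the header and cookie values are illustrative):

```yaml
  analysis:
    interval: 1m
    threshold: 5
    # Run 10 analysis iterations instead of shifting traffic by weight
    iterations: 10
    match:
    # Only users sending this header see the new version
    - headers:
        x-canary:
          exact: "insider"
    # ...or users carrying this cookie
    - headers:
        cookie:
          regex: "^(.*?;)?(canary=always)(;.*)?$"
```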
Blue/Green Deployments (Traffic Switching and Mirroring)
In this deployment technique, you release the new application version to a second production environment and then switch traffic from the current environment to the new one. The two environments are almost identical; however, at any given time, only one is active. This way you can not only validate your changes in production but also roll back to the previous version if any issue arises. Flagger uses Kubernetes L4 networking to orchestrate the Blue/Green deployment technique for apps that do not use a service mesh. For Blue/Green deployments with a service mesh, you can refer to this page from the Flagger Docs.
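For apps without a service mesh, a Blue/Green rollout can be declared by setting the Canary provider to kubernetes and using iterations instead of traffic weights; once all checks pass, traffic is switched over at L4 in one step. A sketch, with placeholder names:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
  namespace: prod
spec:
  # Use plain Kubernetes L4 networking instead of a service mesh
  provider: kubernetes
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  service:
    port: 8080
  analysis:
    interval: 1m
    threshold: 5
    # With no stepWeight set, Flagger runs a fixed number of checks
    # against the new (green) version, then switches all traffic at once
    iterations: 10
```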
Safely Automate Progressive Delivery with Flagger and Weave GitOps
Flagger, part of the Flux family of GitOps tools, is now part of Weave GitOps - the fully automated enterprise platform powered by Flux. Flux automates application deployment (CD) and oversees progressive delivery via automatic reconciliation with Flagger. By adding Flagger to the mix, you can completely automate GitOps pipelines for canary deployments. The latest product release introduced actionable dashboards to visualize and report on progressive delivery status. Teams now have immediate access to advanced delivery patterns such as canary, A/B, or blue/green deployments.
GitOps enables a declarative approach to routing traffic for progressive delivery. Suppose you want a 70-30 traffic split between versions: all you need to do is define those parameters in your Git repository, and through a pull request, Flagger will figure out how to implement them. You can then safely introduce new code to customers with progressive deployments, isolate and fix potential issues, and minimize deployment risks.
You can also configure Weave GitOps to send notifications and alerts to apps like Slack, Microsoft Teams, Discord, and Rocket.Chat. All these reasons make Weave GitOps the leading progressive delivery tool today.
We can help you put progressive delivery into practice using Weave GitOps and Flagger. Request a Demo now.