Progressively Delivering Applications Across Cloud and On-Premise
For many teams, moving legacy services to a modern cloud-native stack is daunting and sometimes not even feasible due to regulatory requirements. For this true hybrid scenario, we explain how to move legacy apps to Kubernetes progressively, in a secure and controlled manner.
For many development teams, progressive application delivery methods like canary releasing are essential to building greater velocity, resilience, and efficiency. A look at the DORA report solidifies this idea: elite teams deploy 208 times more frequently than low-performing teams, and their change failure rate is 7 times lower. Progressive application delivery is key to achieving this level of velocity and reliability.
However, progressive application delivery can be complex to implement. Thanks to modern distributed application management solutions like service meshes and advanced DevOps practices, progressive application delivery can now be as simple as editing a Git repository. In this post, we recap a conference talk in which Paul Curtis of Weaveworks demonstrates how to implement canary releasing using Kuma, the Weave Kubernetes Platform, and GitOps.
What is progressive delivery?
Typically, software delivery involves replacing an old version of an application with a new version. Progressive delivery, on the other hand, runs two copies of the same application simultaneously: the old version and a new version containing a change. Rather than replacing the old version outright, progressive delivery gradually shifts traffic from the old version to the new one. You start by routing a small percentage of traffic (around 5%) to the new version, and gradually increase that percentage as the new version is confirmed to be stable and free of issues. (Here is an intro to service meshes on Kubernetes and progressive delivery.) Today many teams use progressive delivery both for deploying a new version of an application and for moving from a legacy system to a containerized one.
Canary releasing is a popular implementation of progressive delivery. The concept has its origins in coal mining: miners were susceptible to harmful gases in the mines, so they took canaries with them. Because the birds are more sensitive to these gases than humans, a distressed canary warned the miners that conditions were unsafe. In progressive delivery, the new version of an app is the canary.
What is Kuma mesh?
Kuma is a new service mesh implementation that was created by Kong, the team behind the popular Kong API management solution. Kuma was recently adopted by the CNCF as a sandbox project, and it is now the only CNCF service mesh built on top of Envoy.
When it comes to progressive application delivery, the Kong gateway makes it possible to split traffic across multiple versions of an application in a distributed system. Building on this, Kuma acts as the control plane that implements the traffic splitting.
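For illustration, a weighted split in Kuma is expressed as a TrafficRoute policy. The sketch below assumes a hypothetical `demo-app` service whose data plane proxies are tagged `version: v1` and `version: v2`; the service names are placeholders and the exact fields depend on the Kuma version, so treat this as the shape of the policy rather than a drop-in manifest:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: canary-split
spec:
  sources:
    - match:
        kuma.io/service: '*'          # traffic from any service in the mesh
  destinations:
    - match:
        kuma.io/service: demo-app_default_svc_80   # placeholder service name
  conf:
    split:
      - weight: 90                    # 90% of requests stay on the old version
        destination:
          kuma.io/service: demo-app_default_svc_80
          version: v1
      - weight: 10                    # 10% goes to the canary
        destination:
          kuma.io/service: demo-app_default_svc_80
          version: v2
```

Because the policy is plain YAML, it can live in the same Git repository as the rest of the cluster configuration.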
Why GitOps for progressive delivery?
GitOps is operations by pull request. It enables you to declare and version the entire system in Git repositories, and the declared configuration is automatically applied to the system. When a difference arises between the live cluster and its Git repository, a GitOps tool uses a software agent to alert on this 'diff' and reconcile the cluster back to the desired state declared in the repository.
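As a concrete sketch, with Flux (the GitOps agent from Weaveworks) the link between a repository and the cluster it defines is itself declared in YAML. The repository URL and path below are placeholders:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 1m                 # how often to poll the repo for changes
  url: https://github.com/example/app-config   # placeholder repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: production
  namespace: flux-system
spec:
  interval: 5m                 # how often to re-check the cluster for drift
  sourceRef:
    kind: GitRepository
    name: app-config
  path: ./clusters/production  # placeholder path within the repo
  prune: true                  # delete resources that are removed from Git
```

The agent continuously compares the manifests at that path with what is actually running, and reconciles any drift back to the declared state.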
When it comes to progressive application delivery, GitOps allows you to:
1. See how traffic changes from the old to the new version
2. Automate the traffic split between versions
3. Split traffic across versions running on completely different infrastructure - e.g. between cloud and on-premise
Using Kuma & GitOps to implement canary releasing
The following illustration shows how the process is implemented using Kuma and GitOps.
There are two Kubernetes clusters, each defined by a corresponding Git repository (a forked repo is used for ongoing development). As changes are made to the repositories, they are 'pulled' to the production clusters. The repositories also include the configuration for the Kong gateway, the Kuma control plane, and the traffic-splitting policies.
To change the traffic split, all you need to do is edit the traffic policy in the git repo to change the 'weight' of traffic across the two versions. This is shown in the screenshot below:
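For example, promoting a hypothetical canary from 10% of traffic to half of it is just a change to the weights in the policy (placeholder service names; exact fields depend on the Kuma version):

```yaml
  conf:
    split:
      - weight: 50             # was 90: old version now receives half the traffic
        destination:
          kuma.io/service: demo-app_default_svc_80
          version: v1
      - weight: 50             # was 10: canary promoted to half the traffic
        destination:
          kuma.io/service: demo-app_default_svc_80
          version: v2
```

Committing this change to the repository is all that is required; the GitOps machinery applies it to the cluster.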
Once the repository is edited, all it takes is a simple pull request (which can be automated in GitOps) to roll out the new traffic split. The repository can be edited repeatedly to give the new version more weight until 100% of traffic is directed to it.
Benefits of progressive delivery using GitOps
There are many benefits to delivering applications progressively using GitOps principles but here are the top three for this use case:
- Auditability: GitOps records an automatic audit trail. This means you can view what changed at any given time, who made the change, and more.
- Stability: Rollbacks are as easy as editing the Git repository. If a deployment has issues, you can easily revert the traffic split change within the Git repository itself.
- Flexibility: Finally, GitOps in combination with a service mesh like Kuma enables you to implement progressive application delivery across complex backend infrastructure. For example, you can split traffic between containers and VMs, or have one version run on EKS and another version on EKS-D.
As cloud-native applications become more complex, so do their deployment practices. However, progressive application delivery need not be complex. By leveraging modern solutions like Kuma and GitOps principles, development teams can instill safety and stability with well-known tools while meeting growing demands for speed and innovation.
Watch the full presentation: