Kubernetes Deployment Strategies

By Weaveworks
May 02, 2019

Kubernetes deployment strategies replace pods of previous versions of your application with pods of the new version without cluster downtime.


What is a Kubernetes Deployment Strategy?

Kubernetes deployment strategies are the ways in which pods are updated, rolled back, or created as new versions of an application. They work by replacing pods of the previous version of your application with pods of the new version. The main benefit of these strategies is that they mitigate the risk of service disruptions and downtime.

One of the biggest challenges in developing cloud-native applications today is increasing the frequency of your deployments. With a microservices approach, developers design applications to be completely modular, which allows multiple teams to write and deploy changes to an application simultaneously.

Deployment strategies with shorter and more frequent deployments offer the following benefits:

  • Reduced time-to-market.
  • Customers can take advantage of features faster.
  • Customer feedback flows back into the product team faster, which means the team can iterate on features and fix problems faster.
  • Higher developer morale with more features in production.

But with more frequent releases, the chances of negatively affecting application reliability or customer experience can also increase. This is why it’s essential for operations and DevOps teams to develop processes and manage deployment strategies that minimize risk to the product and customers. Learn more about CI/CD pipeline automation and deployments for Kubernetes.

In this post, we’ll discuss Kubernetes deployment strategies, including rolling deployments and more advanced methods like canary and its variants.

Deployment Strategies

There are several different types of deployment strategies you can take advantage of, depending on your goal. For example, you may need to roll out changes to a specific environment for more testing, or to a subset of users or customers, or you may want to do some user testing before making a feature 'Generally Available'.

Rolling Strategy Deployment

The rolling deployment is the default deployment strategy in Kubernetes. It works by slowly replacing pods of the previous version of your application, one by one, with pods of the new version, without any cluster downtime.

[Diagram: rolling deployment]

A rolling update waits for new pods to become ready via your readiness probe before it starts scaling down the old ones. If there is a problem, the rolling update or deployment can be aborted without bringing the whole cluster down. In the YAML definition file for this type of deployment, a new image replaces the old image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: awesomeapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: awesomeapp
  template:
    metadata:
      labels:
        app: awesomeapp
    spec:
      containers:
      - name: awesomeapp
        image: imagerepo-user/awesomeapp:new
        ports:
        - containerPort: 8080

Rolling updates can be further refined by adjusting the parameters in the manifest file:

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # how many pods can be created above the desired replica count
      maxUnavailable: 25%  # how many pods can be unavailable during the update
  template:
    ...
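Assuming a Deployment named awesomeapp as in the manifest above, the progress of a rolling update can be watched and, if there is a problem, aborted with kubectl's standard rollout subcommands:

```shell
# Watch the rollout until it completes (or fails its progress deadline)
kubectl rollout status deployment/awesomeapp

# Abort a problematic rollout by reverting to the previous revision
kubectl rollout undo deployment/awesomeapp

# Inspect the revision history of the Deployment
kubectl rollout history deployment/awesomeapp
```

These commands require access to a running cluster and operate on whatever Deployment name you pass them.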

Recreate

In this very simple type of deployment, all of the old pods are killed at once and replaced, all at once, with the new ones.

[Diagram: recreate deployment]

This manifest looks something like this:

spec:
  replicas: 3
  strategy:
    type: Recreate
  template:
    ...


Blue/Green (or Red/Black) Deployments

In a blue/green deployment strategy (sometimes referred to as red/black), the old version of the application (blue) and the new version (green) are deployed at the same time. While both are deployed, users only have access to the blue version, whereas the green version is available to your QA team for test automation, either on a separate service or via direct port-forwarding.

[Diagram: blue/green deployment]

apiVersion: apps/v1
kind: Deployment
metadata:
  name: awesomeapp-02
spec:
  selector:
    matchLabels:
      app: awesomeapp
      version: "02"
  template:
    metadata:
      labels:
        app: awesomeapp
        version: "02"

After the new version has been tested and is signed off for release, the service is switched to the green version with the old blue version being scaled down:

apiVersion: v1
kind: Service
metadata:
  name: awesomeapp
spec:
  selector:
    app: awesomeapp
    version: "02"
...
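The switch itself can be done declaratively by applying the updated Service manifest, or imperatively with kubectl patch. Sketching it with the names from the example above (awesomeapp-01 as the hypothetical name of the old blue Deployment):

```shell
# Point the awesomeapp Service at the green (version "02") pods
kubectl patch service awesomeapp \
  -p '{"spec":{"selector":{"app":"awesomeapp","version":"02"}}}'

# Once the green version is verified, scale down the old blue Deployment
kubectl scale deployment awesomeapp-01 --replicas=0
```

Because the Service selector is the only thing that changes, the cutover is near-instant, and rolling back is just patching the selector back to the old version.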


Canary

Canary deployments are a bit like blue/green deployments, but they are more controlled and use a more 'progressive delivery' phased approach. A number of strategies fall under the umbrella of canary, including dark launches and A/B testing.

A canary deployment is used when you want to test new functionality, typically on the backend of your application. Traditionally you may have had two almost identical servers: one that serves all users, and another with the new features that is rolled out to a subset of users so the two can be compared. When no errors are reported, the new version can gradually roll out to the rest of the infrastructure.

While this deployment strategy can be implemented using only Kubernetes resources by replacing old pods with new ones, it is much more convenient and easier to implement with a service mesh like Istio.

As an example, you could have two different manifests checked into Git: the GA version, tagged 0.1.0, and the canary, tagged 0.2.0. By altering the weights in the Istio virtual service manifest, the percentage of traffic for each of these deployments is managed.
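As a sketch, that traffic split is expressed in an Istio VirtualService. The host and subset names below are illustrative, and the subsets would be defined in a matching DestinationRule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: awesomeapp
spec:
  hosts:
  - awesomeapp
  http:
  - route:
    - destination:
        host: awesomeapp
        subset: v1        # GA version, tagged 0.1.0
      weight: 90
    - destination:
        host: awesomeapp
        subset: v2        # canary, tagged 0.2.0
      weight: 10
```

Shifting more traffic to the canary is then just a matter of editing the two weights (they must sum to 100) and re-applying the manifest.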

[Diagram: canary deployment]

For a step-by-step tutorial on implementing canary deployments with Istio, see GitOps Workflows with Istio.

Canary deployments with Flagger

A simple and effective way to manage a canary deployment is by using Flagger.

With Flagger, the promotion of canary deployments is automated. It uses Istio or App Mesh to route and shift traffic, and Prometheus metrics for canary analysis. Canary analysis can also be extended with webhooks for running acceptance tests, load tests or any other type of custom validation.

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), and creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio or App Mesh virtual services) to drive the canary analysis and promotion.
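A minimal Flagger Canary resource might look like the sketch below, assuming the awesomeapp Deployment from earlier and an HPA of the same name. The field values are illustrative; consult the Flagger docs for the full schema:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: awesomeapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: awesomeapp
  autoscalerRef:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    name: awesomeapp
  service:
    port: 8080
  analysis:
    interval: 1m          # how often to run the canary analysis
    threshold: 5          # failed checks before rolling back
    maxWeight: 50         # maximum traffic percentage routed to the canary
    stepWeight: 10        # traffic increase per analysis interval
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99           # abort if success rate drops below 99%
      interval: 1m
```

From this single resource, Flagger generates the primary and canary deployments, services, and mesh routing objects, and runs the analysis loop described below.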

[Diagram: Flagger canary with HPA]

By implementing a control loop, Flagger gradually shifts traffic to the canary while measuring key performance indicators like the HTTP request success rate, average request duration, and pod health. Based on the analysis of these KPIs, a canary is either promoted or aborted, and the result of the analysis is published to Slack. For an overview and demo, see Progressive Delivery for App Mesh.

[Diagram: Flagger canary overview]

Dark deployments or A/B Deployments

A dark deployment is another variation on the canary (and one that, incidentally, can also be handled by Flagger). The difference between a dark deployment and a canary is that dark deployments deal with features in the front end, rather than the backend as is the case with canaries.

Another name for a dark deployment is A/B testing. Rather than launching a new feature for all users, you can release it to a small set of users. The users are typically unaware they are being used as testers for the new feature, hence the term “dark” deployment.

With the use of feature toggles and other tools, you can monitor how users interact with the new feature, whether it is converting them, whether they find the new UI confusing, and other types of metrics.

[Diagram: Flagger A/B testing steps]

Flagger and A/B deployments

Besides weighted routing, Flagger can also route traffic to the canary based on HTTP match conditions. In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users. This is particularly useful for front-end applications that require session affinity. Find out more in the Flagger docs.
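As a sketch of what that looks like, the Canary resource's analysis section can use iterations and match conditions instead of weight steps. The cookie regex and header value below are illustrative, adapted from the pattern shown in the Flagger docs:

```yaml
  analysis:
    interval: 1m
    threshold: 5
    iterations: 10        # fixed number of analysis runs instead of weight steps
    match:
    - headers:
        cookie:
          regex: "^(.*?;)?(canary=always)(;.*)?$"
    - headers:
        x-canary:
          exact: "insider"
```

Only requests carrying the matching cookie or header are routed to the canary, which is what gives A/B tests their session affinity.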

Thanks to Stefan Prodan, Weaveworks Engineer (and creator of Flagger) for all of these awesome deployment drawings.

Need help with Kubernetes Deployment Strategies?

If you’re looking for a safe and reliable way to implement, manage, and monitor Kubernetes deployments, Weave GitOps can help. Weave GitOps is powered by Flux and Flagger, which enable you to simplify automated deployments and implement successful progressive delivery. Contact us for more details.


Related posts

Progressive Delivery for AWS App Mesh

Reduce Continuous Deployment Complexity with Flagger and Istio

GitOps Workflows for Istio Canary Deployments

Whitepaper: Progressive Delivery with GitOps

A handy pocket guide covering the benefits of Progressive Delivery, how it works and how you can get started today.

Download Whitepaper