Continuous Delivery the Hard Way
Learn why you need Continuous Delivery and how we evolved Weave Cloud to achieve it with the tools you already have, like Jenkins or Circle CI, and with any container registry, like Docker Hub, Quay, or even a private registry.
On August 17th, Luke Marsden gave a talk and demonstration of Continuous Delivery (CD) with Weave Cloud. He discussed why you need Continuous Delivery and how we evolved Weave Cloud to achieve it with the tools you already have, such as Jenkins or Circle CI, and with any container registry, whether Docker Hub, Quay, or a private one.
But before we get into the details of how CD works in Weave Cloud, let’s back up and discuss why you need to do continuous delivery in the first place.
Why do Continuous Delivery?
Systems are changing from monoliths to microservices. Microservices decompose your architecture into parts where each service performs a single function. This follows a principle of the UNIX philosophy: software components should be designed to do one thing well.
Implementing a microservices-based architecture enables large teams of developers to work independently and to deliver features asynchronously. But even with services split up into simpler components, delivering them to a cloud environment like Kubernetes can be error prone and can slow teams down. Automating microservice deployment to the cloud reduces that complexity and allows development teams to deliver features faster.
Conway’s Law & Your Organization
Conway’s law states that the structure of your software often mirrors the structure of your organization. DevOps and microservices are as much an organizational change as they are a technological one.
According to Luke,
“Once you've made the leap and focused your organization around microservices, the next step is to go from deploying or releasing a change manually through a CI system to turning on continuous delivery so that every time a change is made to your master branch in Git, it is automatically pushed into production.”
Continuous Delivery Components
Everyone who embarks on the cloud native journey will have these basic pieces in their development pipeline:
- Version-controlled code -- the source code repository where changes and updates are pushed
- CI system -- runs the integration tests and builds the Docker image
- Docker Registry -- the image registry that stores your Docker images
- Kubernetes Cluster (or some other orchestrator) -- where you will deploy and run the application
Getting those pieces to work together is the key to continuous delivery.
Manually Deploying an Updated App to Kubernetes
To manually update a Kubernetes cluster with a new feature, a developer pushes new code to Git, where it runs through CI and a Docker image is built. The developer then manually updates the YAML file and runs `kubectl apply -f service.yaml` from the command line to deploy the newly built image to the running cluster.
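The manual flow can be sketched as below. This is a minimal, self-contained illustration: the image name, tags, and pared-down manifest are hypothetical, not from the talk, and the cluster-facing command is shown commented out since it needs a live cluster.

```shell
# Illustrative image name and tag; in CI the tag is usually the short Git SHA.
IMAGE="quay.io/example/podinfo"
NEW_TAG="abc1234"

# A pared-down Deployment manifest (normally already checked out of the repo):
cat > service.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  template:
    spec:
      containers:
      - name: podinfo
        image: ${IMAGE}:old123
EOF

# The step the developer performs by hand: edit the image tag in the YAML...
sed -i "s|image: ${IMAGE}:.*|image: ${IMAGE}:${NEW_TAG}|" service.yaml

# ...then apply it to the running cluster (requires a live cluster):
# kubectl apply -f service.yaml

grep "image:" service.yaml
```

Every release repeats this hand-edit-and-apply cycle, which is exactly what makes it slow and easy to get wrong.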
Deploying something manually the first time is simple, but the process is slow and error-prone. Once you need to ship further updates, and especially if you are working on a team, you will eventually need to automate deployments.
To get to the continuous delivery offered in Weave Cloud, several iterations of the architecture were explored.
Version 1 - Git Push → kubectl set image
In Version 1 of this architecture, the CI system and the Docker Registry were used to push an updated Docker image to Kubernetes via the `kubectl set image` command.
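The deploy step in such a CI job might look like the sketch below. The deployment, container, and registry names are hypothetical; since the command needs a live cluster, the sketch only constructs and prints it.

```shell
# Derive the image tag from the commit SHA; in a real CI job this would be
# `git rev-parse --short HEAD`.
TAG=$(echo "abc1234def5678" | cut -c1-7)

# `kubectl set image` updates a Deployment's container image in place,
# triggering a rolling update.
DEPLOY_CMD="kubectl set image deployment/podinfo podinfo=quay.io/example/podinfo:${TAG}"

# CI would execute this against the cluster; here we only print it:
echo "${DEPLOY_CMD}"
```

Note that the running cluster is now the only record of which image is deployed; nothing in Git reflects the change.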
To roll back a change in this system, you do the same as a deploy, but in reverse: revert the code commit on master and push it to Git, which triggers the CI system again.
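The revert step can be demonstrated with a throwaway local repo (repo name, file, and commit messages are illustrative; the push that would re-trigger CI is commented out):

```shell
# Set up a throwaway repo with a good commit followed by a bad one.
git init -q rollback-demo
cd rollback-demo
git config user.email "ci@example.com"
git config user.name "CI"

echo "feature A" > app.txt
git add app.txt && git commit -qm "add feature A"
echo "feature B" > app.txt
git add app.txt && git commit -qm "add feature B (broken)"

# Revert the broken commit; Git creates a new commit that undoes it:
git revert --no-edit HEAD

# git push origin master   # would re-trigger CI, rebuilding the old image

cat app.txt
```

This is why rollback in V1 is slow: the revert forces a full rebuild and push of an image that already exists in the registry.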
As you can see, this method can be problematic: you can end up with a lot of merge commits and other Git complications, especially if more than one developer is working on the source.
Other downsides include:
- Building & pushing containers is slow (disk I/O, network), and you shouldn’t need to do this when rolling back a change
- A branch per environment and per microservice results in an explosion of branches that is hard to manage and scale
- Easy to end up with a Git merge mess
Version 2 - Git as the Source of Truth
In the next iteration of the continuous delivery architecture, the Kubernetes manifests (the YAML files) are separated from the code and kept together, version-controlled, in their own Git repository.
An advantage of version-controlling your configuration is that if your cluster is accidentally deleted, it can easily be restored by checking the manifests out of Git and redeploying them. You can read more about Ops with Git in “GitOps Operation by Pull Request”.
In the V2 architecture, code is pushed to Git and built by the CI system, which pushes the resulting image to the Docker Registry. The CI system then checks the associated YAML file out of the config repo, updates it with the new image tag, and checks it back in before deploying the newly built image to the cluster.
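A sketch of that CI step is below, assuming hypothetical repo, file, and image names. A local directory stands in for the cloned config repo, and the push/apply commands are commented out because they need real remotes and a cluster.

```shell
NEW_TAG="abc1234"

# Stand-in for `git clone <config-repo-url>`:
mkdir config-repo && cd config-repo
git init -q
git config user.email "ci@example.com"
git config user.name "CI"

# Minimal stand-in for the checked-in Deployment manifest:
cat > podinfo-deploy.yaml <<EOF
        image: quay.io/example/podinfo:old123
EOF
git add . && git commit -qm "initial manifest"

# The scripted YAML edit: exactly the finicky step listed in the downsides.
sed -i "s|image: quay.io/example/podinfo:.*|image: quay.io/example/podinfo:${NEW_TAG}|" podinfo-deploy.yaml
git add podinfo-deploy.yaml && git commit -qm "release podinfo:${NEW_TAG}"

# git push && kubectl apply -f podinfo-deploy.yaml   # CI runs these for real
```

When several builds run this script concurrently against the same config repo, the commit-and-push step is where they race each other.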
The downsides of this architecture:
- The CI system is responsible for too many things and can be overloaded
- The CI system is only triggered by pushing code, and we need to be able to roll back without pushing code
- If you rollback out of band (directly with kubectl), you have to remember to update the central configuration repo as well
- Parallel builds can tread on each other's toes, resulting in race conditions between Git checkouts and pushes
- Scripting updates of YAMLs is finicky and it’s easy to mangle your YAMLs
Version 3 - Release Management & Automation
In the third version of the continuous delivery architecture, we’ve added a Release Manager to the GitOps pipeline. The Release Manager updates the manifests and applies them to the cluster, which allows Kubernetes to pull and deploy the correct image. It also handles specific tasks in the CD pipeline, like managing a feature rollback, and so relieves the CI system of work it isn’t designed for.
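The core idea can be sketched as a single "release" operation that reconciles the manifest with a desired image tag. This is a conceptual sketch, not Weave Cloud's actual implementation, and all names in it are illustrative.

```shell
# release: point a manifest at a given image tag. A rollback is the same
# operation with an older tag, so no image rebuild is needed.
release() {
  local manifest="$1" image="$2" tag="$3"
  sed -i "s|image: ${image}:.*|image: ${image}:${tag}|" "${manifest}"
  # A real release manager would also commit this change back to the config
  # repo and run `kubectl apply -f "${manifest}"` against the cluster.
}

# Minimal stand-in manifest currently running v2:
cat > deploy.yaml <<'EOF'
        image: quay.io/example/podinfo:v2
EOF

release deploy.yaml "quay.io/example/podinfo" "v1"   # roll back to v1
grep "image:" deploy.yaml
```

Because deploy and rollback are the same fast, Git-recorded operation, the CI system can go back to just testing code and building images.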
In this post we discussed Luke Marsden’s talk on “Continuous Delivery the Hard Way”, which describes how Weave Cloud achieves continuous delivery. It uses a Release Manager that handles the heavy lifting of deploying and rolling back features in your applications running in Kubernetes.
Check out the Weave Cloud documentation for information on Release Management.
To see the talk in its entirety, watch the video here:
For further reading we recommend: