GitOps: High velocity CICD for Kubernetes
This blog post explains techniques for development teams who strive for high velocity continuous delivery using Kubernetes and Docker. When we say “high velocity” we mean that every product team can safely ship updates many times a day — deploy instantly, observe the results in real time, and use this feedback to roll forward or back. The goal is for product teams to use continuous experimentation to improve the customer experience as fast as possible.
“The world is envisioned as a repo and not as a Kubernetes installation” - Kelsey Hightower
If you’re not yet familiar with GitOps, it’s an agile software lifecycle for modern applications. Learn more by reading our blog series on the topic, starting with GitOps - Operations by Pull Request.
Use Case: high velocity CICD is game-changing
How can one go from a single manual deployment per week to more than thirty zero-effort deployments per day?
Meet Qordoba - a San Francisco based team who use machine learning to deliver optimised localization for big brands that market internationally. One of their teams is using Kubernetes-based microservices as part of the Qordoba backend. By adopting the GitOps practices and tools we recommend, they radically improved their ability to shape customers’ UX:
1) Estimated time needed to fix prod software bugs: ~60% less time after using Weave Cloud
2) Estimated time to respond to customer requests: ~43% less time after using Weave Cloud
3) Uptime from 99% to 100% (so far…)
Qordoba moved from 1 or 2 deployments per week to 30+ per day. The key here is that every deployment is essentially “zero cost”. That means: it takes very little time, it doesn’t break half-way through leaving the team unable to return the system to a good state, and all changes can be rolled back. Zero cost deployment liberates the dev team to focus on UX and business logic - which creates value - iterating as fast as they like without fear of failure costs.
Learn more: GitOps intro presentation
Some background: Weaveworks have operated production Kubernetes cloud services on AWS for over two years. Recently we documented some best practices for this, calling our approach “GitOps”. GitOps builds on proven DevOps techniques like CICD and declarative infrastructure as code to deliver a joined-up and automatable lifecycle for Kubernetes applications. We launched a ‘free tier’ GitOps capability with Google Cloud Platform at Kubecon.
Kubecon is a community event so in this presentation William and I talk about the solution pattern, the technical role of Kubernetes deployment operators, and the open source projects that make it work.
Instant Kubernetes CICD with Weave Cloud
If you liked this presentation and the idea of GitOps, then you might like to try it out. The quickest way to set it up is via our Weave Cloud offering on Google.
This gets you:
- A Kubernetes pipeline set up in minutes
- One-click app deployment (of course: manual staging if needed)
- Full observability, dashboards & alerts
Weave Cloud is easily connected to your Git repository and to any CI. This sets up a Kubernetes cluster for CICD so that you can benefit from continuous and high velocity delivery like Qordoba above. On Google Cloud your GKE cluster is easily connected to Google’s container builder and repo. But note: Weave Cloud works with any cluster, any network, any CI, any Git repo — we don’t force you down a path you may not like.
From this base, you can update and rollback applications via Git PRs or CLI or GUI. Finally we provide integrated ‘full stack’ monitoring and management to complete the operational circle. Weave’s integrated approach means that a developer can observe deployments as they happen and hence measure and react to the impact of a change. An example UI is shown below.
Open Source GitOps
Our cloud service is a convenient and satisfying way to do GitOps for Kubernetes. But what if you are a developer who wants to set up and understand your own open source pipeline? Read more below to learn about some GitOps approaches you can take.
Approach 1 - Use the Weave Flux Deployment Operator
We recommend that you use the operator pattern to listen for and orchestrate service deployments to your Kubernetes cluster. This approach is described by William Denniss in slides 15-21 of our Kubecon presentation (video, slides). Using the operator, an agent can act on behalf of the cluster to listen to events relating to custom resource changes and apply them consistently. In other words the operator performs reconciliation between Git and the cluster.
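As a rough illustration of the pattern (the repository URL, paths and image tag below are placeholders, not from the post), the agent can be deployed into the cluster as an ordinary Kubernetes Deployment that is pointed at your config repo and left to reconcile it continuously:

```yaml
# Sketch of running a Flux-style reconciliation agent in-cluster.
# The git-url, branch, path and image tag are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
  namespace: flux
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flux
  template:
    metadata:
      labels:
        app: flux
    spec:
      containers:
        - name: flux
          image: quay.io/weaveworks/flux:1.0.0   # placeholder tag
          args:
            - --git-url=git@github.com:example-org/k8s-config
            - --git-branch=master
            - --git-path=deploy   # directory of manifests to keep in sync
```

Because the agent pulls from Git and applies changes itself, nothing outside the cluster needs write access to the Kubernetes API.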
The operator is implemented by Weave Flux, an open source project that grew out of our many painful attempts to simplify our own microservices deployments to Kubernetes for our SaaS, Weave Cloud. Read more in this blog post, “The GitOps Pipeline”, which sets out the benefits of orchestrating deployment rather than hand-coding it via CI plus scripts (“CI ops”).
Flux is used in our Weave Cloud service. We encourage you to try it there or set it up yourself. We welcome external OSS contributors and companies interested in Flux. Many CI and deployment tools would benefit from a deployment operator - please get in touch.
Approach 2 - GitOps fundamentals the Kelsey Way
Kelsey Hightower has a remarkable habit of delivering great presentations that contain simple developer-first storylines and solutions. His keynote at Kubecon is no exception (video).
Kelsey’s talk is about continuous delivery using GitHub and Google Cloud and starts from the idea that “kubectl is the new SSH… if you are using kubectl to deploy from your laptop to production, you’re missing a few steps”. He then provides a demo which lays out the steps from Git through CI and staging to production and observability dashboards. He makes the steps visible and explicit, and I recommend watching the video of his talk to follow them.
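To give a concrete flavour of the CI half of such a pipeline, here is a minimal, hypothetical Google Container Builder config (the image name is a placeholder; the builder and substitution variables are the standard ones) that builds an image on each commit and pushes it to the project registry, from which a deployment step can pick it up:

```yaml
# cloudbuild.yaml — hypothetical minimal build step.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'
```

Tagging images with the commit SHA keeps a one-to-one link between what is running and what is in Git.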
Kelsey’s analysis is founded on two premises:
- Kubernetes deployment is not fully solved. He says: “How many people have end-to-end continuous delivery pipelines? … Once you get to Kubernetes, this is the next holy grail that you have to do … this is where you should be focussing your time, big time.”
- Developers want to drive changes through Git, and observe the results. He says: “Ideally if I make a code change, all I want is a URL to tell me where it’s running… You get bonus points if you can give me metrics to tell me how well it’s running… If you give people visibility, they will stop asking for tools like kubectl to do their job, because now they can actually observe what’s happening in the cluster.”
This is substantially the same argument we have made in our pitch for GitOps. The key idea is that Git is the source of truth for the desired state of everything in your stack — cluster, configuration, applications, tooling — using declarative infrastructure as code. Please read our first blog post “operations by pull request” for more details.
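“Git as the source of truth” means the repo simply holds plain declarative manifests like the one below (a generic sketch; all names and the image version are illustrative), and the tooling converges the cluster on them:

```yaml
# deploy/my-app.yaml — desired state, version-controlled in Git.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/example-project/my-app:1.2.3  # pinned, auditable version
```

With the desired state captured this way, a rollback is just a `git revert` of the commit that changed the file, and the cluster follows.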
Choose your GitOps adventure
In this post we have offered three approaches to trying out high velocity CICD.
- An out-of-the-box service that provides a default CICD pipeline from Git to GKE clusters, using Weave Cloud, and includes monitoring, observability, audit and features to support resilience and scale. This also works with other Kubernetes providers and your own CI.
- An introduction to Weave Flux, a standalone and open source deployment operator that is used in (1), and can run wherever you have a Kubernetes cluster.
- Kelsey’s demo of how you can set up your own simple GitHub-to-Google pipeline
All three of these approaches are 100% consistent with each other. Let’s compare them.
The Weave Cloud service is the most integrated in that it sets up the GitOps pipeline for you and includes substantial support for GitOps observability, which is an important matter in its own right. To borrow another quote from Kelsey’s talk, we want “to give people visibility… One way to do that is to give people a curated dashboard that shows them what’s actually happening”.
The Weave Flux open source operator is focussed only on the “CD” part of CICD. Flux makes your release workflows repeatable and manageable at scale, which is a lot easier than driving them with `kubectl` by hand or by bolting scripts onto a CI tool. Flux is a coordination and notification agent that “speaks native Kubernetes”, so that artefact changes such as code, config files, containers and tags can be synced with cluster changes for any Kubernetes objects. You also get deployment policies like “rollback”, “automatic” and “manual” (with more to come, e.g. blue/green and canary deployments).
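These per-workload policies can be expressed as annotations on the manifest itself. The sketch below uses Flux’s annotation names as we understand them at the time of writing (the container name and version range are illustrative; check the Flux documentation for the current syntax):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    flux.weave.works/automated: "true"        # release new images automatically
    flux.weave.works/tag.my-app: semver:~1.0  # but only tags matching ~1.0
```

Flipping `automated` to `"false"` switches the workload back to manual releases, and that change is itself a Git commit.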
Kelsey’s approach is essentially a further decomposition into smaller parts. He provides an integration example from Git to CI to GKE, but does not provide a programmatic agent like the Flux operator. It would be perfectly reasonable to combine Kelsey’s approach with Flux. Indeed our Kubecon presentation shows such an integration on slide 21.
Note also that Kelsey recommends using a curated dashboard (Grafana in his demo). We agree that this is a good idea and supply such tooling in our cloud service, along with Slack integration.
What about Secrets?
This is the most frequently asked question about GitOps so far.
The good news is that our friends at Bitnami created Sealed Secrets, an open source project which specifically addresses the GitOps workflow. Sealed Secrets is a Kubernetes Custom Resource Definition controller which allows you to store even sensitive information (aka secrets) in Git, which previously was not an option. See also Building Serverless Application Pipelines, presented by Bitnami’s Sebastien Goasguen at Kubecon.
In addition, you can use Weave Cloud’s Deploy feature in conjunction with Sealed Secrets to create a continuous deployment pipeline where all operations are Git-based and where the desired state of your apps, including your secrets, is declared in your Git repos.
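The shape of a sealed secret looks roughly like this (the ciphertext below is a placeholder, and the name and namespace are illustrative). Only the controller running in the cluster holds the private key to decrypt it, which is what makes the file safe to commit:

```yaml
# Encrypted with kubeseal against the cluster's public key; safe to store in Git.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: default
spec:
  encryptedData:
    password: AgBy8hC...   # placeholder ciphertext, not a real value
```

The controller unseals this into a regular Kubernetes Secret in the same namespace, so applications consume it exactly as they would any other Secret.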
What about Helm?
We are working on a way to treat Helm charts as first-class objects in a GitOps workflow. This is a very exciting idea: it combines the benefits of Helm, namely ‘chart’ encapsulation and templating, with the benefits of GitOps: a version-controlled deployment history that helps recreate the cluster when it goes down, and automated cluster-state updates whenever charts and their customizations change or container images roll out. And more :) [UPDATE: Read our blog "Managing Helm Releases the GitOps Way"]
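The custom-resource shape this work ended up with looks roughly like the sketch below (see the linked blog for the real definition; the chart repository, names and values here are illustrative). A chart release becomes just another declarative object in Git:

```yaml
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: my-app
  namespace: default
spec:
  releaseName: my-app
  chart:
    repository: https://charts.example.com/   # placeholder chart repo
    name: my-app
    version: 1.2.3
  values:
    replicaCount: 3   # chart customizations live in Git too
```

Bumping `version` or editing `values` in a PR then drives a Helm upgrade through the same reconciliation loop as any other manifest change.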
How can I get involved?
If any of the above is exciting to you, please get in touch. There is lots you can help with. Other areas of work include advanced policies such as blue/green and canaries. Tools like Istio encourage control of routing configuration at runtime, e.g. updating a percentage weighting on traffic flow. How can GitOps do this? Traditional infrastructure as code suggests deployment artefacts are immutable, yet canaries represent mutable configuration. Separately, in the future we hope that many software packages will be offered as Kubernetes add-ons, such as Istio, OpenFaaS and Kubeflow. How can we make these GitOps-aware out of the box?
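To make the canary question concrete: in Istio’s routing API the traffic split lives in a resource like the sketch below (hosts, subsets and weights are illustrative). A GitOps answer would be to version this weighting in Git and change it by commit, rather than mutating it at runtime:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    - route:
        - destination:
            host: my-app
            subset: stable
          weight: 95
        - destination:
            host: my-app
            subset: canary
          weight: 5
```

Each step of a canary rollout is then a small, reviewable, revertible diff to the `weight` fields.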
Read our latest whitepaper, "Making the Leap from Continuous Integration to Continuous Delivery" which details the hurdles that DevOps teams must clear in order to move from CI to CD and the best practices for making the difficult leap. It is designed as a resource for DevOps practitioners who want to take full advantage of the efficiencies and operational advantages that CD enables, yet struggle to overcome conceptual, cultural and technological challenges.