In this blog, I’ll make the case that a CICD pipeline implemented using the GitOps methodology is a more secure way to automate deployment.

Consider the following questions:

  • Do you have direct access to the container image repository?
  • Do you have direct access to the production cluster?
  • How do you know what’s actually running in your cluster?
  • Can you tell when expected vs actual state diverges?
  • Would you have to re-run every CICD pipeline to recover a cluster after a disaster?

What’s a CICD pipeline?

A quick Google image search for “CICD pipeline” turns up a vast number of colourful and sometimes bewildering examples.

A CICD pipeline is the combination of Continuous Integration (CI) and Continuous Deployment or Continuous Delivery (CD), with automation such that a commit to a source code repository triggers the build, test and packaging of an application that’s then deployed to a cluster.

CI is well understood to be a best practice, so I won’t discuss it further. CD has been around as a concept for quite a long time, but only recently has it become common practice.

Most Continuous Integration (CI) systems now ship a deployment plugin or configuration for a container orchestrator like Kubernetes. This makes it easy to connect the CI system’s output to the application’s target environment.

The simplification below starts with code on the developer machine (Dev) being pushed to a code repository (e.g. git), where it’s picked up by a CI system, which runs some tests and then builds an artifact (a container image) that is pushed to the image repository and then deployed to the orchestrator (Kubernetes).
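The flow above can be sketched as a toy model (hypothetical names, not real infrastructure code). The point of interest is the set of credentials the pipeline accumulates along the way:

```python
# Schematic sketch of a push-based CICD pipeline. Each step records which
# credential the pipeline must hold to perform it.

def ci_pipeline(commit):
    """Simulate the classic push-based flow: build, push, deploy."""
    credentials_held = set()

    # 1. Check out source (needs read access to the code repo).
    credentials_held.add("code-repo:RO")
    artifact = f"app:{commit[:7]}"  # build + test produce an image tag

    # 2. Push the image (needs write access to the image repo).
    credentials_held.add("image-repo:RW")

    # 3. Deploy to the cluster (needs write access to the cluster API).
    credentials_held.add("cluster:RW")

    return artifact, credentials_held

artifact, creds = ci_pipeline("4f2a9c1e8b")
print(artifact)        # app:4f2a9c1
print(sorted(creds))
```

Even in this simplified form, the pipeline ends up holding read-write credentials for both the image repo and the cluster — exactly the concentration of access examined below.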


Security By Design

The OWASP project, which offers guidance on producing secure applications by design, lists ten principles that should be applied when designing secure applications. I’ve highlighted a few to consider in the context of a CICD pipeline.

  • Minimize attack surface area
  • Establish secure defaults
  • Principle of least privilege
  • Principle of defense in depth
  • Fail securely
  • Don’t trust services
  • Separation of duties
  • Avoid security by obscurity
  • Keep security simple
  • Fix security issues correctly

Let’s look at the pipeline with these principles in mind, consider the credentials and access typically assigned, and then what’s actually needed for each step. (RW for Read Write access, and RO for Read Only access.)


Woah! There’s a lot of red ink there. It’s easy to see how a simple pipeline can violate several of the principles listed above. There’s an easy fix for some of that - by removing the developer’s direct access to the image repo and the cluster, the attack surface can be reduced, privileged access minimized, and duties separated.

Here’s an improved pipeline:


Necessary RW access is now marked in blue. The dotted lines indicate where we might consider the logical security boundaries to be, if they’re considered as separate duties. Defense in depth is improving as the need to cross those boundaries is reduced.

The CI system still looks like a pretty interesting target: it holds credentials for the source code, the image repo and the cluster, and it crosses two logical security boundaries. (If the CI system above is also maintaining the current state by updating YAML manifests, there’s another set of credentials here too.)

Is the CI system the most well secured piece of your infrastructure?

A better approach

The GitOps way to address this is by running a reconciliation operator in the cluster itself. It operates on a configuration git repo, with separate credentials. The operator reconciles the desired state, as expressed in the manifest files stored in the git repo, against the actual state of the cluster.
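A reconciliation loop can be sketched as follows — a minimal toy model with invented data structures, not the code any real operator (such as Flux) runs. Desired state stands in for parsed manifests from the config repo; actual state stands in for what the cluster reports:

```python
# Minimal sketch of a GitOps reconciliation loop. The operator runs inside
# the cluster, pulls desired state from the config repo, and converges the
# actual state toward it -- no external system needs cluster credentials.

def reconcile(desired, actual):
    """Return the operations needed to converge actual state to desired."""
    ops = []
    for name, spec in desired.items():
        if name not in actual:
            ops.append(("create", name, spec))
        elif actual[name] != spec:
            ops.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            ops.append(("delete", name, None))
    return ops

def apply(ops, actual):
    """Apply the computed operations to the (simulated) cluster state."""
    for op, name, spec in ops:
        if op == "delete":
            actual.pop(name)
        else:
            actual[name] = spec
    return actual

# Desired state, as it would be parsed from YAML manifests in git:
desired = {"web": {"image": "app:v2", "replicas": 3}}
# Actual cluster state, including something that shouldn't be there:
actual = {"web": {"image": "app:v1", "replicas": 3}, "debug-pod": {"image": "sh"}}
actual = apply(reconcile(desired, actual), actual)
```

Because the operator pulls changes rather than having them pushed in, the direction of trust is reversed: git never needs to reach into the cluster.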


There’s no credential leakage across the boundaries, and the CI system can operate in a different security ‘zone’ from the target cluster. Each pipeline component needs only a single RW credential. Now you can “keep your secrets close”, because the cluster credentials never leave the cluster itself.

Automating releases by writing them into git, and only applying changes once they’re recorded in git, ensures that the record of the cluster’s desired state doesn’t depend on the cluster itself. If the cluster is lost, it can be restored quickly from the independent record in the config git repo, without re-running build pipelines for the entire application - thus improving availability. The config repo has its own set of credentials, which adds another layer of defense.
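The recovery property follows directly: since the config repo holds the complete desired state, a fresh cluster can be rebuilt by replaying the manifests. A toy sketch (invented names, standing in for what a real operator does with `kubectl apply` per manifest):

```python
# Sketch of disaster recovery under GitOps: the config repo is the single
# source of truth, so rebuilding a cluster never requires re-running builds.

def restore(manifests):
    """Recreate cluster state directly from the config repo's manifests."""
    cluster = {}
    for name, spec in manifests.items():
        cluster[name] = spec  # stands in for applying one manifest
    return cluster

config_repo = {
    "web": {"image": "registry.example.com/app:v42", "replicas": 3},
    "worker": {"image": "registry.example.com/worker:v7", "replicas": 2},
}
new_cluster = restore(config_repo)
assert new_cluster == config_repo  # cluster matches the recorded desired state
```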

Developer-friendly workflows, reviews and pull requests are enabled on the config repo - which is independent of the cluster itself - so there’s a complete audit trail of every tag update and config change, whether it was made manually or automatically.

Finally - with a separate record of the desired state to compare against the actual state, it’s possible to alert when divergence occurs.
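Divergence detection is a straightforward comparison once both states are available. A minimal sketch, using the same toy state model as above (hypothetical names, not a real operator’s API):

```python
# Sketch of divergence detection: with desired state recorded independently
# in git, drift in the live cluster can be detected and alerted on instead
# of going unnoticed.

def diff_state(desired, actual):
    """Return a human-readable list of divergences between the two states."""
    alerts = []
    for name, spec in desired.items():
        if name not in actual:
            alerts.append(f"{name}: missing from cluster")
        elif actual[name] != spec:
            alerts.append(f"{name}: spec differs from git")
    for name in actual:
        if name not in desired:
            alerts.append(f"{name}: running but not recorded in git")
    return alerts

desired = {"web": {"image": "app:v2"}}
actual = {"web": {"image": "app:v1"}, "cryptominer": {"image": "xmrig"}}
for alert in diff_state(desired, actual):
    print("ALERT:", alert)
```

Note the third case: workloads running in the cluster with no record in git are exactly the kind of drift a push-based pipeline can never see.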

We can derive two assertions:

  1. CICD pipelines that require the cluster control API endpoint to be exposed to the internet and place sensitive cluster credentials in external CI systems are an anti-pattern.
  2. CI driven CD patterns that change the state of the cluster without recording the change are an anti-pattern.

Weave Cloud’s Deploy feature makes it easy to set up and automate a secure CD pipeline; sign up for a free 30-day trial to give it a try.

Further reading:
See the GitOps and Sealed Secrets post for more on how adopting GitOps can contribute improvements to your application security.