We write a lot about GitOps, for obvious reasons. But to understand what makes GitOps so revolutionary, you need to understand what came before it. You need to know how IT operations (the ‘Ops’ in GitOps) used to work – and how, in the age of the cloud, IT operations evolved. So in this post, we’re going to take a step back and explain how IT systems used to be managed – and what the arrival of GitOps has done to change that.

Let’s start at the beginning

IT operations – the discipline of deploying and monitoring applications and infrastructure – has existed for as long as IT itself. For as long as we’ve had servers and software, we’ve had operations teams. Until recently, their work remained much the same. The types of systems they managed changed over time, as did the tools they used. But by and large, the core processes, methodologies and philosophies remained constant, whether they involved mainframes, bare-metal servers or virtual machines.

Then came the cloud and containers

Over the past decade, all that has changed. Thanks to the convergence of trends including cloud computing, highly distributed architectures and containerized infrastructure, the field of IT operations has undergone a transformation.

That transformation involves not just new tools and technologies, but also new processes. A containerized, microservices-based application that is hosted across multiple clouds requires a fundamentally new approach. Traditional operations strategies simply won’t cut it.

The complexities of the cloud native era

Clearly, cloud native technologies bring a wealth of benefits to business. In principle, they make it easier to run applications that won’t fail if they are subject to a sudden spike in demand. They enable a business to host its apps anywhere – on a public cloud like AWS, for example – rather than only on servers in its own datacenters. And hybrid public/private cloud approaches can be used to cope with the requirements of regulatory compliance.

But the cloud era brought change – and with change, there is often complexity. The four big changes can be summarized as:

1. DevOps and DevSecOps

DevOps refers to the convergence of development and operations teams. In the past, developers would build an application, then an operations team would deploy and maintain it. This could lead to a lack of ownership – and finger-pointing when things went wrong.

By bringing development and ops tasks together, DevOps boosts productivity and, crucially, solves the ownership problem. The DevOps team owns the development of the application, its testing and its deployment – even support. Tune in to our podcast with Gene Kim as he discusses the foundations and future of DevOps.

DevSecOps is a more recent evolution that adds greater security focus when building and deploying applications. Against a backdrop of growing cyber-crime, it ensures that security is baked in from the outset. (We did a great webinar with our friends at Twistlock a little while back that covers DevSecOps and GitOps pipelines.)

2. CI/CD pipelines

Continuous Integration/Continuous Deployment (CI/CD) is a process that entails building, testing and deploying software updates far more frequently than in the pre-cloud era. By focusing on small but frequent changes, the scope of each update is limited, making bug fixes and rollbacks less challenging.

Testing in a CI/CD pipeline takes the form of an automated testing suite. Whenever we add to or modify our code, the pipeline automatically runs all unit tests, integration tests and performance tests, along with checks such as static code analysis, vulnerability scanning and code coverage. Only if the new build passes those tests can deployment take place. Learn more about CI/CD pipelines for Kubernetes.
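That gating logic can be sketched in a few lines of Python. This is a toy illustration, not any particular CI system; the check names and deploy step are stand-ins for real tools such as test runners and scanners:

```python
# Minimal sketch of a CI/CD gate: every check must pass before deployment.

def run_checks(checks):
    """Run each named check; return True only if all of them pass."""
    results = {name: check() for name, check in checks.items()}
    for name, passed in results.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(results.values())

def pipeline(checks, deploy):
    """Deploy only when the full suite of checks is green."""
    if run_checks(checks):
        return deploy()
    return "build rejected"

# Illustrative checks; a real pipeline would shell out to actual tools.
checks = {
    "unit tests": lambda: True,
    "integration tests": lambda: True,
    "vulnerability scan": lambda: True,
}
print(pipeline(checks, deploy=lambda: "deployed"))
```

The key property is that deployment is unreachable unless every check passes, which is what makes small, frequent changes safe to ship.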

3. Microservice architectures

Microservice architecture involves dividing an application into small components, each tasked with a specific business function. Take an e-commerce business, for example. When a customer pays for a product, they could do so with a credit card, gift card, offline bank transfer, or PayPal. Designing a separate microservice for each payment option increases reliability, as a bug in one microservice does not affect the others. When the responsible team resolves the bug and deploys a new build, they can do so without having to redeploy the entire application.
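The payment example could be modelled as a set of independently deployable services behind a simple dispatcher. This is a hypothetical sketch: in production each handler would be a separate deployment reached over the network, but here they are plain functions keyed by payment method:

```python
# Hypothetical sketch: each payment option is its own microservice,
# so a bug in (or redeploy of) one handler does not affect the others.

def pay_by_card(amount):
    return f"charged {amount} to credit card"

def pay_by_gift_card(amount):
    return f"deducted {amount} from gift card"

def pay_by_paypal(amount):
    return f"charged {amount} via PayPal"

# A routing table standing in for service discovery.
PAYMENT_SERVICES = {
    "card": pay_by_card,
    "gift_card": pay_by_gift_card,
    "paypal": pay_by_paypal,
}

def checkout(method, amount):
    handler = PAYMENT_SERVICES.get(method)
    if handler is None:
        return f"unsupported payment method: {method}"
    return handler(amount)
```

Because each handler is independent, replacing one (say, the PayPal handler) leaves the rest of the checkout flow untouched.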

Two related concepts worth mentioning here are domain-driven design and API-first development.

· Domain-driven design involves dividing your organization into groups based on shared functionality. Using the example above of an e-commerce organization, you might have a payment domain, a product catalog domain, and a user experience domain, among others. Each domain team is responsible for developing the services within its domain.

· API-first development means that work on a new service begins with an API contract. Especially in highly collaborative environments, an API contract establishes the means of communication between services. The development team builds its services based on the contract, and dependent teams can build their applications, knowing exactly how they can expect to communicate with the new service.
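In code, an API contract can be sketched as a typed schema that both the producing team and its consumers build against before the service logic exists. The field names and service here are invented purely for illustration:

```python
# Hypothetical API contract for a payment service, expressed as
# dataclasses: dependent teams can code against these shapes before
# the service itself is implemented.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    order_id: str
    amount_cents: int
    currency: str = "USD"

@dataclass
class PaymentResponse:
    order_id: str
    status: str  # e.g. "approved" or "declined"

def handle_payment(req: PaymentRequest) -> PaymentResponse:
    # Stub implementation: the contract, not the logic, is the point.
    status = "approved" if req.amount_cents > 0 else "declined"
    return PaymentResponse(order_id=req.order_id, status=status)
```

A consuming team can write and test its calling code against `PaymentRequest` and `PaymentResponse` while the real service is still under development.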

Here is an introduction to microservices that goes into a bit more depth.

4. Containerization

The explosion of container-based services in the past few years has made it far easier to build stable and scalable microservice-based applications. We can expect a microservice we have built and packaged as a container to behave the same on a local workstation or in a test or production environment. By using a container management system like Kubernetes, we can support highly available, fault-tolerant, and scalable systems with relative ease and confidence.
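As a concrete (and purely illustrative) example, packaging a small Python service as a container image might look like the hypothetical Dockerfile below; the base image tag and file names are assumptions, not a prescription:

```dockerfile
# Hypothetical Dockerfile for a small Python service.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The same image runs unchanged on a laptop, in test, or in production.
CMD ["python", "service.py"]
```

Because the image bundles the code with its dependencies, the service behaves the same wherever the container runs – which is what makes it a good unit of deployment for a system like Kubernetes.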

Enter GitOps

GitOps is a model for managing infrastructure and delivering code changes for cloud-native applications. It works by using Git as the single source of truth for both the application and the infrastructure. As changes are made to the code stored in the Git repositories, CI/CD pipelines roll out changes to the infrastructure. Software agents continually compare the state of the infrastructure as defined in Git (the single source of truth) with that of the production environment, and converge the environment toward the desired state whenever the two drift apart.
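The behaviour of those agents can be sketched as a simple control loop. This is a toy model under stated assumptions (desired and actual state as plain dictionaries), not any specific agent's implementation – real GitOps agents operate on declarative resource definitions such as Kubernetes objects:

```python
# Toy model of a GitOps reconciliation loop: desired state lives in Git,
# and an agent diffs it against the live environment, converging the
# environment toward the desired state.

def reconcile(desired, actual):
    """Mutate `actual` to match `desired`; return the actions taken."""
    actions = []
    # Create or update anything that is missing or has drifted.
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions.append(f"apply {name} -> {spec}")
            actual[name] = spec
    # Remove anything running that is no longer declared in Git.
    for name in list(actual):
        if name not in desired:
            actions.append(f"delete {name}")
            del actual[name]
    return actions

# Desired state as declared in Git vs. what is actually running.
desired = {"web": {"replicas": 3}, "worker": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "legacy-job": {"replicas": 1}}

for action in reconcile(desired, actual):
    print(action)
# After reconciliation, actual matches desired.
```

Running the loop repeatedly is what gives GitOps its self-healing quality: any manual change to the environment is detected as drift and reverted on the next pass.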


Key benefits of GitOps

  • Velocity: When everything – both infrastructure code and application code – is held in version control, developers can move faster without fear of causing serious outages. And because Git is a familiar tool for most developers, there is little in the way of a learning curve.
  • Stability: Using source control means infrastructure code is constantly kept up to date as it evolves. As well as enabling the seamless deployment of modifications, it also makes it easy to roll back problematic changes in the event of a failure. Simply redeploy the last stable version of your infrastructure and you are back up and running. This means lower mean time to recovery (MTTR) and, as a result, less interruption to customers.
  • Reproducible deployments: Through the use of declarative configuration, it becomes straightforward to define the infrastructure in a succinct and readable manner. Declarative infrastructure configuration enables an organization to reproduce the same infrastructure over and over in order to scale, or to apply the same infrastructure configuration changes across an entire cluster.
  • Security and compliance: Whenever anything changes – whether it’s a change to application code or a change to infrastructure code – the change is recorded. Role-Based Access Control (RBAC) governs who is allowed to change what, and Git’s commit history records exactly who changed what and when. Audit trails are generated automatically.

Visit our guide to GitOps for detailed information and examples.

Putting GitOps to work

To learn more about how GitOps could help with digital transformation in your organization, download a copy of our white paper: 6 Reasons to Start the Cloud Native Transformation.

To start a conversation with the GitOps experts, contact us today.