Cloud complexity sets in when you thought you had your cloud migration under control, but now find yourself drowning in APIs. Most people expect a migration to be straightforward: new cloud technology replaces your existing infrastructure and resources. In reality, the old stuff doesn’t just disappear. What you often end up with is more than twice as many moving parts.
One cause of cloud complexity today – in addition to the more obvious mistake of mixing too many APIs in one application – is reactionary behavior. You can’t always rely on the knowledge, expertise, and tools that served you well in traditional environments, nor can you “react” to problems the way you previously did.
Deploying to the cloud requires planning, as well as an understanding of the implications that come with the change. Kubernetes and other cloud native technologies come with a learning curve, but implemented correctly, Kubernetes should reduce complexity, not add to it. In this post, we’ll consider the key Kubernetes concepts you need to know while making the transition.
The art of abstraction with declarative infrastructure
To fully understand how Kubernetes can help with complexity, it’s important to first note the need for application portability. Today this need is mission-critical, since applications are expected to run uniformly across devices, screen sizes, operating systems, public clouds, and even hybrid environments. Infrastructure abstraction decouples containerized applications from the underlying infrastructure so they run uniformly, irrespective of location.
Almost everything in Kubernetes uses declarative constructs that describe how applications are composed, how they interact, and how they are managed. This substantially increases the operability and portability of applications. Most significantly, developers and operators can take advantage of it to automate cluster management with GitOps workflows, ultimately increasing both the velocity and reliability of feature deployments.
GitOps automates cluster and application management
Kubernetes’ declarative architecture is managed via YAML files that specify how to operate the cluster and how to deploy its applications. These YAML files can be stored and managed alongside your application code in a version control system like Git.
With all configuration files kept alongside your code and versioned in Git, you have a canonical source of truth that can be managed with GitOps through pull requests. Software agents like Flux CD that run inside the cluster can then compare the running cluster with what’s kept in Git. If there is a difference between the two, an alert is sent to your team indicating exactly where and when the drift occurred.
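As a sketch of what such a declarative file looks like, here is a minimal Deployment manifest that could live in the repository; the service name and image are hypothetical placeholders:

```yaml
# Hypothetical Deployment manifest stored in Git alongside the app code.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo                # example name; substitute your own service
spec:
  replicas: 3                  # desired state: Kubernetes converges the cluster to it
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
      - name: podinfo
        image: ghcr.io/example/podinfo:1.0.0   # image tag an agent like Flux CD can update
        ports:
        - containerPort: 9898
```

Because the file describes the desired end state rather than the steps to get there, any drift between it and the cluster is detectable and correctable.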
How a GitOps pipeline works
A typical GitOps pipeline for making changes works like this:
- A developer makes a change to their code and pushes it to Git.
- The change goes through the CI system; if all tests pass, a container image is built and pushed to a container registry.
- The software agent, Flux CD, notices the newly built image, updates the associated manifest for the container, and checks it back into Git.
- Flux CD detects a difference between the source of truth in Git and the state of the running cluster.
- A rolling update to the cluster is automatically initiated and the change is deployed.
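To make the agent step above concrete, here is one way to point Flux CD at a repository of manifests (Flux v2 style custom resources; the repository URL and paths are placeholders):

```yaml
# Hypothetical Flux configuration: watch a Git repo and reconcile the cluster to it.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m                                 # how often Flux polls Git
  url: https://github.com/example/app-config   # placeholder repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m                 # reconcile loop: compare cluster state vs Git
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy               # directory of manifests within the repo
  prune: true                  # delete cluster objects that were removed from Git
```

With this in place, merging a pull request is the deployment; no one needs to run kubectl by hand.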
GitOps simplifies the operation and development of Kubernetes in the following ways:
- Any developer who uses Git can start deploying new features to Kubernetes
- The same workflows are maintained across development and operations
- All changes can be triggered, stored, validated and audited in Git
- Ops changes can be made by pull request, including rollbacks
- Ops changes can be observed and monitored
Having everything in one place means that your operations team can use the same workflows to make infrastructure changes: creating issues and reviewing pull requests. GitOps allows you to roll back any kind of change made in your cluster. In addition, you get built-in auditing and observability, and full disaster recovery in the case of a cluster meltdown.
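The rollback point deserves a concrete sketch. In a GitOps setup, reverting the commit *is* the rollback; the in-cluster agent then converges the cluster back to the previous state. The following self-contained example (a throwaway repo with a placeholder manifest and identity) shows the Git side of that operation:

```shell
# Sketch: rolling back an operational change with plain Git.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ops@example.com    # placeholder identity
git config user.name ops
echo "replicas: 3" > deployment.yaml     # original desired state
git add . && git commit -qm "scale to 3"
echo "replicas: 10" > deployment.yaml    # an ops change made via Git
git commit -qam "scale to 10"
git revert -n HEAD                       # undo the last change...
git commit -qm "rollback: scale to 10"   # ...as a new, auditable commit
cat deployment.yaml                      # back to the previous desired state
```

Note that the revert produces a new commit rather than rewriting history, so the rollback itself is recorded and auditable.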
Implement best practice container design patterns for Kubernetes
The key to simplifying Kubernetes in your organization is a well-thought-out application architecture. The immutability of containers means that you can take advantage of a number of container design patterns that not only save development time but, used properly, make your applications more fault-tolerant and reliable. Reusability is the operative word here, since it allows development teams to incorporate well-designed architectural components without reinventing the wheel. Applications can also share common resource containers, which translates to better efficiency and simpler design.
Consider, for example, a Pod in which one container is a web server for files kept in a shared volume, while a sidecar container updates those files from a remote source. The two processes are tightly coupled, share both network and storage, and are therefore suited to being placed within a single Pod.
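This sidecar pattern can be sketched as a single Pod spec; the container images below are illustrative placeholders, not a specific recommendation:

```yaml
# Sketch of the sidecar pattern: web server + content-sync sharing one volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sync
spec:
  volumes:
  - name: content
    emptyDir: {}                         # shared volume for both containers
  containers:
  - name: web                            # serves files from the shared volume
    image: nginx:1.25
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
  - name: sync                           # sidecar: refreshes content from a remote source
    image: example/content-sync:latest   # placeholder sync image
    volumeMounts:
    - name: content
      mountPath: /data
```

Because both containers live in one Pod, they are scheduled together and can communicate over localhost, while each remains independently buildable and reusable.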
Good container design patterns reduce complexity and provide the scalability required to meet modern-day demands. With well-established Kubernetes design patterns, repeated patching and hot-fixing of components can be eliminated. And instead of multiple teams building the same solution, a team can build once, then test and deploy as many times as it likes.
Kubernetes does help tame cloud complexity through its ability to self-heal and scale to meet the demands of your application without your intervention. The immutability of containers, coupled with Kubernetes’ declarative design, provides a convenient layer of abstraction and helps your team fully automate complex workflows. Since Kubernetes is open source, it is complemented by many tools from the cloud native ecosystem, as well as solutions like Helm, Flagger, eksctl, and wksctl, that allow you to completely customize your development platform. Most importantly, if you plan with the idea of incorporating developer-driven workflows like GitOps that take the entire stack into account, you will reap the rewards of a faster, more agile development team.
Just starting out with Kubernetes?
Excited about the opportunity of cloud native and GitOps but not sure how to set your organization on the path to success? Whether you’re using Kubernetes in the cloud or behind the firewall, your team needs battle-tested architectures, workflows, and skills to operate Kubernetes reliably. Our QuickStart program is a comprehensive package for designing, building, and operating Kubernetes in production using the GitOps methodology. Contact us for more information.