GigaOm - Voices in DevOps with Steve George
Read a recap of Steve George’s appearance on the GigaOm podcast Voices in DevOps, and learn how cloud native can help your team ship faster and more reliably.
A few weeks ago, Steve George, COO of Weaveworks, appeared on GigaOm - Voices in DevOps, hosted by Jon Collins. They discussed Kubernetes and how building applications the cloud native way increases your team's velocity.
Weaveworks and Kubernetes
Weaveworks was founded at exactly the same time Google launched Kubernetes. According to Steve, in those early days most of the focus and attention was on containers, and on Docker in particular. The conversation then naturally leaned toward developers and cluster operators using the same workflow, which is at the heart of the DevOps movement. As time progressed and Kubernetes became the orchestration technology of choice, an entire ecosystem of cloud native technologies blossomed that helps organizations apply DevOps best practices to building cloud native applications.
What do we mean by Cloud Native?
The term cloud native was popularized by the Cloud Native Computing Foundation (CNCF). The purpose of the CNCF is to bring together and 'vet' the technologies that can best help organizations develop their applications in a ‘cloud first’ way. Kubernetes, the first project donated to the CNCF, is of course its cornerstone. A number of other important technologies are used alongside Kubernetes, such as network overlays for Docker containers and other core components needed to create a complete platform on which your team can successfully develop cloud native applications. Cloud native ultimately allows your team to increase both velocity and scalability.
For teams to operate applications at scale in a production environment, there are best practices to follow for managing and developing containers. To fully adopt cloud native technologies such as Kubernetes, an important step is to change your application's architecture. To take full advantage of this dynamic way of developing and deploying changes, organizations need to shift away from the monolith and instead run their applications as a set of microservices, going beyond the 12-factor applications of the past.
There are many new DevOps strategies that embrace automation and observability, providing real-time insights for reliability engineering and feedback to engineering for maximum feature delivery velocity. Cloud native is a completely new way of working from both an application technology and an operations perspective.
What do enterprises struggle with?
The biggest challenge for enterprises adopting these new technologies is implementing this new way of working in their organization. It’s a major change from the past, and most organizations spend considerable time figuring out which technologies and methodologies to change and which to keep.
Common questions that many have include:
- Which public cloud should I use for this way of working?
- What is the right way to set up my CI/CD process?
- Should I use Software Defined Networking?
There are many decisions to make in the cloud native transition: how to host, whether in the public cloud or on-premises; which architectural and design strategies to adopt; and how to manage and maintain control over the specific technologies used to build a Kubernetes platform.
Operating Kubernetes with GitOps
We coined the term GitOps, an operating model for building and operating applications and Kubernetes in a cloud native way. Kubernetes operates on a declarative, model-driven approach: you can, for example, tell Kubernetes how many instances of a particular application to run, and it works to make that happen. But configuring your applications for Kubernetes can be opaque and can get out of hand without proper control.
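The model-driven approach can be sketched as a simple control loop: you declare a desired instance count, and a controller converges the observed state toward it. This is a minimal illustration in Python, not a real Kubernetes API; all names here are hypothetical.

```python
# Illustrative sketch of Kubernetes' declarative model: you state a
# desired number of replicas, and a control loop converges the running
# instances toward that count. Names are hypothetical, not a real API.

def reconcile(desired_replicas: int, running: list) -> list:
    """Converge the list of running instances toward the desired count."""
    running = list(running)
    while len(running) < desired_replicas:
        running.append(f"pod-{len(running)}")  # start a missing instance
    while len(running) > desired_replicas:
        running.pop()                          # stop a surplus instance
    return running

state = reconcile(3, ["pod-0"])  # scale up -> ["pod-0", "pod-1", "pod-2"]
state = reconcile(2, state)      # scale down -> ["pod-0", "pod-1"]
```

The point of the sketch is that you never issue imperative "start" or "stop" commands; you only change the desired count, and the loop does the rest.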
GitOps brings configuration management together with simple auditing, so that you know exactly what you asked the system to do, when you asked it, and who asked for it. With full observability of the running cluster and alerting when something changes, you always have a known good state that can be rolled back to in case of failure. This is possible because the running cluster is continuously compared with the baseline system state version-controlled in Git, monitoring for differences and acting on change when necessary.
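The comparison described above can be sketched as a drift check: the desired state lives in Git, the observed state comes from the cluster, and any difference is surfaced so it can be corrected or rolled back. This is a hedged Python illustration; the function and field names are invented for the example and are not Weaveworks' actual tooling.

```python
# Hypothetical sketch of the GitOps drift check: compare the desired
# state stored in Git against the state observed in the cluster, and
# report every field that has drifted.

def diff_states(desired: dict, observed: dict) -> dict:
    """Return the fields where the cluster has drifted from Git."""
    drift = {}
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            drift[key] = {"desired": want, "observed": have}
    return drift

desired_in_git = {"image": "app:v1.2", "replicas": 3}
live_cluster   = {"image": "app:v1.3", "replicas": 3}  # changed by hand

drift = diff_states(desired_in_git, live_cluster)
# drift == {"image": {"desired": "app:v1.2", "observed": "app:v1.3"}}
```

Because the Git history records who changed the desired state and when, every detected drift can be traced back to an auditable commit, and rolling back is just reverting to an earlier commit.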
Watch the entire podcast here: