The use of Kubernetes across the enterprise has exploded over the past few years. We can now find Kubernetes everywhere, from cruise ships to telecommunication towers to military installations. While EKS dramatically reduces the complexity of deploying and operating production-ready Kubernetes clusters in the cloud, running self-managed clusters on premises has been a challenging experience for most infrastructure and operations teams.
At the same time, developers are increasingly looking to take advantage of the huge potential that hybrid Kubernetes offers to deliver software to their users faster. Development teams strive for the autonomy to release changes quickly, without worrying about the myriad details that must be considered to guarantee a secure and compliant operating environment.
Multiple factors are driving this new hybrid world:
Global compliance: The number of regulations and compliance frameworks that global organizations must satisfy has increased dramatically over the past few years. This is pushing organizations toward architectures where data and operations are kept within specific geographical boundaries, a requirement commonly referred to as data and operational sovereignty.
Processing close to the source: 5G and IoT are also moving workloads out of the cloud and closer to the user. Processing data close to the source unlocks a whole new array of capabilities that development teams and enterprises are eager to exploit, thanks to the very low latency this scenario provides.
There are numerous other architectural requirements that promote the growing number of hybrid deployments in the enterprise, from latency around data access to integration requirements with legacy applications.
Operating at fleet scale
This explosion, both in cluster count and in location diversity, means that operations and infrastructure teams have gone from managing dozens of clusters to operating thousands across many different locations at once, and the number will only keep growing. Managing the lifecycle of all these clusters is no easy task: maintaining visibility into services, workloads, and the associated security and governance controls across a distributed fleet of clusters necessitates an efficient, centralized platform.
An operational model and infrastructure foundation that can scale seamlessly across any number of clusters, and provide a clear view into the full management lifecycle, becomes indispensable. GitOps and EKS-Anywhere satisfy this growing need.
What about developers?
While operations and infrastructure teams are looking to simplify the effort required to manage this diverse new world, development teams and DevOps-focused organizations aim to increase productivity by promoting autonomy and self-service for their teams.
Reducing the number of hand-offs across teams is critical to decreasing lead times and maintaining a strategic advantage. Developers are eager to ship faster, but they want to do so with the peace of mind that security and best practices for production-ready clusters are already handled.
The Hybrid Shared Services Platform
To address all these challenges, organizations are quickly adopting a concept known as the Shared Services Platform (SSP). With a Hybrid SSP, organizations can centrally provision and operate clusters at any scale and in any location, while providing development teams with secure, compliant self-service functionality and CI/CD capabilities for transparently promoting releases from the cloud to on-premises environments.
With a shared services platform, operators are able to manage and deliver functionality to multiple teams, efficiently allocating resources and reducing operational complexity. Development teams can then focus on writing code and deploying their solutions to environments where all the capabilities that they require are readily available and guaranteed to follow production best practices.
The SSP model also simplifies the implementation of multi-tenancy: larger clusters where multiple teams can independently deploy their workloads with security and compliance built-in.
The result: simplicity and acceleration no matter where your workloads are running.
EKS, EKS-Anywhere and Weave GitOps: the keys to SSP
EKS is already one of the most widely used managed Kubernetes services. With the recently launched EKS-Anywhere solution from AWS, organizations can now get the benefit of running the same Kubernetes distribution on premises as they run in the AWS cloud, including built-in production-ready best practices on top of a container-optimized OS.
EKS-Anywhere, with its native Flux and eksctl integrations, provides an intuitive path for organizations to dramatically reduce the complexity of deploying and managing their on-premises clusters alongside their cloud-hosted infrastructure.
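As a rough sketch of what this looks like in practice, an EKS-Anywhere cluster is described declaratively in a spec that `eksctl anywhere create cluster -f cluster.yaml` consumes. The exact fields depend on your provider and EKS-Anywhere version; the names below (such as `dev-cluster`) are placeholders, not values from this article:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: dev-cluster              # placeholder cluster name
spec:
  controlPlaneConfiguration:
    count: 3                     # HA control plane
  workerNodeGroupConfigurations:
    - name: md-0
      count: 3                   # worker node pool size
  datacenterRef:                 # points at provider-specific config,
    kind: VSphereDatacenterConfig  # e.g. vSphere in this sketch
    name: dev-cluster
```

Because the spec is plain YAML, it can be committed to Git alongside the rest of the cluster's desired state, which is what makes the GitOps workflow described here possible.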
Adding Weave GitOps into the mix, end users can now operate all clusters centrally, with Git as the source of truth for their desired state, while giving development teams the safe, secure autonomy to deploy across their hybrid, multi-stage environments.
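To illustrate the Git-as-source-of-truth model (a minimal sketch assuming Flux v2; the repository URL and paths are hypothetical placeholders), each cluster runs Flux resources that continuously reconcile it against a Git repository:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: platform-config          # placeholder repo name
  namespace: flux-system
spec:
  interval: 1m                   # how often to poll Git
  url: https://github.com/example-org/platform-config  # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./clusters/dev           # per-cluster path within the repo
  prune: true                    # remove resources deleted from Git
```

Promoting a release from one environment to the next then becomes a Git operation (a commit or merge into the target cluster's path), which is what gives development teams self-service deployments while operators keep an auditable, centralized record.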
How to get started
Applying the principles of the GitOps operating model at scale can be accelerated by working with the right partners. Weaveworks and AWS work closely together to support enterprises who are adopting GitOps and applying operational best practices to their Kubernetes architecture. Learn more about how we can accelerate your journey to Amazon EKS-Anywhere with GitOps.
If you would like to learn more about the concept of a Hybrid Shared Services Platform (SSP) managed with GitOps as the operating model, join one of our free webinars and workshops!