Kubernetes On-Premise
Further Reading
- Multi-cloud Strategies with Kubernetes and GitOps (whitepaper)
- Hybrid & Multi-cloud Strategies for Kubernetes with GitOps (whitepaper)
- Best Practices for Hybrid Cloud Kubernetes with EKS and Weave GitOps (whitepaper)

FROM THE BLOG
- GitOps for On-Premise - What to Keep in Mind
- Liquid Metal is Here: Supported, Multi-Cluster Kubernetes on micro-VMs and Bare Metal
- Liquid Metal: Kubernetes on Bare Metal with MicroVMs
- GitOps Beyond Kubernetes: Liquid Metal and EKS Anywhere Manage Hybrid Infrastructure Effortlessly
- The New GitOps Extension on AKS and Azure Arc Enables Trusted Delivery and Control
- Weaveworks Brings GitOps to Amazon EKS Distro
- Weaveworks & AWS: Best Practices for Hybrid Cloud Kubernetes with EKS and Weave GitOps
- Application Portability for the Cloud Era
An organization’s goals and requirements are paramount when determining the best Kubernetes architecture for it. Though often described as a cloud-native technology, Kubernetes can also run in an on-premise setup. Depending on workload overheads and compliance concerns, Kubernetes deployments running on-premises can offer numerous benefits.
Kubernetes goes a long way toward simplifying the deployment and management of microservices, which is one of the foremost reasons for its unprecedented adoption. Just as importantly, it lets users who won’t use the public cloud operate in an environment that’s very cloud-like: it decouples applications from the underlying infrastructure and abstracts that infrastructure away, so users can scale just like any cloud-native application.
Why is it a good idea to run Kubernetes on-premise?
Kubernetes is easy to run on the public cloud, so why do some organizations go through the trouble of running it in their own data centers? Despite the industry-wide shift toward cloud-native, many organizations are still not fully on board with migrating, even as some have adopted hybrid cloud infrastructure. Here are a few reasons why an on-premises Kubernetes strategy is a good idea.
Low latency requirements
Latency is a big reason many organizations need to run Kubernetes on-premises. Often the compute capacity for an application needs to be close to the business locale: for example, 5G radio antennas with very short range, or retailers whose compute must run onsite for operational speed.
Edge use cases
When we hear on-prem, we mostly think of a physical data center. However, a new frontier of on-prem is edge computing: for example, smart buildings that manage air conditioning, escalators, lifts, and more. These environments need to run with great reliability even if the connection to the cloud or main servers is interrupted.
Figure: Edge computing as on-prem infrastructure
Data gravity
For many real-world applications, data must be processed in real time and acted upon as close to the source as possible; this is the idea behind data gravity. For example, a city’s public cameras that monitor for threats cannot afford to upload video streams to the cloud and process them there. The processing needs to happen close to the source so that a response can be timely when necessary.
Data privacy and compliance
Data privacy and stringent regulations are among the leading factors preventing organizations from adopting the public cloud. For instance, GDPR compliance can restrict where customer data for the European region may be stored and processed, ruling out many public cloud configurations.
Geography and business policies
Some businesses are required to operate from specific geographic locations, which rules out the public cloud. In other cases, business policy may prevent a company from using a given cloud provider’s services.
Cost overheads
Cost may be the single most important reason for choosing to run Kubernetes on-premises. At scale, running all applications on public clouds can get quite expensive, especially if those applications are data intensive. In such scenarios, on-premise data centers are an operationally sound solution.
Featured whitepaper: Multi-Cloud Strategies with Kubernetes and GitOps >
Challenges running Kubernetes on-premise
Running Kubernetes on-premise may solve various problems involving data privacy and overhead costs, but it also brings its own set of challenges. For instance, Kubernetes is known for its steep learning curve, and organizations will need experts to undertake large-scale projects. Since the technology is fairly new, finding capable administrators can be tough, and without them the whole venture becomes far more tedious. More specifically, here are the areas that are difficult to manage:
Figure: Challenges with Kubernetes on-prem
Auto-scaling
Auto-scaling of the nodes in your cluster is a necessity, as it helps save resources. Clusters can expand and contract automatically with the workload, but this is difficult to achieve on bare metal Kubernetes clusters.
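To make the proportional-scaling idea concrete, here is a minimal Python sketch of the rule the Kubernetes Horizontal Pod Autoscaler applies at the pod level (the cluster autoscaler then adds or removes nodes when pods cannot be scheduled). The numbers are purely illustrative:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Proportional scaling rule used by the Horizontal Pod Autoscaler:
    scale the replica count by how far the observed metric is from its
    target, rounding up."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# If 4 pods average 90% CPU against a 60% target, scale out to 6 pods.
print(desired_replicas(4, 90, 60))  # -> 6
```

On bare metal, the hard part is not this arithmetic but the node side: there is no cloud API to request a new machine, so the equivalent of "add a node" has to be solved with tooling of your own.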
Persistent storage
A huge chunk of production workloads running on Kubernetes depends on block or file storage. Your organization will need to work with its storage vendors to obtain CSI plugins and the other components and integrations Kubernetes requires.
High availability
The Kubernetes infrastructure needs to be resilient to data center and infrastructure downtime. This requires multiple master nodes per cluster and, when necessary, multiple Kubernetes clusters across availability zones.
Monitoring
Running Kubernetes on bare metal requires you to invest in peripheral tooling to monitor and analyze the health of your clusters. Most log management and monitoring tools on the market come with the hefty hidden cost of running and managing the observability stack, and because they are geared towards cloud-native greenfield applications, they are not easy to retrofit onto an on-premise stack.
Etcd
To ensure business continuity, organizations require highly available etcd clusters. The catch is that this adds to the hardware required, driving up cost and management overhead.
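The hardware overhead follows directly from quorum arithmetic: an etcd cluster (like any Raft-based system) stays writable only while a majority of members are up, which is why odd member counts are used. A small sketch of that arithmetic:

```python
def etcd_fault_tolerance(members: int) -> tuple[int, int]:
    """Return (quorum, tolerated_failures) for an etcd cluster.
    Quorum is a strict majority; the remainder is the failure budget."""
    quorum = members // 2 + 1
    return quorum, members - quorum

for n in (1, 3, 4, 5):
    quorum, tolerated = etcd_fault_tolerance(n)
    print(f"{n} members: quorum {quorum}, tolerates {tolerated} failure(s)")
```

Note that a 4-member cluster tolerates no more failures than a 3-member one (one either way), so the extra machine buys nothing; this is why the usual recommendation is three or five etcd nodes.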
Load balancing
Both the applications running on Kubernetes and the cluster master nodes need load balancing. Multiple options are available depending on your existing network setup; the challenge lies in the configuration, management, and availability of the load balancers themselves.
Best practices for running Kubernetes on-premise
Here we list some of the best practices you can follow while implementing Kubernetes on-premises.
- Staffing your team with the right people is a challenge, yet highly recommended. The CNCF offers certifications like Certified Kubernetes Application Developer (CKAD) and Certified Kubernetes Administrator (CKA) that serve as great yardsticks for assessing a candidate’s Kubernetes skills.
- Avoid creating snowflake applications and services. Whether a traditional on-prem or an edge application, it should be considered an integral part of the overall cloud-native platform.
- Use SSDs to avoid any performance issues and to keep up with the speed at which etcd writes to the disk.
- Prioritize a dynamic load balancer that can adjust as your cluster grows and changes. A dedicated HAProxy or NGINX load balancer node is a good addition to production environments. Newer options include Emissary-ingress, a CNCF project.
- Have at least nine servers as a bare minimum to run Kubernetes on-premises: three for etcd, three master nodes, and three worker nodes hosting the kubelet.
- Make use of virtual machines wherever necessary. They can be used on your existing VMware vSphere environment or on other types of IaaS on-premises environments such as OpenStack. SDNs can be used to enforce secure and isolated sub-networks.
- Have your own repositories in place for Docker and Kubernetes in case you are deploying offline or in an air-gapped environment. This includes binary repositories as well as helm chart repositories for Kubernetes manifests. The latest version of Helm supports chart retrieval from OCI registries. With this option, you will need 3 registry servers as a bare minimum for high availability.
- To limit the blast radius of faults, start with smaller clusters. Converge them into bigger clusters later, once your monitoring solution is in place.
- Keep the OS and all the drivers up to date.
“Our application teams have more or less all the basics in their hands to start deploying and running their application in our very specific environment. We are running everything on-prem. We run the workloads also traditionally in many locations. We have a handful of what we call core locations, 20 to 25, then there are near edge locations and there are more than 10,000 edge locations.” Vuk Gojnic, Deutsche Telekom
Listen to the full podcast episode “Kubernetes at Deutsche Telekom - GitOps at the Edge”
Liquid Metal for supporting multi-cluster Kubernetes on bare metal with microVMs
What is Liquid Metal?
Liquid Metal is a GitOps-enabled Cluster-as-a-Service (CaaS) platform that simplifies scaling Kubernetes clusters across multiple environments. It was originally built to help telecom companies like Deutsche Telekom deliver 5G services at the edge, but it can now provision Kubernetes clusters dynamically across bare metal and virtualized infrastructure alike.
Figure: Liquid Metal
To operate at scale, many organizations now need the ability to provision Kubernetes clusters dynamically across numerous platforms and environments, including bare metal hardware, virtualized infrastructure, and their chosen public clouds. But virtualizing multiple Kubernetes clusters has traditionally required heavyweight (read: expensive) virtualization approaches. As an alternative, Liquid Metal lets you use lightweight microVMs delivered by Firecracker and Cloud Hypervisor. This drives the cost down, and there are numerous other advantages to adopting Liquid Metal.
Featured Blog: Liquid Metal is Here: Supported, Multi-Cluster Kubernetes on micro-VMs and Bare Metal
Hybrid cloud management in Azure and AWS
Hybrid cloud management brings together the best of public and private clouds as well as on-premises data centers. The two top cloud vendors are well aware of the importance of hybrid cloud, and have made major advancements in this area in recent years. Azure has put tremendous focus on its Azure Arc solution, and AWS on its EKS Anywhere. Let’s look at how these cloud vendor services enable hybrid cloud and what it means for Kubernetes.
Azure Kubernetes Service (AKS)
The Azure Kubernetes Service (AKS) from Microsoft offers a simple and quick way to start developing and deploying Kubernetes-powered cloud-native applications. As a hybrid cloud offering, AKS also offers unified management and governance for all on-premises, edge, and multi-cloud Kubernetes clusters, not to mention the migration services, cost management, and security that come under the Azure umbrella.
AKS is a one-stop solution for developing and debugging microservice applications with Kubernetes extensions for Microsoft Visual Studio and Visual Studio Code. Simply adding a CI/CD pipeline through GitHub Actions gives the user the freedom to set up a test deployment strategy and observe the environment for anomalies. The user also gets other nifty features such as Kubernetes resources view, control-plane telemetry, insight into container health, and log aggregation.
Azure Arc
Because many Azure customers routinely run workloads outside the Azure cloud, that is, on-prem, Azure needed a way to manage both Azure cloud and on-prem resources from within the Azure platform itself. Azure Arc was the answer.

Figure: Azure Arc (Source: ThomasMaurer.ch)
Azure Arc-enabled Kubernetes allows users to connect Kubernetes clusters to Azure, extending Azure’s management capabilities such as Azure Resource Graph, Azure Policy, and Azure Monitor. With it, customers can easily attach and configure Kubernetes clusters both inside and outside of Azure and deploy modern applications at scale. Users can connect Kubernetes clusters running on other public cloud providers or in their own data centers to Azure Arc and manage app deployments, GitOps-based configurations, governance, monitoring, and even threat protection.
GitOps with AKS and Azure Arc
While Azure Arc brought better monitoring and policy definitions for hybrid workloads, there was still a need to go further and integrate configuration and application delivery across hybrid systems.
GitOps has proven itself as the optimal operating model for consistently managing configuration and application delivery to Kubernetes at scale. It comes as no surprise, then, that Microsoft looked to Weaveworks for help integrating GitOps capabilities into AKS and Arc-enabled Kubernetes clusters using Flux, part of the Weave GitOps toolkit. Flux is a secure and reliable toolkit for managing and deploying declarative configuration to your clusters, and it strictly adheres to the OpenGitOps principles.
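To make the declarative model concrete, here is a sketch (built in Python purely for illustration) of the kind of Flux Kustomization object that tells Flux which Git path to reconcile into a cluster. The repository name and path are hypothetical:

```python
import json

# Sketch of a Flux Kustomization: the declarative unit Flux watches
# and continuously applies from Git to the cluster. Names and the
# Git path are hypothetical placeholders.
kustomization = {
    "apiVersion": "kustomize.toolkit.fluxcd.io/v1",
    "kind": "Kustomization",
    "metadata": {"name": "apps", "namespace": "flux-system"},
    "spec": {
        "interval": "10m",               # how often Flux re-reconciles
        "path": "./clusters/on-prem",    # directory in the Git repo
        "prune": True,                   # remove resources deleted from Git
        "sourceRef": {"kind": "GitRepository", "name": "flux-system"},
    },
}
print(json.dumps(kustomization, indent=2))
```

The key property for hybrid environments is that the same Git repository can drive clusters in Azure, AWS, and on-prem alike: the cluster pulls its desired state rather than being pushed to.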
Similarly, AWS has its hybrid offering named EKS Anywhere.
Featured Blog: The New GitOps Extension on AKS and Azure Arc Enables Trusted Delivery and Control
Amazon Elastic Kubernetes Service (EKS)
For those who don’t wish to manage every aspect of Kubernetes themselves, Amazon Elastic Kubernetes Service (EKS) is a great option. It’s one of the most trusted platforms for starting, running, and scaling Kubernetes applications in the cloud or on-premises. As a hosted service it takes care of most of the heavy lifting of manual configuration, letting you run Kubernetes on AWS with a managed Kubernetes control plane, highly available clusters, automated version upgrades, and out-of-the-box integration with AWS services such as PrivateLink, VPC, IAM, ELB, CloudWatch, and CloudTrail.
A few years ago, AWS announced Amazon EKS Distro (EKS-D), a Kubernetes distribution based on and used by Amazon Elastic Kubernetes Service, in a bid to make secure and reliable Kubernetes clusters more widely available. With EKS-D, you can keep using the same versions of Kubernetes and its dependencies deployed by EKS, which means you can deploy clusters yourself without worrying about updates and dependencies. This includes extended security patching support, among other upgrades over time.
Weave GitOps Enterprise brings EKS Distro and GitOps together, and provides the necessary support for creating, installing, and managing EKS-D clusters on-premises. Like any other distribution of Kubernetes, EKS-D needs configurations, upgrades, and all the additional peripherals for logging, tracing, and monitoring metrics.
Figure: AWS EKS Anywhere (Source: AWS)
There’s also EKS Anywhere, which lets organizations extend their AWS-based infrastructure into their on-premises data centers for a seamless hybrid cloud. For a long time the main roadblock was cost, since an expensive virtualization layer was required; now, with Liquid Metal, there is the option of economical, lightweight virtualization through Firecracker that integrates easily with EKS-A tooling.

Accelerate your EKS Adoption with GitOps!
AWS and Weaveworks partner on technical advancements, especially for the EKS product suite. If you want to accelerate your EKS adoption and automation with GitOps, we can get you to a well-architected platform in just a few weeks.
Our accelerator package will support onboarding applications quickly and managing them effortlessly. It also includes a built-in Terraform Controller and supports EKS Blueprints (codified reference architectures).
Contact us for a no cost evaluation and see if you qualify for free funding.
Related Blogs:
- GitOps Beyond Kubernetes: Liquid Metal and EKS Anywhere Manage Hybrid Infrastructure Effortlessly
- Weaveworks & AWS: Best Practices for Hybrid Cloud Kubernetes with EKS and Weave GitOps
- Weaveworks Brings GitOps to Amazon EKS Distro
“We turned to Weaveworks because of their extensive EKS and Kubernetes experience, including their close partnership with AWS. With Weaveworks’ proven track record of running Kubernetes in production, we wanted to bring new thinking into our organization to accelerate our learnings” - Nicola Le Poidevin, Head of Technology Wealth Management, National Australia Bank
Weave GitOps for Kubernetes On-Premise
Weaveworks is the leading provider of GitOps products and services, to the point that many in the cloud-native space immediately think of Weaveworks when they hear the term ‘GitOps’. Liquid Metal is available within the Weave GitOps ecosystem and is designed specifically for organizations that need support managing multiple clusters on VMs, or that operate clusters taking advantage of both hardware acceleration and virtualization. Even organizations adopting Azure Arc or EKS Anywhere can leverage it without depending on expensive on-premises virtualization.
Provision Kubernetes clusters declaratively on lightweight micro-VMs and bare metal with Weave GitOps. Watch this short video to see how.
Summary
Kubernetes allows on-premise data centers to benefit from cloud-native infrastructure and applications independently of public cloud providers and hosting. Organizations leveraging KVM, VMware vSphere, OpenStack, or even bare metal can reap the cloud-native benefits of integrating Kubernetes. Keeping it running day to day, though, is a hassle, and for organizations setting up their infrastructure from scratch it can be all the more challenging. To that end, it’s important they heed the best practices above. The key mindset is to manage on-prem (including edge computing systems) as part of a unified cloud-native platform.
Liquid Metal is available with enterprise-grade support as a Weave GitOps offering, part of an ongoing effort by Weaveworks to support bare metal Kubernetes. The Weave GitOps Platform offers core capabilities including installing EKS-D onto an organization’s existing infrastructure. Weave GitOps, and the GitOps principles it is built on, are key to managing Kubernetes on-premises.
Demo video: Manage multiple Kubernetes clusters across multiple cloud providers with Weave GitOps. Watch video >
Book a demo to see how Weave GitOps enables you to manage a fleet of clusters across hybrid and multiple cloud providers.