GitOps Beyond Kubernetes: Liquid Metal and EKS Anywhere Manage Hybrid Infrastructure Effortlessly
Liquid Metal supports EKS Anywhere and EKS-D natively across bare-metal and micro-VM compute pools. This blog post shows how you can manage a hybrid infrastructure effortlessly, the GitOps way.
As an AWS advanced technology partner, Weaveworks has been working behind the scenes to ensure that the experience of deploying EKS Anywhere using ‘eksctl’ is as effortless as possible, but also that GitOps is readily available from the point of deployment via Flux CD.
With the latest AWS announcement of EKS Anywhere (EKSA) now formally supporting bare-metal deployments, we are excited to share developments from our own Liquid Metal project. If you are not familiar with Liquid Metal, it provides options for deploying EKS Distro (EKS-D) in a number of extended scenarios and use-cases.
Overview of Liquid Metal
Liquid Metal provides an alternative virtualized compute solution suitable for any cloud, public or private, and optimized for cloud-native workloads, the first of which is Kubernetes clusters. Liquid Metal is an ideal solution where space and resources are constrained, security and compliance are critical, and speed and efficiency are key.
Security in Liquid Metal
Leveraging micro-VM runtimes such as AWS Firecracker, Liquid Metal minimizes compute footprint and attack surface while increasing the verification and attestation of its source components. As an example of a zero-trust runtime, Firecracker is secure by design, exposing the minimum necessary peripherals and resources and enforcing hard isolation by default.
From a supply chain perspective, micro-VMs also have significant advantages over traditional virtual machine image formats. Micro-VM images are compiled from discrete OCI-compliant images (surfaced as container layers) which can be:
- Packaged as an immutable set of artifacts.
- Signed at the point of build/release.
- Reproduced from cacheable layers.
Each point above contributes to extending the Trusted Application Delivery model to include infrastructure. The benefits of moving towards trusted delivery include:
- Continuous validation of components (kernel, rootfs).
- Reusable cached resources ensure minimal attack surface.
- Higher development and deployment velocity through automation.
- Shifting security and testing left into CI/CD pipelines.
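As a sketch of what continuous validation of signed artifacts could look like in a Flux-based pipeline, an `OCIRepository` source can be configured to verify cosign signatures before any artifact is reconciled. The registry path, artifact name, and secret name below are illustrative, not part of Liquid Metal itself:

```yaml
# Illustrative Flux source that pulls a signed OCI artifact and
# refuses to reconcile it unless its cosign signature verifies.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: microvm-kernel          # hypothetical artifact name
  namespace: flux-system
spec:
  interval: 10m
  url: oci://ghcr.io/example/microvm-kernel   # illustrative registry path
  ref:
    tag: 5.10.77
  verify:
    provider: cosign
    secretRef:
      name: cosign-public-keys  # secret holding the trusted public key
```

With verification enforced at the source, an unsigned or tampered kernel or rootfs layer simply never enters the cluster's desired state.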
The same benefits can also be applied to the EKS Anywhere supported Tinkerbell provider, which shares the same architectural vision and aligns with the principles of trusted delivery.
Liquid Metal and GitOps
Liquid Metal can manage the lifecycle of a hybrid infrastructure declaratively using Cluster API / CAPMVM and orchestrate workloads through GitOps with Flux CD already installed in EKS-D. The Liquid Metal team has already published an example environment which can be demoed on Equinix Metal, where the bare-metal servers and requisite networking/services are comparable to traditional on-premise operator environments. Try it out here.
Weaveworks can provide support for Liquid Metal through Weave GitOps Enterprise, which allows GitOps workflows to be applied to virtualized micro-VM clusters, bare-metal clusters, EKS clusters, VMware-based clusters (CAPV), and a number of other providers.
Weave GitOps enables teams to store templatable definitions of clusters in Git, including the requisite platform components (network CNI and storage CSI) layered with any combination of additional components (admission controllers, service meshes), custom applications, and operators. At runtime, configuration is merged to define the desired state, and bootstrapping ensures that all components are reconciled to deliver a fully operational cluster from a single commit.
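As an illustrative sketch of this layering (the repository name and paths are hypothetical), Flux Kustomizations can express the ordering between the platform layer and everything above it, so the CNI and CSI reconcile before add-ons:

```yaml
# Hypothetical layout: the cluster's desired state lives in Git and is
# reconciled layer by layer from a single commit.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform-cni-csi
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/demo/platform   # CNI + CSI manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: fleet-repo               # illustrative repo name
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: addons
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/demo/addons     # admission controllers, service mesh
  prune: true
  dependsOn:
    - name: platform-cni-csi       # reconcile only after the platform layer
  sourceRef:
    kind: GitRepository
    name: fleet-repo
```

The `dependsOn` ordering is what lets a single commit produce a fully operational cluster: each layer waits for the one beneath it to become ready.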
Mixed-Mode for Bare Metal and VMs
The engineering team put ‘pedal to the metal’ to ensure that supporting EKS Anywhere and EKS-D natively across bare-metal and micro-VM compute pools was not only possible, but also advantageous to key industry use cases.
The first of these was highlighted through our partnership with Deutsche Telekom, whose own ‘Das Schiff’ Kubernetes platform requires both virtualization and bare metal in order to support 5G network functions. A 5G function may require residency directly on a bare-metal node, yet include components, such as its own cluster control plane, that have no need for logical access to physical resources. This can mean having to diversify compute across different hardware types, or underutilizing/oversizing instances for tasks such as running a control plane.
Some additional scenarios benefit from combining physical devices with virtualized nodes:
- Presenting a GPU as part of an HPC machine learning workload for a single cluster tenant.
- Presenting a RAN card as part of a 5G RAN workload for a single-tenant cluster.
- Presenting an IPU/DPU as part of a network-accelerated storage environment (for example, NVMe over TCP).
When operating these solutions today, the current options are:
1. Utilize additional bare metal nodes for control planes, increasing footprint and often adding to underutilized compute across the platform.
2. Present physical hardware to virtualized nodes, lowering/bottlenecking performance and introducing unnecessary compute overhead for specific workload types.
Liquid Metal now provides a third option: hybridizing a cluster across both micro-VMs and bare metal, placing workloads on ‘physical tin’ while hosting the control plane on virtualized micro-VMs. Should there be additional workloads which do not require bare metal but need to be deployed within the same cluster, the ‘mixed-mode’ approach also allows virtualized worker nodes to be added to a pool of bare-metal compute. Kubernetes clusters can now comprise multiple runtimes (bare metal, VMs), multiple peripherals (GPUs, SmartNICs), and multiple architectures (ARM, Intel), giving platforms greater flexibility and choice over the configuration and placement of workloads.
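A minimal sketch of what mixed-mode might look like in Cluster API terms, assuming a CAPMVM machine template for the control plane and a Tinkerbell machine template for the bare-metal workers. The kinds, API versions, and names shown are illustrative and may differ between provider releases:

```yaml
# Illustrative mixed-mode cluster: control plane on micro-VMs (CAPMVM),
# worker pool on bare metal (Tinkerbell). All names are hypothetical.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: demo-control-plane
spec:
  replicas: 3
  version: v1.23.5
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
      kind: MicrovmMachineTemplate      # micro-VM backed control plane
      name: demo-microvm-template
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: demo-baremetal-workers
spec:
  clusterName: demo
  replicas: 2
  template:
    spec:
      clusterName: demo
      version: v1.23.5
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: demo-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: TinkerbellMachineTemplate  # bare-metal backed workers
        name: demo-tink-template
```

The key point is that the control plane and each worker pool reference different infrastructure templates, so a single cluster can mix micro-VM and bare-metal machine pools.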
Via the installation of the EKS Connector, EKS Anywhere enables the management of hybridized clusters from the AWS console, increasing visibility and insight into all the workloads running on top of a modern, hybridized, cloud-native compute fabric.
With CAPI and Liquid Metal Mixed-mode there are now a broad range of options available to provision and manage Kubernetes clusters for real-world use cases:
- Edge operators running 5G workloads.
- Enterprises leveraging bare-metal Kubernetes for application-specific performance (data science, low-latency networking).
- On-premises datacenter operators consolidating/repatriating workloads to lower cost and achieve better overall utilization.
If you are curious about Liquid Metal today: