This is the second in a two-part series on 'Kubernetes and cloud native at the Edge.' If you haven't already, we recommend you read part 1 first. In that post, we cover the fundamentals of edge computing, what an edge computing infrastructure looks like end-to-end, and how you can manage it with a centralized infrastructure-as-a-service approach and GitOps.
In this post, we dig deeper into the top priorities Telcos have related to edge computing. We also discuss ways to manage edge infrastructure at scale using a centralized Kubernetes management plane.
What does cloud native look like for edge computing?
According to the CNCF (Cloud Native Computing Foundation), "Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach." These principles apply to telcos that build their edge computing infrastructure.
As the larger ecosystem moves from monoliths and VMs to microservices and containers, Telcos are making a similar transition.
Telcos' top priorities
Telcos have some quirks that make them different from other organizations running edge computing workloads. For one, they run a unique mix of infrastructures: for example, large on-premises OpenStack deployments, VMware VMs, and some infrastructure on public cloud platforms like AWS.
What Telcos need is the ability to manage multiple backends - public clouds, on-prem data centers, and edge infrastructure - using a single management plane.
Second, telcos have virtualized much of their hardware infrastructure using VNFs (virtual network functions), which run on VM infrastructure. Telcos are now looking at using CNFs (container network functions) instead of VNFs, because containers are more lightweight and ephemeral, and are better building blocks than VMs for resilient systems. However, some systems are not candidates for containerization due to compliance requirements or technical debt. That means telcos need the ability to run containers alongside VMs, so they can support both VNFs and CNFs side by side.
VNFs or CNFs - Why not both?
CNFs are an improvement over VNFs on almost every point, except when managing legacy applications, where VMs may be a necessity.
| VNFs | CNFs |
| --- | --- |
| Software packaged in a VM image | Software packaged in a container image |
| Run on a hypervisor | Run on a container engine, decoupled from hardware |
| Run fewer services per node | Higher service density per node |
| Long-running VMs | Ephemeral & immutable containers |
| Meant for static networks | Enable dynamic networks |
| Follow cloud principles | Follow cloud-native principles |
| Difficult to replicate | Easy to replicate |
| Longer configuration time | Quick creation & tear-down of containers |
| Difficult to scale | Faster scale-out |
| Some legacy applications require VNFs | Some legacy applications cannot be containerized |
To ease their transition to a cloud-native edge, telcos need the ability to build 5G infrastructure on CNFs while still being able to run VNFs alongside them.
KubeVirt is one example of an open source project that enables this. It is a 'virtual machine management add-on for Kubernetes' that makes it possible to manage VMs through the Kubernetes API.
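As a sketch of what this looks like in practice, the manifest below declares a VM as a Kubernetes resource using KubeVirt's `VirtualMachine` type. The names (`legacy-vnf`) and the disk image are hypothetical placeholders; the structure follows KubeVirt's `kubevirt.io/v1` API.

```yaml
# A minimal KubeVirt VirtualMachine: the VM is a Kubernetes resource,
# so it can be versioned, scheduled, and GitOps-reconciled like a pod.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-vnf              # hypothetical name for a legacy VNF workload
spec:
  running: true                 # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # example disk image
```

Because the VM is just another Kubernetes object, VNFs defined this way can sit in the same cluster, and the same Git repository, as the CNFs that will gradually replace them.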
An add-on approach to services
Custom Resource Definitions (CRDs) are a powerful feature that allows Kubernetes to be extended beyond its default capabilities to meet edge infrastructure needs.
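To illustrate, here is a minimal CRD that teaches the Kubernetes API a new resource type. The `EdgeSite` kind, the `example.com` group, and the `location` field are all hypothetical, chosen only to show the mechanism; any edge-specific concept could be modeled this way.

```yaml
# A minimal CustomResourceDefinition: after applying this, the cluster
# accepts `kubectl get edgesites` like any built-in resource.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: edgesites.example.com     # must be <plural>.<group>
spec:
  group: example.com              # hypothetical API group
  scope: Namespaced
  names:
    plural: edgesites
    singular: edgesite
    kind: EdgeSite
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                location:         # e.g. which regional site this cluster serves
                  type: string
```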
Edge solution providers can leverage Kubernetes CRDs to enable service meshes like Istio. A service mesh will need to be adapted for edge scenarios. For example, services communicating over a local network raise different security concerns, such as physical access to hardware and real-time communication patterns. They still require security practices carried over from the cloud, such as mTLS, which secures two-way communication between components using certificates.
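With Istio, for instance, mTLS can be enforced declaratively through a CRD-backed resource. The sketch below uses Istio's `PeerAuthentication` type; the `edge-services` namespace is a hypothetical example.

```yaml
# Require mTLS for all workloads in a namespace: sidecars reject
# any plaintext traffic from peers without a valid certificate.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: edge-services   # hypothetical namespace of edge workloads
spec:
  mtls:
    mode: STRICT
```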
Other necessary add-ons include a continuous delivery tool like Flux, Calico for policy-based networking, and monitoring and IAM tools. Taking an add-on approach to managing edge infrastructure is powerful because the same functionality can be reused across teams and projects in large organizations. It also leaves room for organizations to customize their Kubernetes management extensively for their specific edge scenarios.
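As a sketch of the GitOps wiring, here is how Flux (in its v2 style) can be pointed at a Git repository of add-on manifests so that every edge cluster reconciles the same definitions. The repository URL and paths are hypothetical.

```yaml
# Flux watches a Git repository of add-on manifests...
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: edge-addons
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/edge-addons   # hypothetical repo
  ref:
    branch: main
---
# ...and applies the manifests under a given path to the cluster,
# pruning anything removed from Git.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: edge-addons
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: edge-addons
  path: ./clusters/edge   # hypothetical per-layer path
  prune: true
```

Because the same repository can be reconciled by many clusters, this is what makes add-ons reusable across teams and sites rather than hand-configured per cluster.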
Weave Kubernetes Platform brings consistency across environments
Coming back to the architecture diagram from the previous post, Kubernetes clusters at each of the three layers manage all of the infrastructure and the services running on it.
The key benefit of Weave Kubernetes Platform (WKP) is that it delivers repeatable, flexible Kubernetes clusters across multiple environments.
WKP automates Kubernetes operations like upgrades and security patches across all layers. It delivers workspaces with built-in segmentation of responsibilities and RBAC. Essential for edge infrastructure, WKP allows applications and data to be ported across environments, from the cloud to regional or edge nodes.
Perhaps the most powerful feature of WKP is that it allows add-ons to Kubernetes to be deployed as services following the GitOps model. These add-ons can be components for storage, networking, monitoring, and more. This lets you operate on any platform that supports Kubernetes, from EKS to OpenStack.
Any organization managing multi-layer edge infrastructure needs a versatile platform that supports a wide range of requirements across the stack, with automation and team workflows built in. That's what WKP has to offer.
The edge is a new paradigm in computing, and it requires holding on to the learnings from the past decade of cloud computing while adapting those best practices for edge scenarios. Telcos need to run VMs and VNFs alongside containers and CNFs, and they need the ability to manage all parts of the edge and cloud from a unified management plane. Weave Kubernetes Platform and GitOps offer a robust management and automation platform with extensive support for existing technologies, and improved consistency of operations for Kubernetes at the edge.