Introduction to Kubernetes Pod Networking - Part 1

By Mark Ramm
November 26, 2018

In this three-part series, we take a deep dive into Kubernetes Pod networking options on Amazon and provide some guidance on the trade-offs involved in selecting a particular Kubernetes network technology for your cluster.


In Part 1, we start with a quick overview of the Kubernetes network design and philosophy, and in Part 2 we provide an overview of the AWS VPC framework and its design philosophy. With that background, Part 3 describes the options available to users of AWS and Kubernetes and the trade-offs between them, with guidance on how to choose based on your specific workloads and plans.

Part 1 - Introduction to Kubernetes Pod Networking

There are lots of ways to connect services together in a distributed container system, many of which are quite complicated to set up and maintain.

Traditional Docker networking allows all containers on a host to reach one another over private IP addresses on a virtual bridge. Port forwarding can also be configured on the host to provide external access to containers on an as-needed basis.
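As a small sketch of this model, assuming Docker Compose (which is my choice of illustration here, and which uses a user-defined bridge rather than the default docker0 bridge), the two containers below can reach each other over the shared bridge network, while only the web container is published to the host via port forwarding:

version: "3.8"
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"     # forward host port 8080 to the container's port 80
  cache:
    image: redis:7    # no published ports; reachable from "web" only over the bridge network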

The native Docker model is both serviceable and flexible, but it places a significant burden on those deploying applications to containers. Developers must either maintain a potentially complex set of rules to keep containers that require specific ports on separate hosts, which quickly becomes unmanageable at scale, or update those applications to use dynamic ports, which means reconfiguring the applications to take ports as flags and implementing some sort of service discovery system.

The Kubernetes authors, on the other hand, made the networking implementation a bit more complex in order to simplify the process of converting even complex legacy applications to run in a container environment.

Kubernetes Container Network Interface (CNI)

The CoreOS team created a pluggable networking interface for containers called the Container Network Interface (CNI), which was adopted by Apache Mesos, Cloud Foundry, and rkt. In 2016 the Kubernetes team adopted CNI as well, and it now forms the basis of the Kubernetes networking model.
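To give a concrete sense of what a CNI plugin looks like from the node's point of view, here is a rough sketch of a network configuration file of the kind kubelet reads from /etc/cni/net.d/, using the reference bridge plugin; the network name, bridge name, and subnet below are purely illustrative:

{
  "cniVersion": "0.3.1",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}

The "type" field names the plugin binary to invoke and the "ipam" section tells it how to assign Pod IP addresses; switching to a different plugin (Weave Net, Calico, the AWS VPC CNI, and so on) is a matter of dropping in a different configuration and binary.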

Fundamental to Kubernetes is the concept of a group of related containers called a Pod. The containers in a Pod share the same lifecycle, are scheduled together on the same node, and can talk to one another in the same ways they could if they were installed on a single VM.

[Figure: CNI plugin diagram (cni-plugin.png)]

In terms of networking, this means that:

  • All containers in a Pod share a single IP address (the Pod's IP)
  • A Pod sees the same IP address for itself that other Pods (and nodes) in the network use to reach it
  • Unless network policies (which we will get to in a moment) forbid such communication, all Pods and nodes can communicate with one another freely, without Network Address Translation (NAT)

The result is a flat network space in which networking between containers in a Pod occurs over localhost, and all Pods and nodes can communicate with one another without any special work.
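As a minimal sketch of what this means in practice (the names and images here are illustrative), the Pod below runs two containers that share a single network namespace: the web container reaches Redis at localhost:6379, and both are reachable from the rest of the cluster on the one Pod IP:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache
spec:
  containers:
    - name: web
      image: nginx:1.25   # reaches the cache over localhost:6379
      ports:
        - containerPort: 80
    - name: cache
      image: redis:7      # shares the Pod's IP address and network namespace with "web"
      ports:
        - containerPort: 6379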

What is a Network Policy?

For security reasons, a fully accessible flat network is often not desirable. Kubernetes offers a solution to this problem with network policies, which can be used to restrict network access between Pods, or between collections of Pods, in a Kubernetes namespace.

Kubernetes clusters can be built with a wide variety of CNI plugins, not all of which implement network policies. In order to limit the impact of an intrusion, it is generally wise to implement at least some network policy rules to prevent cross-service escalation. One of the most common ways to do this is to limit teams to a particular Kubernetes namespace and use policies to isolate network traffic along namespace boundaries.
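As a hedged sketch of that namespace-isolation pattern (the namespace and policy names are hypothetical, and the policy only takes effect if the cluster's CNI plugin enforces network policies), the following policy selects every Pod in the team-a namespace and allows ingress only from Pods in that same namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: team-a            # hypothetical per-team namespace
spec:
  podSelector: {}              # applies to every Pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}      # only Pods in this same namespace may connect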

Depending on how many and what kind of workloads are deployed to the cluster, the security requirements of those workloads, their network performance requirements, and the underlying infrastructure available, you may choose to forgo the ability to lock down the network using network policies in favor of raw performance.

Beyond the basics of Pod connectivity, Kubernetes offers additional constructs that help create flexible distributed systems, but are generally independent of your choice of Pod networking technologies. I will cover these in detail in a future series on Kubernetes service architecture and Pod design, but will give a brief overview here for the sake of providing a more complete picture of the Kubernetes networking space.

Given that distributed, horizontally scalable services are always changing, it's helpful to have a single endpoint through which to talk to any member of that service, rather than needing to constantly update clients as new service members are added or old ones removed. To handle this, Kubernetes provides Services. A Service provides a single virtual IP that load balances traffic across a set of Pods, allowing clients to address one stable endpoint.
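As a brief sketch (the name, label, and ports are illustrative), the Service below gives clients one stable virtual IP and DNS name that load balances across whichever Pods currently carry the app: backend label:

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend       # traffic is routed to whichever Pods currently match this label
  ports:
    - protocol: TCP
      port: 80         # the stable port exposed on the Service's virtual IP
      targetPort: 8080 # the port the backend containers actually listen on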

In Part 2, we will cover the Amazon networking model and its design.

Links to other posts in the series:

AWS Networking Overview - Part 2

AWS and Kubernetes Networking Options and Trade-offs - Part 3

Need Help?

We can help accelerate your Kubernetes journey with our subscription service that supports installing production-grade Kubernetes on-premise, on AWS, and on GCP. For the past 3 years, Kubernetes has been powering Weave Cloud, our operations-as-a-service offering. We’re taking our knowledge and helping teams embrace the benefits of cloud native tooling.

Our approach is based on developer-centric tooling and uses GitOps workflows to help you install, set up, operate, and upgrade Kubernetes. Contact us for more details.

