Kubernetes FAQ - How can I route traffic for Kubernetes on bare metal?

By Weaveworks
July 31, 2018

Bare metal, on-premise installations of Kubernetes can be challenging. A frequently asked question is how to route traffic in Kubernetes. This post discusses the different traffic routing methods and the pros and cons of each.


We recently conducted an unscientific poll of three hosts of the Kubernetes community call: Jorge Castro (@castrojo), Ilya Dmitrichenko (@errordeveloper) and Bob Killen (@mrbobbytables). We asked them which Kubernetes topics come up again and again, and the result is a list of the most frequently asked questions about running Kubernetes in production. Stay tuned for a deep dive into the answers, with the goal of giving you a good jumping-off point for your own research.

“A Kubernetes on bare metal question that comes up quite frequently is less about how to install it and more about configuration that is unique to a bare metal or on-premise installation of Kubernetes,” says Bob Killen. There are two pain points for most people wanting to install a cluster on bare metal:

  1. Routing traffic into the cluster, for example configuring a load balancer or other services through which outside clients can reach your applications.

  2. Configuring storage for the workloads you need to run on bare metal.


To start this off, let’s look into the best way to configure traffic routing for Kubernetes on bare metal. (We’ll address the storage question in a future blog.)

Routing traffic to Kubernetes on bare metal

If you are using one of the public clouds like GCP or AWS, routing traffic to your Kubernetes cluster is relatively straightforward: you can easily add one of their managed load balancing services. And if you are using a managed Kubernetes service like GKE, exposing a service is even easier still, since an ingress controller is built in.

One of the main problems is that most standard, out-of-the-box load balancer integrations can only be used with a public cloud provider and are not supported for on-premise installations.

There has been some movement toward better support for on-premise installs with recent projects like MetalLB, an on-premise load balancer. Other available options for on-premise traffic routing include NGINX, which you can configure manually for TCP or UDP round-robin load balancing, or plain round-robin DNS.
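
To make that concrete, here is a minimal sketch of what a manual NGINX TCP round-robin setup in front of a cluster might look like, assuming NGINX is built with the stream module. The node IPs and NodePort below are placeholders, not values from this post:

    # Minimal, hypothetical NGINX config for TCP round-robin load balancing
    # across three cluster nodes (requires the stream module).
    cat <<'EOF' | sudo tee /etc/nginx/nginx.conf >/dev/null
    events {}
    stream {
        upstream kubernetes_nodes {
            server 10.0.0.11:30080;   # placeholder node IP and NodePort
            server 10.0.0.12:30080;
            server 10.0.0.13:30080;
        }
        server {
            listen 80;                # round-robins incoming TCP connections
            proxy_pass kubernetes_nodes;
        }
    }
    EOF
    sudo nginx -s reload

Round-robin is NGINX’s default balancing method for an upstream group, so no extra directives are needed for this simple case.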

Methods to route traffic - pros and cons

There are several ways to access your services within a cluster. Below we list the recommended methods along with the pros and cons of each:

ClusterIP

ClusterIP provides an internal IP to individual services running on the cluster. On its own this IP cannot be used to access a service from outside the cluster; however, you can use kubectl proxy to start a proxy server and reach the service through it. This method should not be used in production.
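
For example, a quick way to reach a ClusterIP service while debugging is through kubectl proxy; the service name and namespace below are hypothetical:

    # Start a local proxy to the Kubernetes API server
    kubectl proxy --port=8001 &

    # Reach a (hypothetical) service "my-service" on port 80 in the
    # "default" namespace via the API server's built-in service proxy
    curl http://localhost:8001/api/v1/namespaces/default/services/my-service:80/proxy/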

Pros

Good for quick debugging. In fact, the only time you should use this method is when you’re accessing an internal dashboard (such as the Kubernetes Dashboard) or debugging a service from your laptop.

Cons

Since this method requires you to run kubectl as an authenticated user, you shouldn’t use it in production: exposing the proxy would effectively expose the Kubernetes API to the internet, and could therefore risk the security of your entire cluster.

NodePort

This is the most rudimentary way to open up traffic to a service from the outside. It involves opening a specific port on every node; any traffic sent to that port is then forwarded to the service. If you don’t specify a particular port for the NodePort in the YAML file, Kubernetes picks a random port from the allowed range. As a general rule, it’s best to let Kubernetes pick the port itself.
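
As a sketch, this is what a NodePort service might look like; the name, selector and ports are placeholders:

    # Create a (hypothetical) NodePort service for pods labelled app=my-app.
    # With no explicit nodePort field, Kubernetes allocates a free port in
    # the 30000-32767 range.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: NodePort
      selector:
        app: my-app
      ports:
        - port: 80          # port the service exposes inside the cluster
          targetPort: 8080  # port the pods actually listen on
    EOF

    # See which NodePort was allocated
    kubectl get service my-app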

Pros

Provides quick access to your service and is suitable for running a demo app or a service that is not in production. 

Cons

There are several downsides to this method: you can only expose one service per port, only ports between 30000 and 32767 can be used, and if the IP address of a node changes, your service becomes unreachable at its old address.

Ingress

An Ingress is a collection of rules that allow inbound connections to reach cluster services, acting much like a router for incoming traffic. Ingress is HTTP(S) only, but it can be configured to give services externally reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and more. Ingress controllers are available out of the box for some load balancers, such as NGINX and ALB, but the cloud-specific ones like ALB only work with their public cloud provider. On-premise, your options are to run a controller such as NGINX yourself, or to write your own controller that works with the load balancer of your choice.
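
A minimal sketch of an Ingress resource is shown below. The hostname and backend service are hypothetical, an ingress controller must already be running in the cluster for the rules to take effect, and the apiVersion shown is the current networking.k8s.io/v1 (older clusters used extensions/v1beta1):

    # Route HTTP traffic for a (hypothetical) hostname to the my-app service
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      rules:
        - host: my-app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app   # the Service the traffic is routed to
                    port:
                      number: 80
    EOF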

Pros

Flexible architecture that can be completely customized to suit your needs.

Cons

Supports HTTP(S) rules on the standard ports 80 and 443 only. If your load balancer isn’t already supported, you will need to build your own ingress controller for your on-premise load balancing needs, which means a lot of extra work to write and maintain.

Load Balancer

A load balancer can handle multiple requests and multiple addresses, and can route and manage traffic into the cluster. This is the best way to handle traffic to a cluster, but most commercial load balancers can only be used with public cloud providers, which leaves those who want to install on-premise short of options.

But with the recently released MetalLB it’s now possible to deploy a load balancer on-premise, or, by following the instructions from NGINX, you can set up a TCP or UDP round-robin method of load balancing.
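
As a sketch, exposing a service through a load balancer on bare metal looks the same as it does in the cloud; the difference is that something like MetalLB must be installed to hand out the external IP. The names below are placeholders:

    # Expose a (hypothetical) app through an external load balancer. On bare
    # metal this requires MetalLB (or similar) to be installed; without it,
    # the external IP stays <pending> indefinitely.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-lb
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
    EOF

    # Check the external IP the load balancer was given
    kubectl get service my-app-lb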

[Figure: a TCP load balancer distributing traffic across Kubernetes nodes. From “Kubernetes TCP load balancer service on premise (non-cloud)”]

Pros

Scales with your site by efficiently redistributing traffic as load increases, and can handle multiple addresses and requests.

Cons

Out-of-the-box solutions for on-premise load balancing are still in alpha, and therefore often require a hand-forged and/or more complex setup.

Other methods

Not highlighted here are HostNetwork and HostPort, both of which can also be used to access services; these bindings are best left to Kubernetes to manage. If you need to access a service for debugging purposes, the Kubernetes docs suggest you use NodePort instead:

“Don’t specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don’t specify the hostIP and protocol explicitly, Kubernetes uses 0.0.0.0 as the default hostIP and TCP as the default protocol.”
(from: Kubernetes documentation)
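
For reference, a hostPort binding looks like the hypothetical pod spec below; per the advice above, prefer a NodePort service unless this is absolutely necessary:

    # A (hypothetical) pod bound directly to port 8080 on its node.
    # Each <hostIP, hostPort, protocol> combination must be unique, so at
    # most one such pod can run per node.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostport-demo
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
              hostPort: 8080   # exposed on the node's own IP at port 8080
    EOF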


Wrapping up

Given the dynamic nature of Kubernetes, and also for security reasons, it is generally best to use an ingress controller together with a load balancer as the standard way to access your services. For bare metal, you may have to write your own ingress controller, depending on your load balancer, or you can check out MetalLB.

Need help?

For the past three years, Kubernetes has been powering Weave Cloud, our operations-as-a-service offering. We’re happy to share our knowledge and help teams embrace the benefits of on-premise installations of Kubernetes.

Contact us for more details on our Kubernetes support packages.


