Securing Microservices in Kubernetes

By Adam Harrison
November 10, 2016

Introduction

Starting with version 1.3, Kubernetes includes beta support for network policies that allow you to exert control over connections to your containers. For those unfamiliar with Kubernetes network policies, they have the following characteristics:

  • Control over ingress only
  • Connection oriented, stateful
  • Disabled by default; enabled on a per-namespace basis
  • Label selectors used to select source and destination pods
  • Filter by protocol (TCP/UDP) and port

If you’re used to working with standard firewalls, there are also a few notable simplifying omissions:

  • No ability to filter by CIDR
  • No ability to specify deny rules

Finally, it is important to understand that although the Kubernetes API server understands network policy objects, it does not ship with a controller that implements them. Fortunately, Weave Net 1.8.0 includes just such an implementation, which can be installed on your cluster with a single command.
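
At the time of writing, that single command is published in the Weave Net documentation as the following – treat the exact URL as an assumption and check the current docs before running it:

kubectl apply -f https://git.io/weave-kube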

If you would like to experiment with network policies but don’t have a cluster already, you can create one easily using the kubeadm tool – be sure to include the Weave pod network addon!
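
A rough sketch of that workflow, assuming a fresh set of machines (kubeadm flags and token handling vary between releases, so treat this as an outline and follow the kubeadm getting-started guide for the exact steps):

# On the master:
kubeadm init

# On each worker, using the token printed by kubeadm init:
kubeadm join --token <token> <master-ip>

# Back on the master, install the Weave pod network addon (the same command as above):
kubectl apply -f https://git.io/weave-kube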

Policy Design Considerations

There are many reasons why you may choose to employ Kubernetes network policies:

  • Ensure containers assigned to different environments (e.g. dev/staging/prod) cannot interfere with one another
  • Isolate multi-tenant deployments
  • Regulatory compliance
  • Segregate teams
  • Enforce best practices

In this post we will explore the final category, enforcing microservices best practices at the network level by ensuring that:

  • Backing data stores and message queues are accessible only by the microservice that owns them
  • Microservices are accessible only to the front end and to the other microservices
  • Only the front-end services are accessible from the internet (or, more usually, from an external load balancer)
  • The monitoring system can gather metrics from all services

Implementation

This section uses the Weaveworks Sock-shop microservices demo as a real-world example. You can see the full set of policy specifications for the sock-shop here.

Enabling Ingress Isolation

First of all, we must enable ingress isolation for the sock-shop namespace. As network policy is still a beta feature, it is enabled by adding an annotation:

kubectl annotate ns sock-shop net.beta.kubernetes.io/network-policy='{"ingress":{"isolation":"DefaultDeny"}}'

Alternatively, in YAML form:

kind: Namespace
apiVersion: v1
metadata:
  name: sock-shop
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }

At this point, no pods in the sock-shop namespace can receive connections – everything is blocked. We must now add policies that selectively allow legitimate traffic.
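
You can verify that the default-deny is in force with a quick connectivity test. The pod names and IP below are placeholders, and the test assumes the container image ships wget (or a similar client):

# Find a pod IP to test against:
kubectl --namespace=sock-shop get pods -o wide

# From any other pod in the namespace, attempt a connection – with DefaultDeny
# set and no policies defined yet, this should time out:
kubectl --namespace=sock-shop exec -ti <some-pod> -- wget -qO- -T 5 http://<front-end-pod-ip>:8079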

If you wish to later remove the annotation you can do so with the following command (taking careful note of the trailing minus sign):

kubectl annotate ns sock-shop net.beta.kubernetes.io/network-policy-

NB: do not attempt to kubectl delete -f the above YAML – you’ll delete the sock-shop namespace entirely!

Allowing External Access

As mentioned in the introduction, Kubernetes network policies identify traffic sources and destinations by label selector only – there is no provision (as yet) to specify IP addresses or masks. How then does one allow access into the cluster from external sources, once ingress isolation is enabled? The solution is straightforward, if slightly obscure; the from label selector can be omitted entirely, in which case traffic is not filtered by source at all:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: front-end-access
  namespace: sock-shop
spec:
  podSelector:
    matchLabels:
      name: front-end
  ingress:
    - ports:
        - protocol: TCP
          port: 8079

This policy applies to any pods labelled name: front-end in the sock-shop namespace, and it allows TCP connections to port 8079. There are a few points of particular interest here, illustrated by the fragment that follows the list:

  • Matching on name: front-end means that this policy will apply automatically to all replicas of the front end (assuming we have arranged for them to be so labelled)
  • Port filtering is applied after traffic has passed through the kube-proxy DNAT – consequently we must specify the pod’s container port, not the service or node port!
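
To make both points concrete, here is an illustrative fragment of the kind of deployment and service this policy assumes – the image, replica count and node port are placeholders rather than the actual sock-shop manifests:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: front-end
  namespace: sock-shop
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: front-end            # matched by the policy's podSelector
    spec:
      containers:
        - name: front-end
          image: example/front-end # placeholder image
          ports:
            - containerPort: 8079  # the port the policy must name
---
apiVersion: v1
kind: Service
metadata:
  name: front-end
  namespace: sock-shop
spec:
  type: NodePort
  selector:
    name: front-end
  ports:
    - port: 80           # service port – not what the policy filters on
      targetPort: 8079   # traffic reaches the pod on this port after DNAT
      nodePort: 30001    # placeholder node port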

Now that our external load balancer can access our front end, we must add policies that allow access to the internal microservices and their underpinning databases and message queues.

Allowing Internal Access

The full Sock-shop demo includes multiple microservices – we will consider only one, the catalogue, as an example. You can see the full set of policies for the sock-shop here. First of all we need a policy that allows the front end to access the catalogue REST API:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: catalogue-access
  namespace: sock-shop
spec:
  podSelector:
    matchLabels:
      name: catalogue
  ingress:
    - from:
        - podSelector:
            matchLabels:
              name: front-end
      ports:
        - protocol: TCP 
          port: 80

With that in place, we need a policy that grants exclusive catalogue database access to the catalogue microservice:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: catalogue-db-access
  namespace: sock-shop
spec:
  podSelector:
    matchLabels:
      name: catalogue-db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              name: catalogue
      ports:
        - protocol: TCP 
          port: 3306
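
A quick way to convince yourself the policy behaves as intended is to probe port 3306 from two different pods. The pod names below are placeholders, the catalogue-db hostname assumes a service of that name exists, and the test assumes the images include a netcat binary:

# From the catalogue pod – should connect successfully:
kubectl --namespace=sock-shop exec -ti <catalogue-pod> -- nc -z -w 5 catalogue-db 3306

# From the front-end pod – should time out, since no policy permits it:
kubectl --namespace=sock-shop exec -ti <front-end-pod> -- nc -z -w 5 catalogue-db 3306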

That’s it for the catalogue microservice – equivalent policies for the other services can be found in the repo linked above. Finally, we need a cross-cutting policy that allows our local Prometheus agent to scrape metrics for upload to Weave Cloud:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: cortex-access
  namespace: sock-shop
spec:
  podSelector:
    matchLabels: {}    # empty selector – the policy applies to every pod in the namespace
  ingress:
    - from:
        - podSelector:
            matchLabels:
              name: cortex

Testing & Debugging

When drafting policies for an existing application, it is common for initial revisions to be incomplete or wrong – they need to be tested and debugged like anything else. It is very important that you exercise all aspects of your application, including:

  • Handling of exceptional conditions
  • Scheduled/batch jobs that run infrequently
  • Access provisions for backups, maintenance and debugging

since any of these may need to make or receive connections to operate correctly.

Future versions of the Weave Net policy controller will feature improved logging of blocked connections for diagnostic purposes – see weaveworks/weave#2629 for more details. In the meantime, you can use the metrics endpoints covered in the next section to get a summary of what is being blocked.
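
For example, the blocked-connection counter described in the next section can be read straight off the weave-npc metrics endpoint. The address is a placeholder here – the actual host and port are given in the Weave Net Prometheus documentation:

curl -s http://<weave-npc-metrics-address>/metrics | grep weavenpc_blocked_connections_total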

Monitoring

Weave Net exposes metrics endpoints which can be used to monitor the policy controller – see the documentation on configuring Prometheus for more information. Of particular interest is the weavenpc_blocked_connections_total counter – we recommend configuring an alarm to draw your attention if this value is increasing rapidly, as that is a sign of misconfiguration or potentially even an attack on your infrastructure. The counter has protocol and dport labels if you wish to be more selective.
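
As a sketch of such an alarm, using the rule syntax of the Prometheus releases current at the time of writing (the alert name, threshold and duration are illustrative and should be tuned to your environment):

ALERT WeaveNPCBlockedConnections
  IF rate(weavenpc_blocked_connections_total[5m]) > 1
  FOR 10m
  LABELS { severity = "warning" }
  ANNOTATIONS {
    summary = "Network policy is blocking connections",
    description = "{{ $labels.instance }} is blocking {{ $value }} connections per second"
  }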
