Who we are

Weaveworks is the creator of Weave Cloud and the Weave Kubernetes Platform, systems that simplify deployment, monitoring and management for containers and microservices. They extend and complement popular orchestrators, enabling developers and DevOps teams to achieve faster deployments, insightful monitoring, visualization and networking through GitOps. We’ve been operating Weave Cloud, with Kubernetes, Prometheus and Docker in production on Azure, for the past four years.

In addition, we are Microsoft Azure technical partners; major contributors to the Kubernetes open source project; originators of Cortex, Flux and GitOps; and key members of SIG Cluster Lifecycle.

Why Cloud Native Tools?

Cloud Native is open source cloud computing for applications—a trusted tool kit for modern architectures. Weaveworks is a founding member of the Cloud Native Computing Foundation (CNCF) and we believe the future is cloud native.

We use Docker containers and manage them in Kubernetes clusters for all of the same reasons that have led you to containers and Kubernetes. Containers are lightweight and portable, and they allow you to make fast, incremental changes, which ultimately delivers more value to your customers more quickly—even more so if you’re using a microservices-based architecture.

If you’re managing those containers with Kubernetes, you know that you can easily scale your application without having to worry about rebuilding the cluster.   

As developers we like that mostly hands-off approach. We’d rather spend our time coding without having to worry too much about the infrastructure on which it runs. The more we worry about infrastructure, the fewer features we ship, which is generally not a good thing in today’s competitive landscape.

With our experience, we can help you navigate the challenges of getting Kubernetes running on Azure. We’ll help you select the right Kubernetes distribution and installation method for your needs, and we’ll identify the issues to avoid after you’ve made your selection.

Why Run Kubernetes on Azure?

Azure is a premier platform for running cloud native apps, but setting up Kubernetes to run on it can be complex. Even so, there are many reasons to run Kubernetes on Azure. One of the most appealing is the vast number of services available to you. Other reasons include:

  • Complete control over your servers — An advantage of Azure is that it puts you in control of your instances, which is not always the case with other cloud providers.
  • Access to open source software without vendor lock-in — Kubernetes is completely open source, and so are many of the tools surrounding the project. This gives you a wide-open, well-supported community and many options.
  • Portability — Kubernetes runs anywhere: bare metal, public cloud, private cloud, and even across multiple public clouds at once if you wish.
  • Cloudbursting and private workload protection — With Kubernetes, you can run part of your cluster in the public cloud while keeping sensitive workloads in a private cloud on-premises, for example.

When you’re installing Kubernetes on Azure, these are the services that you will need to be familiar with. In each section, we describe what you need to know when you’re configuring a cluster.

But before we get into the details of each Azure service and how it applies to Kubernetes, it is useful to have some familiarity with the Kubernetes architecture and its parts. See the interactive tutorial "Kubernetes Basics" for a good overview.

Microsoft Azure Kubernetes Service (AKS)

If you don’t want to manage every aspect of Kubernetes yourself, you can use Azure Kubernetes Service (AKS). This managed service takes away most of the heavy lifting of manual configuration so that you can easily run Kubernetes on Azure.

To monitor your applications running in Kubernetes on AKS, you can use Weave Cloud, which integrates with and extends Azure Monitor to take advantage of Weaveworks’ hosted Prometheus monitoring. See “Monitoring Kubernetes with Prometheus” for more information on why Prometheus is the de facto monitoring solution for Kubernetes.
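As a minimal sketch of how little manual work AKS leaves you with, provisioning a small cluster comes down to a few Azure CLI calls. This assumes you have the Azure CLI installed and are logged in; the resource group, cluster name, region, node count and VM size below are placeholders you would adjust for your own environment.

```bash
# Create a resource group and a small AKS cluster (names, region and sizes are placeholders)
az group create --name my-k8s-rg --location eastus

az aks create \
  --resource-group my-k8s-rg \
  --name my-aks-cluster \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2 \
  --generate-ssh-keys

# Fetch credentials so that kubectl talks to the new cluster, then confirm the nodes are up
az aks get-credentials --resource-group my-k8s-rg --name my-aks-cluster
kubectl get nodes
```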


Azure Services and Kubernetes

Azure Virtual Networking

Azure Virtual Network gives you an isolated and highly secure environment to run your virtual machines and applications. Use your own private IP address ranges, define subnets, set access control policies, and more. With Virtual Network you can treat Azure the same as you would your own datacenter.
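As a rough sketch, creating a virtual network and a subnet to host cluster nodes takes a single CLI call. The names and address ranges below are placeholders, and the exact flag names can vary slightly between Azure CLI versions.

```bash
# Create a virtual network with one subnet for the Kubernetes nodes (names and CIDRs are placeholders)
az network vnet create \
  --resource-group my-k8s-rg \
  --name my-k8s-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name k8s-nodes \
  --subnet-prefix 10.0.1.0/24
```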

Virtual network (VNet) networking vs. Kubernetes networking

One concept that can be confusing is networking. There are a few different networks you need to be aware of when you’re running Kubernetes on Azure.

A virtual network has its own networking capabilities and connects cluster nodes, or VM instances, to each other on its own subnets. A Kubernetes cluster also has its own network—the pod network—which is separate from the VNet’s instance network.

Pods are collections of containers with shared storage and networking, plus a specification for how to run those containers. The containers in a pod are co-located and co-scheduled, and they run in a shared context. This means that containers within a pod share an application model and can share data between related services through local volumes. Each pod has its own IP address, and pods are scheduled onto nodes by the Kubernetes master.
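To make the shared-context idea concrete, here is a minimal, hypothetical pod spec: two containers in one pod share an `emptyDir` volume (and the same pod IP). The pod name, images and file paths are illustrative only.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-context-demo        # hypothetical example pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                 # local volume shared by both containers
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-writer
      image: busybox
      command: ["sh", "-c", "echo 'hello from the same pod' > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
EOF
```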

But pods on different VM instances need a way to communicate with each other. The virtual network itself provides support for setting routes through the kubenet plugin (deprecated as of 1.8). This is a very basic Linux networking plugin that gives your cluster near-native throughput, but it lacks more advanced features: it does not provide extensive networking across availability zones, it cannot enforce a security policy, and because it relies on multiple route tables in the virtual network, it does not scale to networking large clusters effectively. This is why many people turn to CNI plugins—an open standard for container networking.

CNI Plugins

Container Network Interface (CNI) plugins for Kubernetes provide many more features than the basic `kubenet` Linux networking plugin does. While you lose some performance with a CNI overlay network, you gain capabilities such as setting security policy rules between your services and connecting nodes and pods across high availability (HA) zones when your cluster grows beyond 50 nodes.

During installation you can specify which CNI plugin you want to use for the pod network. Several network plugins are available: Weave Net, Calico, Flannel, and others.

For a good discussion on CNI, why you need it and a comparison of the different CNI providers, see “Choosing a CNI Network Provider for Kubernetes”.

Out of these plugins, Weave Net is the best option for a number of reasons. See “Pod Networking in Kubernetes” for more information.
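For example, at the time of writing Weave Net can be added as the pod network with the single one-line installer that Weaveworks documents, run against a cluster whose nodes can reach the Weave Cloud endpoint:

```bash
# Install Weave Net as the CNI pod network using the documented one-line installer
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Verify that the weave-net pods come up on every node
kubectl get pods -n kube-system -l name=weave-net
```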

[Figure: Pod networking with Weave Net]

Azure Virtual Machines

Azure Virtual Machines provide scalable, secure instances within a virtual network. You can provision a virtual machine with any operating system by choosing one of the many images available in the Azure Marketplace, or you can create your own custom image for distribution and for your own use.

VM Nodes & Kubernetes

When creating instances for your cluster you’ll need to think about the size of the nodes. Even though Kubernetes automatically scales and adjusts to a growing app, the resources set for any VM nodes you initially create are static and they cannot be changed afterwards.
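To help with that capacity planning, you can list the VM sizes (vCPUs and memory) available in your target region before creating any nodes. The region below is a placeholder.

```bash
# List the VM sizes available in a region to plan node capacity (region is a placeholder)
az vm list-sizes --location eastus --output table
```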

Scaling Nodes

Scaling nodes is not supported through Kubernetes’ command-line interface, `kubectl`, on Azure. If you need to scale your nodes, you’ll have to use the autoscaling feature in the Azure dashboard, or you can manually create a set number of VM nodes to achieve the same result.
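If you are running AKS, one hedged example of scaling outside of `kubectl` is the Azure CLI; the resource group and cluster names below are placeholders, and on AKS Engine you would instead resize the underlying VM scale set or availability set.

```bash
# Scale the default node pool of an AKS cluster to five nodes
az aks scale --resource-group my-k8s-rg --name my-aks-cluster --node-count 5
```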

Azure DNS for Kubernetes Cluster Setup

Kubernetes clusters need DNS so that the worker nodes can talk to the master, as well as discover etcd and the rest of the cluster’s components.

When running Kubernetes in Azure, you can make use of Azure DNS or you can run an external DNS provider.

If you will be running multiple clusters, each cluster should have its own subdomain as well.
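As a sketch, creating a DNS zone for one cluster’s subdomain in Azure DNS looks like this; the resource group and domain names are placeholders, and delegation from the parent zone is set up separately.

```bash
# Create a DNS zone for one cluster's subdomain (domain name is a placeholder)
az network dns zone create --resource-group my-k8s-rg --name cluster1.example.com

# List the name servers to configure delegation in the parent zone
az network dns zone show --resource-group my-k8s-rg --name cluster1.example.com --query nameServers
```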

Load Balancers

There are basically two design patterns in Azure where you may need load balancers:

  • During the installation of Kubernetes on Azure
  • When exposing app services to the outside world — with more than one master running, you may need to provision an external load balancer so that your application has an externally accessible IP address.

For more information about finding and exposing an external IP for Kubernetes, see the section below, “How to Define Ingress in Microsoft Azure”, and for more in-depth information refer to “Publishing Services” in the Kubernetes documentation.
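As a hedged sketch, exposing an existing deployment through an Azure load balancer is just a matter of creating a Service of type LoadBalancer; the deployment name and ports below are placeholders.

```bash
# Expose an existing deployment through an external Azure load balancer
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

# Watch until the EXTERNAL-IP column is populated by Azure
kubectl get service my-app --watch
```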

How to Define Ingress in Microsoft Azure

Ingress is not a managed service in Azure, and ingress rules must be defined separately for any of your app’s services that need to be exposed to the outside world.

Setting up Ingress in Azure involves the following:

  • Transport Layer Security (TLS) certificates
  • host names
  • path endpoints (optional)
  • services and service ports

When running Kubernetes on Azure, there are a few different ways to handle ingress: you can deploy an ingress controller such as NGINX, or use Azure’s load balancer together with the ingress resources provided by the Kubernetes API (see the sketch below).
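Tying the pieces above together, a minimal, hypothetical Ingress manifest with TLS, a host name, a path, and a backing service might look like the following. It assumes an ingress controller (such as NGINX) is already running in the cluster; the host names, secret name, service name and ports are placeholders, and the apiVersion shown is the one used by recent Kubernetes releases.

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress                   # hypothetical names throughout
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls    # TLS certificate stored as a Kubernetes Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app             # the service and port to expose
                port:
                  number: 80
EOF
```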

Network Policy & Security

Related to ingress is the ability to specify a network security policy for every service available in a pod: whether it is accessible to the outside world or only to another service.

If you are using Weave Net as your CNI pod networking layer, then network policy support is available to you, and once configured, Weave Net will enforce that policy. Network policies are easily specified in your Kubernetes deployment manifests (YAML files).

For information on how to do that, see “What is a Network Policy Controller?” and “Configuring a Network Policy”.
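As a small sketch, a policy that only lets pods labelled `app: frontend` reach the backend pods on their service port could be written as follows; the label values, names and port are placeholders.

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend     # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: backend                 # the pods this policy protects
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend        # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
EOF
```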

Azure Storage

Azure Storage provides persistent block storage volumes for use with VM cloud instances. Each Azure Storage volume is automatically replicated within its availability zone to protect you from component failure, offering high availability and durability. Azure Storage volumes provide consistent and low-latency performance needed to run your workloads.

We recommend using Azure Storage with Kubernetes if you require durable backing storage for any services whose data is managed through Kubernetes persistent volumes.

Data Stores and Kubernetes

Sometimes pods need data that persists beyond the life of an individual container. For example, if some of your containers run MySQL databases (or any database, for that matter) and they crash, backing their persistent volumes with durable storage ensures that when the MySQL container comes back up, it can resume where it left off.

See the discussion on “Volumes” for information on how Kubernetes manages data stores, and “Persistent Volumes” for the available parameters.
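A hedged example of requesting Azure-backed persistent storage for such a database is a PersistentVolumeClaim against one of the Azure disk storage classes. The claim name and size are placeholders, and the `managed-premium` class name is an assumption based on the classes AKS ships by default; adjust it for your own cluster.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data                       # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium      # assumed Azure managed-disk class; adjust per cluster
  resources:
    requests:
      storage: 20Gi
EOF
```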

Basic Kubernetes with Azure Setup

[Figure: Basic Kubernetes on Azure setup]

Identity & Access Management (AD)

Kubernetes does not provide specific Active Directory (AD) roles and permissions. If you are storing and retrieving information from Azure Disk Storage or from Cosmos DB (that is, anything that calls the Azure API directly), then you will need to think about how to provide AD permissions for your nodes, pods, and containers. Normally you will want different AD roles for the masters and the nodes.

You could assign a global AD role to a Kubernetes node, so that all of the AD roles required by the containers and pods running on it are automatically inherited. But from a security standpoint, this is not optimal. Instead, you will want a more granular approach, one that can assign AD roles at the pod and container level and not just at the node level.

If you use AKS Engine to set up your cluster, two AD roles are created for you: one for the masters and one for the nodes. There are a few different approaches to managing the Azure security requirements:

  • Group authentication models for applications on Kubernetes, and then give groups of nodes certain AD permissions
  • Implement a proxy such as kube2AD
  • Use a solution like Vault to handle app-level secrets
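As one hedged example of keeping even the node-level approach narrow, you can grant the cluster’s service principal a tightly scoped built-in role on just the resource it needs, rather than a broad role across the whole subscription. The service principal ID, subscription ID, resource group and storage account below are placeholders.

```bash
# Grant the cluster's service principal read access to a single storage account only (IDs are placeholders)
az role assignment create \
  --assignee <service-principal-app-id> \
  --role "Storage Blob Data Reader" \
  --scope /subscriptions/<subscription-id>/resourceGroups/my-k8s-rg/providers/Microsoft.Storage/storageAccounts/myappdata
```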

Summary of Kubernetes on Azure

At a high level, these are the Azure services you need, why you need them, and what to consider for each when running Kubernetes on Microsoft Azure:

Azure Virtual Network

  • Use a CNI network for HA clusters with  > 50 nodes

Azure Virtual Machines

  • Incorporate capacity planning for node resources
  • Nodes can’t be scaled through `kubectl`; use the autoscaling feature in the Azure dashboard or manually provision a set number of nodes

Azure DNS

  • Kubernetes clusters require DNS to discover all of their components
  • Multiple clusters need subdomains

Ingress Rules & Load Balancer

  • Ingress rules definition & planning
  • Use Azure’s LB
  • Use NGINX or the ingress resources provided by the Kubernetes API

Identity & Access Management (AD)

  • The Kubernetes controller needs AD roles for the masters and the nodes
  • You may need finer-grained control if you are accessing the Azure API directly

New whitepaper: Implementing a Kubernetes Strategy

How do you implement a Kubernetes strategy across your organization without suppressing innovation and productivity?

Download to learn more