Optimizing Kubernetes Resource Limits for Team Development
After you’ve decided to run your application on Kubernetes, what are the next steps? You may be wondering how to organize your team for development on Kubernetes. This is a large topic, with many opinions and strategies around organization and tooling. We won’t dive into every development philosophy out there, but in this post we will discuss five key areas to consider in order to take full advantage of what Kubernetes offers.
In this post, we’ll explore:
- Multi-team development and Kubernetes
- How to use Namespaces in Kubernetes
- What is RBAC
- When to use Network Policy
- Kubernetes resource limits and quotas and how to use them
Multiple development teams ≠ Multiple Kubernetes clusters
Kubernetes challenges the way we have traditionally thought about development environments. It has also changed the way we implement and share them with different teams. Working in teams no longer requires that everyone use the same language, or even that deployment pipelines be set up in a specific way. And with the way namespace isolation works in a Kubernetes cluster, you may not even need three separate clusters for Dev, QA, and Staging.
How many clusters do you need?
Most developers just starting out might be inclined to create one cluster for production, another for QA, and perhaps a third for staging.
Both the staging and test clusters need to reflect production, and then of course there are your development environments, which many think need their own clusters.
As you can see, this can get out of hand: if you are not careful, you can end up with a lot of clusters across your development environments. Multiple clusters not only add overhead and maintenance, they are also expensive in both money and time.
Sharing environments between teams
Kubernetes has a built-in feature that allows you to safely share a cluster among different environments and between projects run by separate teams. While you can split a single cluster into different environments and share it across your teams, we’re not necessarily advocating that you use only one Kubernetes cluster.
There are many cases where it makes sense to run more than one cluster. But it’s important to note that with Kubernetes, multiple clusters doesn’t have to be your default approach. You can start with one Kubernetes cluster, possibly two, and then share those clusters among different development teams all working in different environments.
If you decide to develop on multiple clusters, Weave Cloud has a very useful feature that allows you to promote your workloads between clusters.
To efficiently and securely develop your application on Kubernetes in a team, these are the features you need to be concerned about:
- Namespaces
- Role Based Access Control (RBAC)
- Network Policy
- The difference between Kubernetes Resource Requests and Limits
- Kubernetes resource limits and quotas
Namespaces
Namespaces are a basic concept in Kubernetes. They are simple to create and delete, and they are used to subdivide your cluster so that multiple teams can work on it. This not only saves you server costs, it can also increase quality by providing a convenient platform for integration testing and other smoke tests before deploying to production.
See the How and Why of Namespaces for more on creating namespaces.
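As a minimal sketch, a namespace is just a small YAML object; the name team-a-dev below is a hypothetical example, not something from this post:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a-dev   # hypothetical namespace for one team's dev environment

Applying this with kubectl apply -f (or simply running kubectl create namespace team-a-dev) gives the team its own logical slice of the cluster to deploy into.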
Namespaces provide a virtual separation within the cluster, where different services can run alongside each other on the same nodes. Services can still communicate across namespaces if you qualify the service name with the namespace it lives in (for example, backend.team-b); a bare service name resolves to the service local to the current namespace. See Understanding Namespaces and DNS.
Because the separation is only virtual, namespaces are a great way for different teams to work together on one application; see Sharing a Cluster with Namespaces.
For stronger security though, you will want to implement Role Based Access Control (RBAC) at the user level, and also Network Policies for service-to-service permissions.
Role Based Access Control (RBAC)
RBAC provides fine-grained control over who sees what in a cluster, adding a layer of security when running multiple environments across teams. With RBAC you can create generic roles like ‘developer’ or ‘admin’ and then assign permissions to them. This allows you to specify which resources a role can access, and whether that access is read-only or read-write.
With multiple development teams, roles can also be applied to an entire namespace so that you can define who is allowed to create, read or write to pods within a particular namespace.
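As a sketch of what such a role might look like, the Role and RoleBinding below grant a hypothetical user jane a ‘developer’ role that can manage pods in the team-a-dev namespace; the namespace, role name, and user are illustrative assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: team-a-dev            # hypothetical namespace
rules:
- apiGroups: [""]                  # "" is the core API group, where pods live
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: team-a-dev
subjects:
- kind: User
  name: jane                       # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

Because these are namespaced Role objects rather than cluster-wide ClusterRoles, the permissions stop at the namespace boundary.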
Network Policies
Network policies take security one step further by applying rules at the network level between applications. A namespace in Kubernetes provides only a virtual separation: if you define nothing but namespaces, applications can still communicate freely with one another.
But what if you need to restrict which services or even what namespaces can communicate with one another? To do this, you’ll need to implement a network policy.
Container firewalls - Implement security policies between your services
Network policy is implemented by the Kubernetes network (CNI) plugin. It can be enforced by Weave Net, if you are using it for pod networking, and monitored with Weave Cloud.
See Weave Net for NetworkPolicy for information on how to configure it.
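To make this concrete, here is a minimal NetworkPolicy sketch that allows only pods labelled app: frontend to reach pods labelled app: backend on port 8080; the namespace, labels, and port are hypothetical assumptions:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: team-a-dev            # hypothetical namespace
spec:
  podSelector:                     # the pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:                 # only traffic from these pods is allowed
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

Once a pod is selected by any policy, all ingress traffic not explicitly allowed is denied, so this single rule effectively firewalls the backend pods from everything else in the cluster.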
Kubernetes Resource Limits and Quotas
When sharing a cluster with multiple environments or multiple teams, another important consideration is resource allocation. A request is the amount of CPU or memory that the Kubernetes scheduler reserves for a container; a limit is the maximum the container is allowed to consume at runtime. By default, a pod runs with unbounded CPU and memory requests and limits. Each namespace can be assigned a resource quota, which caps how much of the cluster’s resources can be consumed across all pods in that namespace.
Start early with setting and testing Kubernetes resource limits and quotas. Without them, everything in your cluster may appear to run properly, but you could get a big surprise when one of your containers comes under sudden load and you haven’t set the correct resource quota for it.
The following example ResourceQuota comes from the Kubernetes documentation (admin/resource/quota-mem-cpu.yaml):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
With this ResourceQuota in place, every container created in the namespace must define its own memory and CPU requests and limits (see the pod sketch after this list), and:
- The total of all CPU requests for containers in the namespace must not exceed 1 CPU.
- The total of all memory requests must not exceed 1Gi.
- The total of all CPU limits must not exceed 2 CPU.
- The total of all memory limits must not exceed 2Gi.
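A minimal pod sketch that would satisfy the quota above might look like the following; the pod name, image, and exact values are assumptions chosen to fit inside the quota’s totals:

apiVersion: v1
kind: Pod
metadata:
  name: quota-demo-pod             # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx                   # hypothetical image
    resources:
      requests:
        cpu: "250m"                # the share the scheduler reserves
        memory: 256Mi
      limits:
        cpu: "500m"                # the hard ceiling enforced at runtime
        memory: 512Mi

If the combined requests or limits of all pods in the namespace would exceed the quota, the API server rejects the new pod at creation time.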
There are many more settings available than those shown in the snippet above. For example, you can cap the number of pods in a namespace, and even the number of NodePorts it may use, as in the sketch below.
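For instance, a quota sketch that caps a namespace at ten pods and two NodePorts (the quota name is hypothetical):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts              # hypothetical quota name
spec:
  hard:
    pods: "10"                     # at most ten pods in the namespace
    services.nodeports: "2"        # at most two NodePort services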
By layering RBAC, resource limits, and quotas on top of your namespaces, you end up with strong, secure, and isolated environments within a single cluster. This is a great way to share the underlying resources for development.
Final thoughts
There are many cases where it might not make sense to run more than one cluster, and running a separate cluster for every development environment shouldn’t be your default approach. One of the first things to check is whether you can meet your development and organizational needs by splitting a single cluster into namespaces.
For even more information on how to set up a production-ready Kubernetes environment, download our whitepaper, “Production Ready Kubernetes: What it Means, and how to Achieve it”.
Let us know how you organize your teams for Kubernetes development by reaching out to us on Twitter @weaveworks.