Optimizing Cluster Resources for Kubernetes Team Development
Find out how to organize your team for Kubernetes development. In this post we discuss namespaces, RBAC, network policies, and resource constraints, and how to apply them to your development teams.
After you’ve made the decision to run your application on Kubernetes, what are the next steps? Maybe you’re wondering how to organize your team to do development on Kubernetes. This is, of course, a large topic that involves many opinions and strategies around organization and tools. We won’t dive into all of the development philosophies out there; instead, this post covers five key areas to consider in order to take full advantage of what Kubernetes offers.
In this post, we’ll explore:
- Multi-team development and Kubernetes
- How to use Namespaces
- What is RBAC
- When to use Network Policy
- Resource constraints and quotas and how to use them
Multiple development teams ≠ Multiple Kubernetes clusters
Kubernetes challenges the way we have traditionally thought about development environments. It has also changed the way we implement and share them with different teams. Working in teams no longer requires that everyone use the same language, or even that deployment pipelines be set up in a specific way. Also with Kubernetes and the way namespace isolation works in the cluster, you may not even need three separate clusters: Dev, QA and Staging.
How many clusters do you need?
Most teams just starting out might be inclined to create one cluster for production, another for QA, and perhaps a third for staging.
Both the staging and test clusters need to reflect production, and then of course there are your development environments, which many assume also need their own cluster.
As you can see, this can get out of hand. If you are not careful, you can end up with a lot of clusters in your development environments. Multiple clusters not only add more overhead and maintenance, they can also cost significant money and time.
Sharing environments between teams
Kubernetes has built-in features that let you safely share a single cluster across different environments and projects owned by separate teams. While you can split one cluster into multiple environments and share them across your teams, we’re not necessarily advocating that you use only one Kubernetes cluster.
There are many cases where it makes sense to run more than one cluster. But it’s important to note that with Kubernetes, multiple clusters doesn’t have to be your default approach. You can start with one Kubernetes cluster, possibly two, and then share those clusters among different development teams all working in different environments.
If you decide to develop on multiple clusters, Weave Cloud has a very useful feature that allows you to promote your workloads between clusters.
To efficiently and securely develop your application on Kubernetes as a team, these are the features you need to be concerned with:
- Namespaces
- Role Based Access Control (RBAC)
- Network Policy
- Resource constraints and quotas
Namespaces
Namespaces are a very basic concept in Kubernetes. They are simple to create and delete and are used to subdivide your cluster so that multiple teams can work on it. This not only saves you server costs, but it can also increase quality by providing a convenient platform for integration testing and other smoke tests before deploying to production.
See How and Why of Namespaces for more on creating namespaces.
This is what namespaces look like on your cluster.
The two namespaces shown here provide a virtual separation where different services can run alongside each other on the same node. Services can still communicate across namespaces if you qualify the service name with its namespace; an unqualified service name resolves to the service local to the current namespace. See Understanding Namespaces and DNS.
A namespace provides a virtual separation only, and as mentioned, it is a great way for different teams to work together on one application, see, Sharing a Cluster with Namespaces.
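As a minimal sketch, creating a namespace per team takes only a short manifest (the name `team-a` and its label are illustrative, not prescriptive):

```yaml
# Hypothetical namespace for one development team
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    team: team-a
```

Apply it with `kubectl apply -f namespace.yaml` and list namespaces with `kubectl get namespaces`. A service named `backend` in a namespace `team-b` would then be reachable from `team-a` at the fully qualified name `backend.team-b.svc.cluster.local`.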
For stronger security though, you will want to implement Role Based Access Control (RBAC) at the user level, and also Network Policies for service to service permissions.
Role Based Access Control (RBAC)
RBAC provides fine-grained control over who sees what in a cluster. It adds a layer of security for running multiple environments across teams. With RBAC you can create generic roles like ‘developer’ or ‘admin’ and then assign permissions to them. This allows you to specify which resources a role can access and whether it has read or write access to them.
With multiple development teams, roles can also be applied to an entire namespace so that you can define who is allowed to create, read or write to pods within a particular namespace.
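To illustrate, a namespaced Role and its RoleBinding might look like the following sketch. The role name `developer`, the namespace `team-a`, and the user `jane` are assumptions for the example:

```yaml
# Hypothetical Role granting read/write access to pods in the team-a namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# Hypothetical RoleBinding: the user gets these permissions only inside team-a
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: team-a
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is scoped to a namespace rather than the whole cluster, the same generic role can be stamped out per team without granting anyone cluster-wide access.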
Network Policy
Network policies take security one step further by applying rules at the application level. A namespace in Kubernetes is only a virtual separation; if you define nothing beyond namespaces, applications can still communicate freely with each other.
But what if you need to restrict which services or even what namespaces can communicate with one another? To do this, you’ll need to implement a network policy.
Container firewalls - Implement security policies between your services
Network policy is defined through the Kubernetes API but must be enforced by a CNI plugin. Weave Net can enforce it if you are using it for pod networking, and policies can also be monitored with Weave Cloud.
See Weave Net for NetworkPolicy for information on how to configure it.
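As a sketch of what such a policy looks like, the following restricts ingress so that only pods labeled `app: frontend` in the same namespace can reach pods labeled `app: backend` on port 8080. The namespace, labels, and port are illustrative assumptions:

```yaml
# Hypothetical policy: only frontend pods may talk to backend pods on 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: backend     # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # only traffic from these pods is allowed
    ports:
    - protocol: TCP
      port: 8080
```

Once any policy selects a pod, all traffic to that pod not explicitly allowed is denied, which is what makes network policies behave like per-service firewalls.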
Resource constraints and quotas
When sharing a cluster among multiple environments or multiple teams, another important consideration is resource allocation. Each namespace can be assigned a resource quota. By default, a pod runs with unbounded CPU and memory requests and limits. By specifying quotas, you can restrict how much of the cluster’s resources are consumed across all pods in a namespace.
Start early with setting and testing constraints and quotas. Without specifying these, everything in your cluster may still run properly, but you could get a big surprise if there is a sudden load on one of your containers and you haven’t set the correct resource quota for it.
The following is an example ResourceQuota:
Download example YAML file: admin/resource/quota-mem-cpu.yaml
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```
This ResourceQuota requires every container in the namespace to define its memory and CPU requests and limits, and it enforces the following:
- The CPU request total for all containers in the namespace must not exceed 1 CPU.
- The memory request total for all containers must not exceed 1Gi.
- The CPU limit total for all containers must not exceed 2 CPU.
- The memory limit total for all containers must not exceed 2Gi.
There are many other settings besides those shown in the snippet above. You can also cap the number of pods, or even the number of NodePorts, in a namespace, for example.
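As a sketch, such an object-count quota (the name and the specific limits are illustrative) could look like:

```yaml
# Hypothetical quota capping object counts in a namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-count-demo
spec:
  hard:
    pods: "10"              # at most 10 pods in the namespace
    services.nodeports: "2" # at most 2 NodePort services
    configmaps: "20"        # at most 20 ConfigMaps
```

Attempts to create objects beyond these counts are rejected at admission time, so one team cannot crowd out the others on a shared cluster.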
By ensuring that you have RBAC and Resource quotas specified on top of your namespaces, you can end up with strong, secure and isolated environments on top of a single cluster. This is a great way to share the underlying resources for doing development.
There are many cases where it might not make sense to run more than one cluster, and running a separate cluster for each of your development environments shouldn’t be the default approach. One of the first things you may want to check is whether you can meet your development and organizational needs by splitting a single cluster.
For even more information on how to set up a production ready Kubernetes environment, download our whitepaper, “Your Guide to a Production Ready Kubernetes Cluster”.
Let us know how you organize your teams for Kubernetes development by reaching out to us on Twitter @weaveworks.