Writing Policies for Pods, Network Objects, and OPA
Learn how to write network policies, the rules that guide them, and how they are structured and evaluated. Specify traffic direction in a policy and write policies using Rego.
Introduction
A policy consists of a set of rules; when the policy is queried, a decision is made by matching against those rules. A network policy controls the traffic coming into and going out of pods: it is a construct that manages traffic at the pod level, where you essentially declare what may reach a pod and what may leave it. Without Kubernetes network policies, our cluster is like a house whose roof is on fire. Work on the network policy API started around late 2015, bringing this networking concept into Kubernetes.
When writing network policies, selectors are the key mechanism for identifying and grouping sets of objects, the basic units of a Kubernetes cluster. Let us go through the basic selectors and the factors to consider when writing a policy.
Label Selector
When writing a network policy, we need to specify which pods the policy applies to. This is done with a label selector, so in order to write network policies we need to understand labels, an important part of Kubernetes and how it operates.
Labels attach metadata to pods, and a label selector selects a set of pods by matching on those labels.
For instance, one selector can match any pod where the label app equals shopping, while another selects all pods whose tier label equals database (db).
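As a rough sketch, the two selectors described above could look like this inside a policy spec (the label values are illustrative):

# selects every pod labeled app=shopping
podSelector:
  matchLabels:
    app: shopping

# selects every pod labeled tier=db
podSelector:
  matchLabels:
    tier: db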
Traffic Direction
Another important factor to consider is the direction of traffic the policy applies to: is it controlling incoming (ingress) traffic or outgoing (egress) traffic?
There are three ways to specify where ingress and egress traffic for the pods may come from or go to: podSelector, namespaceSelector, and ipBlock.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ....
  namespace: ....
spec:
  podSelector: ...
  ingress:
  - ...
  - ...
  egress:
  - ...
  - ...
Above is a typical network policy object with metadata (a name and a namespace) and a spec that specifies a pod selector and the direction of traffic allowed. We can specify ingress, egress, or both, and each can contain multiple rules. If no network policy is in place to control traffic, traffic is not restricted and everything is allowed, which is not good practice. Once network policies exist, traffic is denied unless one of those policies allows that particular source or destination. This means we write rules that allow traffic; we do not explicitly write rules that deny it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  ingress: []
This is a good example of a policy object that denies all incoming traffic to pods. The empty podSelector selects all pods, and the empty ingress array indicates that nothing is whitelisted. If we put this policy in place, all incoming traffic, whether from outside the cluster or from pods within it, will be blocked from reaching the selected pods.
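Egress can be locked down in the same way. As a sketch (not part of the original example set), a policy that denies all outgoing traffic from every pod in the namespace could look like this; the policyTypes field, not discussed above, declares which directions the policy applies to:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress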
Also, traffic is allowed if there is at least one policy allowing it: even if we deploy a number of policies that deny traffic, a single policy that allows it will let it through.
1- Blacklisting and Whitelisting Traffic
This makes it possible to deny unwanted network traffic, and conversely to allow wanted traffic, from a specific pod or group of pods.
For example, say we have three pods: a database pod, a frontend pod, and a backend pod, and we only want the backend to access our database. By whitelisting the backend pods' traffic, the database receives traffic only from the backend and none from the frontend.
kind: NetworkPolicy
spec:
  podSelector:
    matchLabels:
      app: thrift
      tier: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: thrift
          tier: backend
Here we have a network policy that allows traffic from the backend to the database. The podSelector first matches all pods in the database tier, so the rule is enforced on those pods. In the ingress section there is a single rule allowing traffic only from the backend pods. If you deploy this, the backend can talk to the database but the frontend cannot. In short, we are blacklisting all traffic except that coming from the backend pods.
2- Restricting Traffic by Port Numbers
Network policy also allows us to restrict traffic to a particular port of a pod using its port number.
- from:
  - podSelector:
      matchLabels:
        app: thrift
        tier: backend
  ports:
  - port: 3308
    protocol: TCP
Here we restrict traffic by port number: the rule explicitly allows TCP traffic from the backend pods only to port 3308 of the database pods. If we don't specify any ports, all ports are open by default, so limiting them improves security by reducing unwanted access.
How is the Policy Evaluated?
Network policy rules are additive: the effective rule set is the union of all policies. If any one policy allows the traffic, the traffic will go through, and since the order of evaluation does not affect the outcome, policies cannot conflict.
ingress:
- from:
  - podSelector:
      matchLabels:
        app: thrift
        tier: backend
  - podSelector:
      matchLabels:
        role: test
In this case we have one ingress rule with two pod selectors; if a source pod matches either podSelector, its traffic is allowed in.
ingress:
- from:
  - podSelector:
      matchLabels:
        app: thrift
        tier: backend
- from:
  - podSelector:
      matchLabels:
        role: test
Here we instead have two rules; the two rules are ORed, so the result is the same as in the previous example. These two examples show that policy rules are combined with an OR, not an AND.
1- Using Empty Selectors
A network policy is a namespaced object, i.e. network policies are scoped to the namespace they are deployed to, and a podSelector only selects pods from the current namespace. Let's say we want to allow all the test pods to connect to a specific port of all pods.
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: test
    ports:
    - port: 3000
This spec selects all pods in the namespace using the empty curly brackets and allows traffic from the test pods to port 3000 of those pods. It only applies when the test pods and the pods we are selecting are in the same namespace as the policy; otherwise the policy has to be deployed into that namespace.
To allow traffic from other namespaces, we need a namespace selector, which works similarly to a pod selector.
metadata:
  namespace: test-storage
  name: allow-test-app
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: test
apiVersion: v1
kind: Namespace
metadata:
  name: test-check
  labels:
    purpose: test
    product: check
This is an example of using a namespace selector. Here we have two namespaces, 'test-storage' and 'test-check', and we want to reach our app in 'test-storage'. We specify a namespaceSelector with the label purpose equal to 'test', allowing traffic from all pods in any namespace carrying that label. Note that without a namespaceSelector we cannot allow traffic from another namespace, since a podSelector alone only matches pods in the policy's own namespace.
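If we only want specific pods from those namespaces rather than every pod in them, a namespaceSelector and a podSelector can be combined inside a single from entry, in which case both must match. A minimal sketch, with illustrative labels:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        purpose: test
    podSelector:
      matchLabels:
        role: test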
2- Using ipBlock
The ipBlock selector is also used to specify where traffic can come from. It takes a CIDR block, and within that block we can optionally specify ranges of addresses to exclude.
#allow traffic from a cidr block
- from:
  - ipBlock:
      cidr: 10.55.0.0/16
This rule allows traffic from any address in the 10.55.x.x range; addresses outside that block are not matched by the rule.
#allow traffic from a cidr block, excluding one ip range
- from:
  - ipBlock:
      cidr: 10.0.0.0/8
      except:
      - 10.11.12.0/24
Here we allow traffic from the whole 10.x.x.x range, with an exception: addresses in the 10.11.12.x range are excluded and therefore denied.
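ipBlock can be used for egress as well. As a sketch, a rule that only lets the selected pods reach a single external range (the CIDR here is illustrative) would look like this:

egress:
- to:
  - ipBlock:
      cidr: 203.0.113.0/24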
Policy Engines
Policy complexity grows enormously when you work with microservices; a policy engine allows policies to be managed better in source control. It helps unify the policy definitions and decouple them from the individual services, making it easier to enforce policies across the entire stack.
Policy engines have many use cases, ranging from authorization and data validation to Infrastructure-as-Code policies, cost control, and much more. One notable use case of OPA is cost control, where we scale up only to a certain number of machines under a given condition so as not to break the budget. It also applies to configuration, where policies ensure that minimum and maximum limits are enforced.
Open Policy Agent (OPA)
OPA helps decouple policy from the application, ensuring that continuous changes in policy do not require compiling new code. Policies are written in the Rego language and are easy to deploy. Since a policy without data is not useful, all data is represented in JSON. While policies are normally updated frequently, data changes continuously depending on the use case. There are different ways of providing data to OPA:
- Send input data (username, role, etc.) along with every request; it is used for decision making.
- Push data to OPA's REST API at any rate; it is kept in memory.
- Keep data in a bundle server at a centralized location; OPA fetches it at a predefined rate.
- Pull data from an external source into OPA from within the policy.
1- Rego
Rego is a high-level declarative language for writing policies. It has similarities with SQL, but works with hierarchically structured data (JSON) rather than rows and tables. Being declarative, it is easy to write: you tell it what you want, not how to compute it, and OPA optimizes the query execution.
2- Rego Rules
The format of a Rego rule is:
rule-name IS value IF body
Each line of the body must evaluate to true for the whole body to be true.
allow {
    input.resource == "deployments"
    input.user.team == data.resources.deployments.ownedBy[_]
}
Here, if the input's resource is "deployments" and the user's team in the input matches the ownedBy attribute in the data, the value of allow will be true. The underscore in the brackets [_] iterates over the array, i.e. if any of the ownedBy values matches the user's team, the rule allows.
3- A Simple Policy in Rego
Writing a policy in Rego starts by declaring a package and setting a default value. It is good practice to set the default to not allow, because that is the last resort if it is unclear what to do. We then extend the policy by testing the data provided in the input with comparison operators.
package ben

default allow = false

allow {
    input.user.role == "admin"
}

test_allow_is_false_by_default {
    not allow
}

test_allow_if_role_admin {
    allow with input as {"user": {"role": "admin"}}
}

test_allow_if_role_not_admin {
    not allow with input as {"user": {"role": "viewer"}}
}
Here we are using the policy to allow only an admin. Using the data provided in the input, if the role is "admin" the rule overrides the default and allow becomes true; if the role is viewer (not admin), the rule does not allow. Evaluating the policy and its tests gives the following result:
{
"allow": false,
"test_allow_if_role_admin": true,
"test_allow_if_role_not_admin": true,
"test_allow_is_false_by_default": true
}
TL;DR
- Open Policy Agent can be deployed to Kubernetes as an admission controller. Once there, it intercepts requests arriving at the API server and validates them against the policies it already holds.
- OPA can be used not only to enforce security policies, but also to enforce many Kubernetes best practices.
- This article is part of our OPA series; you can check out the other articles here.