Kubernetes – Network Policies
October 21, 2019
In the traditional server world, we've gone to great lengths to ensure that we can micro-segment our servers instead of relying on a few select firewalls at strategically defined chokepoints. What do we do in the container world, though? This is where network policies come into play.
Network Policies – The Theory
In a default deployment of a Kubernetes cluster, all of the pods deployed on the nodes can communicate with each other. Some security folks might not like to hear that, but never fear, we have ways to limit the communications between pods and they’re called network policies.
I still see many places today where companies have a perimeter firewall protecting their critical IT infrastructure, but if an attacker were to get through that perimeter, they'd have pretty much free rein to connect to whatever system they wanted. We introduced micro-segmentation with technologies like VMware NSX to make a whole bunch of smaller zones, even to the point of a zone for each virtual machine NIC. This added security didn't come easily, but it is a significant improvement over a few perimeter firewalls protecting everything.
Now before we go forward with this post, I should say that a Kubernetes Network Policy is not a firewall, but it does restrict the traffic between pods so that they can’t all communicate with each other.
We've defined our applications as code already, so why not code in some network security as well by only allowing certain pods to communicate with our pods? For instance, if you have a three-tiered application, you might set policies so the web pods can only communicate with the app pods, and the app pods with the database pods, but never the web pods directly with the database. This is a sounder security design since we're removing attack vectors from our pods.
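As a rough sketch of what that might look like (the tier labels and policy name here are hypothetical, not taken from any manifest in this post), a policy on the app tier could allow ingress only from the web tier:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-tier-policy          # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      tier: app                  # assumed label on the app pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: web              # only web pods may reach the app pods

A mirror policy on the database pods allowing only the app tier would complete the chain.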
Now, setting up a network policy is still done in the standard Kubernetes methodology of applying manifests in a yaml format. If you apply the correct network policies to the Kubernetes API, the network plugin will apply the proper rules for you to restrict the traffic. Not all network plugins can do this, however, so ensure you're using a plugin that supports policies, such as Calico. NOTE: flannel is not capable of enforcing network policies as of the time of this writing.
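If you're not sure which plugin your cluster is running, one quick sanity check (assuming a standard Calico install, where the node agents carry the k8s-app=calico-node label) is to look for the plugin's pods in kube-system:

# list the Calico node agents (the label assumes the standard Calico manifest)
kubectl get pods -n kube-system -l k8s-app=calico-node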
Network Policies can restrict Ingress traffic, Egress traffic, or both if you need. You might want to settle on a strategy to keep your rules straight, though, or you could have a hard time troubleshooting later on. I prefer to restrict only ingress traffic if I can swing it, just to limit the complexity.
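One common pattern that fits that strategy (a sketch, not part of the example later in this post) is a default-deny ingress policy for a namespace, which you then punch holes in per application:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress     # hypothetical policy name
spec:
  podSelector: {}                # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                      # no ingress rules are defined, so all ingress is denied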
Network Policies – In Action
In this section we’ll apply a network policy to limit access to a backend MySQL database pod. In this example, I have an app pod that requires access to a backend MySQL database, but I don’t want some other “rogue” pod to get deployed and have access to that database as well. It COULD have super secret information in it that I need to protect.
Below is a picture of what we’ll be testing.
Below is a Network Policy that will allow ingress traffic over port 3306 to a pod with the label app: hollowdb from any pod with the label app: hollowapp.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hollow-db-policy        # policy name
spec:
  podSelector:
    matchLabels:
      app: hollowdb             # pod the policy is applied to
  policyTypes:
  - Ingress                     # Ingress and/or Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: hollowapp        # pod allowed
    ports:
    - protocol: TCP
      port: 3306                # port allowed
Apply the manifest with:
kubectl apply -f [manifest file].yml
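If you want to double-check that the cluster accepted the policy before testing (an extra step, not strictly required), you can list and describe it:

kubectl get networkpolicy
kubectl describe networkpolicy hollow-db-policy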
Once the policy is in place and our database has been deployed, of course, we can test out the policy. To do that, we'll deploy a mysql container with an allowed label to ensure we can connect to the backend database over port 3306. The interactive command below deploys the container and drops us into a bash prompt where we can run the mysql client.
kubectl run hollowapp --image=mysql -it --labels="app=hollowapp" -- bash
From there we'll try to connect to the hollowdb container with our super secret root password. As you can see, we can log in and show the databases in the container, so the network policy is allowing traffic as intended.
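For reference, the test from inside the hollowapp pod looks roughly like this (assuming a Service named hollowdb fronts the database pod; the actual Service name isn't shown in this post):

# from the bash prompt inside the hollowapp pod
mysql -h hollowdb -u root -p    # hollowdb is an assumed Service name
# once connected:
SHOW DATABASES;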
Now, let's deploy our example rogue container the same way, but with a label that is not "hollowapp", as seen below.
kubectl run rogue1 --image=mysql -it --labels="app=rogue1" -- bash
This time, when we try to connect from the prompt in the rogue container, we are unable to reach the MySQL database.
Summary
Kubernetes clusters allow traffic between all pods by default, but if you’ve got a network plugin capable of using Network Policies, then you can start locking down that traffic. Build your Network Policy manifests and start including them with your application manifests.