Kubernetes – Role Based Access

May 20, 2019 | By Eric Shanks

As with all systems, we need to be able to secure a Kubernetes cluster so that not everyone has administrator privileges on it. I know this is a serious drag because no one wants to deal with a permission denied error when trying to get some work done, but permissions are important to ensure the safety of the system, especially when multiple groups are accessing the same resources. We need a way to keep those groups from stepping on each other’s work, and we can do that through role based access controls.

Role Based Access – The Theory

Before we dive too deep, let’s first understand the three pieces in a Kubernetes cluster that are needed to make role based access work: Subjects, Resources, and Verbs.

  • Subjects – Users or processes that need access to the Kubernetes API.
  • Resources – The k8s API objects that you’d grant access to.
  • Verbs – The list of actions that can be taken on a resource.

These three items listed above are used in concert to grant permissions such that a user (Subject) is allowed access to take an action (verb) on an object (Resource).

Now we need to look at how we tie these three items together in Kubernetes. The first step is to create a Role or a ClusterRole. Both of these tie Resources together with Verbs; the difference is that a Role is scoped to a namespace, whereas a ClusterRole applies to the entire cluster.

Once you’ve created your Role or ClusterRole, you’ve tied the Resource and Verb together and are only missing the Subject. To tie the Subject to the Role, a RoleBinding or ClusterRoleBinding is needed. As you can guess, the difference between a RoleBinding and a ClusterRoleBinding is whether it’s applied at the namespace level or for the entire cluster, much like the Role/ClusterRole described above.

It should be noted that you can tie a ClusterRole with a RoleBinding that lives within a namespace. This enables administrators to use a common set of roles for the entire cluster and then bind them to a specific namespace for use.
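To illustrate that last point, here is a minimal sketch of a ClusterRole that gets bound inside a single namespace through a RoleBinding. The names pod-reader, team-a, and team-a-user are hypothetical and only exist for this example:

---
# Defined once for the whole cluster (note: no namespace in metadata).
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# A namespaced binding that reuses the ClusterRole, limiting it to the team-a namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-binding
  namespace: team-a
subjects:
- kind: ServiceAccount
  name: team-a-user
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-reader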

Role Based Access – In Action

Let’s see some of this in action. In this section we’ll create a service account (Subject) that will have full permissions (Verbs) on all objects (Resource) in a single namespace.

We can assume that multiple teams are using our k8s cluster, and we’ll separate them by namespaces so they can’t access each other’s pods in a namespace that isn’t their own. I’ve already created a namespace named “hollowteam” to set our permissions on.

Let’s start by creating our service account, or Subject. Here is a manifest file that can be deployed to our cluster to create the user. There’s not too much to this manifest: just a ServiceAccount with the name “hollowteam-user” and the namespace it belongs to.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hollowteam-user
  namespace: hollowteam

The next item is the Role, which ties the Resources to the Verbs. In the manifest below, we give the Role a name and attach it to a namespace. Below that, we add the rules, which specify the resources and the verbs to tie together. You can see we’re tying resources of “*” (a wildcard for all) to a verb of “*” across the core, extensions, and apps API groups.

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hollowteam-full-access
  namespace: hollowteam
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]

Finally, we tie the Role to the ServiceAccount through a RoleBinding. We give the RoleBinding a name and assign the namespace. After this, we list the Subjects. You could have more than one subject, but in this case we have just one: the ServiceAccount we created earlier (hollowteam-user) and the namespace it’s in. The last piece is the roleRef, which references the Role we created earlier, named “hollowteam-full-access”.

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hollowteam-role-binding
  namespace: hollowteam
subjects:
- kind: ServiceAccount
  name: hollowteam-user
  namespace: hollowteam
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hollowteam-full-access

At this point I assume that you have deployed all these manifest files with the command below:

kubectl apply -f [manifestfile.yml]
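Once the manifests are applied, you can optionally sanity-check the binding from your existing admin context using kubectl’s impersonation support. This check isn’t part of the original walkthrough, but it should return “yes” for the hollowteam namespace and “no” elsewhere:

kubectl auth can-i get pods --as=system:serviceaccount:hollowteam:hollowteam-user -n hollowteam
kubectl auth can-i get pods --as=system:serviceaccount:hollowteam:hollowteam-user -n kube-system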

At this point things should be ready to go, but you’re likely still logging in with administrator privileges through the admin.conf KUBECONFIG file. We need to start logging in with different credentials: the hollowteam-user credentials. To do this, we’ll create a new KUBECONFIG file and use it as part of our connection information.

For our hollowteam-user, that means running a few commands to gather the details for the KUBECONFIG file. To start, we need to find the token secret attached to the hollowteam-user ServiceAccount:

kubectl describe sa [user] -n [namespace]
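For this walkthrough, the filled-in version of that command would be:

kubectl describe sa hollowteam-user -n hollowteam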

This shows the details of the example user I created earlier, named “hollowteam-user”. Note down the name of the service account token secret, which in my case is hollowteam-user-token-b25n4.

Next up, we need to grab the client token. It’s stored base64 encoded in the secret we just noted, so we pull it out and decode it:

kubectl get secret [user token] -n [namespace] -o "jsonpath={.data.token}" | base64 -D
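With this example’s token secret name plugged in, the command looks like the one below. Note that the capital -D is the macOS decode flag; on Linux, base64 uses a lowercase -d instead.

kubectl get secret hollowteam-user-token-b25n4 -n hollowteam -o "jsonpath={.data.token}" | base64 -D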

Running this command against my token secret outputs the client token. Note it down for use later.

The last thing we need to gather is the client certificate info. For this, we use the same token secret and run the following command:

kubectl get secret [user token] -n [namespace] -o "jsonpath={.data['ca\.crt']}"
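Again, with this example’s secret name filled in:

kubectl get secret hollowteam-user-token-b25n4 -n hollowteam -o "jsonpath={.data['ca\.crt']}"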

Copy down this output as well; it becomes the certificate-authority-data in our KUBECONFIG file.

Next, we can take the data we’ve gathered and throw it in a new config.conf file. Use the following as a template and place your details in the file.

apiVersion: v1
kind: Config
preferences: {}
users:
- name: [user here]
  user:
    token: [client token here]
clusters:
- cluster:
    certificate-authority-data: [client certificate here]
    server: https://[ip or dns entry here for cluster]:6443
  name: [cluster name.local]
contexts:
- context:
    cluster: [cluster name.local]
    user: [user here]
    namespace: [namespace]
  name: [context name]
current-context: [set context]

The final product from my config is shown below for reference. (Don’t worry, you won’t be able to use this to access my cluster.)

apiVersion: v1
kind: Config
preferences: {}
users:
- name: hollowteam-user
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJob2xsb3d0ZWFtIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImhvbGxvd3RlYW0tdXNlci10b2tlbi1iMjVuNCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJob2xsb3d0ZWFtLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYjUyYzc4Yi03NWU0LTExZTktYTQ1Ny0wMDUwNTY5MzQyNTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6aG9sbG93dGVhbTpob2xsb3d0ZWFtLXVzZXIifQ.vpqfaqRq1DI55lRcB6YwSn7sqdMspHHPEI7wWQ2XmVB8SqXiF8OW0e1lXc169Z1RcjTQUhSZfWuORm_pGXZBRuh6r1vS7tCxZR-MiCAM144A9_a6I1Q8F2WfAE5bT1q0YvfKMUiWaHLtewWSZG6pCK_USCWAvFP4tgCa5h83WU_Br-fYKt4n7JT2CglC5qnIk8RPxrY7Kj13NthUkKHyVdsCt43zbh82zg8tJqX6yCqcglLKXNxSRkYVKxREF-PLmA0S2lc4UE_GWLlDM5r69_ZyRTN3-qUIqV4k3EJTqdNMPt13SzZ1kguX-hT6NkadZ2VSXG3aKeasC6TVE1T53Q
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1EVXdNVEF5TURJeE5Wb1hEVEk1TURReU9EQXlNREl4TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTXZzCnN0alYyQjJ2YW5pa1BGZDErMVNKaEpUR1lJUkRXR0xjcGhValg4cnZ5cndMcE1zdGlRZFFCK2prTjdZWmk4QmYKVHRVeWp4RXZYUnlmbndGWUhDK3huTUFPaTREbzR5MHpoemdibkxyY2NlVUJoV1pwcnBPeXdUOE1aakltaUI5LwpYL1M4bUFHdU8xRVphOS9kNjhUSXVHelkxZzBlQWUvOG93Rkx4MDBPNkY3dUd1RmwrNitpdEgxdzlUNnczbjFZCnpZbzdoMzlDeElDZGd1YjFMNTV4cWlTbVAyYnpJT2UybmsyclRxOGlKR3VveEM3Q01qaUxzQWFqNjRwemFHNWEKVHNXeXdqY3d0aWgxZGxTTzhjR3oveUNrdHRwTHJqZHhLbGdjRjRCL1pOb0NQQ3FRbU9iYmlid2dvUEZpZnpQdgpVY2lGWWNZUm1PT0FuRmYxRUNFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJcjEvUDljRFFjNDRPaHdkUjQzeG51VnlFM0cKL2FwbTdyTzgrSW54WGc1ZkkzSHdDTjNXSGowYXZYS0VkeHhnWFd5bjNkMVNIR0Y2cnZpYWVvMFNEMjNoUTZkcQp4SW9JaDVudi9WSUZ4OWdXQ1hLMnVGV3RqclUvUTI0aHZVcnRzSTVWV1Uwc0EzWjdoNDVVUUhDa25RVkN4N3NMCjZQYmJiY3F0dm1aQzk0TEhnS0VCUEtPMWpGWjRHcEF6d0ZxSStmWUQ2aHF3dk5kWC9PQVpDRUtLejJSOFZHT2MKS2N4b2k4cVRYV0NYL0x4OU1JSEFEcG1wUjFqT3p1Q3FEQ0RLc0VJdEhidUtWRHdHZzFlMFZPY3Q2Y0Fsc2dsawp5NkNpYW45aE9oYnVBTm00eHZGbTFpK2tBeFNRTkczb0x1d3VrU0NMMHkyNlYyT3FZcFMzdGFYVGkwdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.50.160:6443
  name: hollowcluster.local
contexts:
- context:
    cluster: hollowcluster.local
    user: hollowteam-user
    namespace: hollowteam
  name: hollowteam
current-context: hollowteam

Save this file and then point kubectl at it, which is usually done by setting your KUBECONFIG environment variable. On my Mac laptop I used:

export KUBECONFIG=~/.kube/newfile.conf
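To confirm kubectl is now using the new file, you can check the active context, which should come back as hollowteam based on the current-context set in the config above:

kubectl config current-context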

Now it’s time to test. If we did everything right, the logged-in entity should only have access to the hollowteam namespace. Let’s first make sure we can access our hollowteam namespace’s resources by running a get pods command.
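Because the context in the new KUBECONFIG already sets the hollowteam namespace, a plain kubectl get pods runs against it; specifying the namespace explicitly works the same way:

kubectl get pods -n hollowteam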

The first command works and shows that we can access the pod (which I pre-created for this demo), and everything looks fine there. What happens if we try a command that touches things outside of the namespace?
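For example, listing pods in another namespace such as kube-system should be rejected. The exact wording varies by Kubernetes version, but the failure looks roughly like this:

kubectl get pods -n kube-system

Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:hollowteam:hollowteam-user" cannot list resource "pods" in API group "" in the namespace "kube-system"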

Here we see that we get an error message when we try to access pods that are outside of our namespace. Again, this was the desired result.

Summary

Role Based Access controls are part of the basics for most platforms, and Kubernetes is no different. It’s useful to carefully plan out your roles so that they can be reused for multiple use cases, but I’ll leave that part up to you. I hope the details in this post help you segment your existing cluster by permissions, but don’t forget that you can always create more clusters for separation purposes as well. Good luck in your container endeavors.