Kubernetes – Services and Labels
January 31, 2019, by Eric Shanks

If you've been following the series, you may be thinking that we've built ourselves a problem. You'll recall that we've now learned about Deployments so that we can roll out new pods when we do upgrades, and that replica sets can spin up new pods when one dies. Sounds great, but remember that each of those pods has a different IP address. Now, I know we haven't accessed any of those pods yet, but you can imagine that it would be a real pain to have to go look up an IP address every time a pod was replaced, wouldn't it? This post covers Kubernetes Services and how they address this problem, and at the end of this post, we'll access one of our pods … finally.
Services – The Theory
In the broadest sense, Kubernetes Services tie our pods together and provide a front-end resource to access them. You can think of a Service like a load balancer that automatically knows which servers it should balance traffic across. Since our pods may be created and destroyed even without our intervention, we need a stable way to reach them at a single address every time. Services give us a static resource to access that abstracts the pods behind it.
OK, it's probably not hard to understand that a Service sits in front of our pods and distributes requests to them, but you might be asking how the Service knows which pods it should be fronting. After all, we'll have pods running in our cluster for different reasons, and we need to assign Services to our pods somehow. That introduces a really useful concept called labels. If you were paying close attention in previous posts, we already had labels on some of our pods. A label is just a key/value pair, or tag, that provides metadata on our objects. A Kubernetes Service can select the pods it is supposed to abstract through a label selector. Neat, huh?
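As a quick illustration, labels live in an object's metadata. Here's a minimal sketch of a standalone pod manifest carrying labels; the pod name and the environment label are hypothetical, just for demonstration:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-example #hypothetical pod name, for illustration only
  labels:
    app: nginx #a key/value pair a Service selector can match on
    environment: dev #objects can carry more than one label
spec:
  containers:
  - name: nginx-container
    image: nginx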
Take a look at the example below. Here we have a single Service that is front-ending two of our pods. The two pods carry the label "app: nginx" and the Service has a label selector that is looking for that same label. This means that even though the pods might change addresses, as long as they are labeled correctly, the Service, which keeps a constant address, will send traffic to them.
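Label selectors aren't unique to Services; you can use the same matching from kubectl. For example, once the manifests later in this post are deployed, listing only the pods a selector would match looks like this (assuming the app: nginx label from those manifests):

kubectl get pods -l app=nginx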
You might also notice that there is a Pod3 with a different label. Service 1 won't front-end that pod, so we'd need another Service to take care of it for us. We'll use a Service to access our nginx pod later in this post, but remember that many apps have multiple tiers: web talks to app, which talks to database. In that scenario, all three of those tiers may need a consistent Service so the others can communicate properly, all while pods are spinning up and down.
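For instance, a hypothetical Service for a database tier might select on a different label entirely. The names and port below are illustrative assumptions, not part of this post's manifests:

apiVersion: v1
kind: Service
metadata:
  name: db-service #hypothetical Service name, for illustration only
spec:
  ports:
  - port: 5432 #assuming a PostgreSQL-style backend
    targetPort: 5432
    protocol: TCP
  selector:
    app: database #only pods labeled app: database are selected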
The way Services send traffic to the backend pods is through the use of kube-proxy. Every node of our Kubernetes cluster runs a proxy called kube-proxy. This proxy watches the master node API for Services as well as endpoints (covered in a later post). Whenever it finds a new Service, kube-proxy opens a random port on the node it runs on, and this port proxies connections to the backend pods.
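We'll dig into endpoints in that later post, but if you're curious, you can already peek at which pod addresses a Service is tracking. Using the ingress-nginx Service we'll create shortly (your pod IPs will differ):

kubectl get endpoints ingress-nginx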
Services and Labels – In Action
I know you're eager to see whether your cluster is really working, so let's deploy a Deployment manifest like the one we built in a previous post, plus a Service to front-end that Deployment. When we're done, we'll open it in a web browser to see an amazing webpage.
Let's start off by creating a new manifest file and deploying it to our Kubernetes cluster. The file below contains two objects, a Deployment and a Service, separated by "---" in the same file.
apiVersion: apps/v1 #version of the API to use
kind: Deployment #What kind of object we're deploying
metadata: #information about our object we're deploying
  name: nginx-deployment #Name of the deployment
  labels: #A tag on the deployment created
    app: nginx
spec: #specifications for our object
  replicas: 2 #The number of pods that should always be running
  selector: #which pods the replica set should be responsible for
    matchLabels:
      app: nginx #any pods with labels matching this I'm responsible for.
  template: #The pod template that gets deployed
    metadata:
      labels: #A tag on the pods created from this template
        app: nginx
    spec:
      containers:
      - name: nginx-container #the name of the container within the pod
        image: nginx #which container image should be pulled
        ports:
        - containerPort: 80 #the port of the container within the pod
---
apiVersion: v1 #version of the API to use
kind: Service #What kind of object we're deploying
metadata: #information about our object we're deploying
  name: ingress-nginx #Name of the service
spec: #specifications for our object
  type: NodePort #Ignore for now; discussed in a future post
  ports: #Ignore for now; discussed in a future post
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30001
    protocol: TCP
  selector: #Label selector used to identify pods
    app: nginx
We can deploy this by running a command we should be getting very familiar with at this point.
kubectl apply -f [manifest file].yml
After the manifest has been deployed, we can look at the pods, replica sets, or deployments like we have before, and now we can also look at our Services by running:
kubectl get services
Neat! Now we have a Service named ingress-nginx, which we defined in our manifest file. We also have a Service named kubernetes which, for now, we'll ignore; just know it is used to run the cluster. But do take a second to notice the ports column. Our ingress-nginx Service shows 80:30001/TCP. This will be discussed further in a future post, but the important thing is that the port after the colon ":" is the port we'll use to access the Service from our computer.
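The output will look roughly like this; the ClusterIP addresses and ages will differ in your cluster:

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
ingress-nginx   NodePort    10.110.210.31   <none>        80:30001/TCP   1m
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        1d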
Here's the real test: can we open a web browser, enter the IP address of one of our Kubernetes nodes on port 30001, and get an nginx page?
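You can also test from the command line with curl. The 10.10.50.50 address is the node IP used in this post's lab; substitute the address of one of your own nodes:

curl http://10.10.50.50:30001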
Summary
Well, the result isn't super exciting. We have a basic nginx welcome page, which isn't really awe-inspiring, but we did finally access an app on our Kubernetes cluster, and we did it by using Services and Labels coupled with the pods we've been learning about. Stay tuned for the next post, where we dive deeper into Kubernetes.
About The Author
Eric Shanks is a Principal Technical Marketing Manager at Portworx.
Comments
Thanks for this great tutorial. I am learning a lot. One question, though: for the ingress-nginx service, the cluster IP is 10.110.210.31 and the type is NodePort, but you used 10.10.50.50 to access it from the browser. That's the public IP, I believe. Where did you get the public IP address?
With NodePort, you access the service at the IP address of a Kubernetes node hosting the container, plus the node port. The ClusterIP is internal to the cluster.
Do the labels in 'spec.selector' and 'spec.template.metadata.labels' always have to be the same, or can they be different? Something like this:

---
spec: #specifications for our object
  replicas: 2 #The number of pods that should always be running
  selector: #which pods the replica set should be responsible for
    matchLabels:
      app: wordpress
      tier: backoffice
  template: #The pod template that gets deployed
    metadata:
      labels: #A tag on the pods created from this template
        app: wordpress
---

If they must always be the same, why do we need to fill them in twice in the file?
If they can be different, can you provide an example of a situation in which that would be useful?
All this is easy when you always use the same label, "app=nginx", but it doesn't help in understanding the concept. Does the Service select a Pod with the given label, or a Deployment with the given label?
You should change the labels to unique ones wherever they aren't meant to refer to the exact same thing; then it makes much more sense!