Kubernetes – Service Publishing

February 5, 2019 | By Eric Shanks

A critical part of deploying containers within a Kubernetes cluster is understanding how they use the network. In previous posts we've deployed pods and services and were able to access them from a client such as a laptop, but how did that work exactly? I mean, we had a bunch of ports configured in our manifest files, so what do they mean? And what do we do if more than one pod wants to use the same port, such as 443 for HTTPS?

This post will cover three options for publishing our services for accessing our applications.

ClusterIP – The Theory

Whenever a Service is created, a unique IP address is assigned to it, and that IP address is called the "ClusterIP". Since we need our Services to stay consistent, the IP address of a Service needs to stay the same. Remember that pods come and go, but Services need to be stable so that we always have an address to reach our applications.

Ok, big deal, right? A Service having an IP assigned to it probably doesn't surprise anyone, but what you should know is that the ClusterIP isn't accessible from outside of the Kubernetes cluster. It's an internal IP only, meaning that pods can use a Service's ClusterIP to communicate with each other, but we can't just put this IP address in our web browser and expect to get connected to the service in our Kubernetes cluster.

NodePort – The Theory

NodePort might seem familiar to you. That's because we used a NodePort when we deployed our sample deployment in the Services and Labels post. A NodePort exposes a service on each node's IP address on a specific port. NodePort doesn't replace ClusterIP, however; all it does is direct traffic from outside the cluster to the ClusterIP.

Some important information about NodePorts: you can't just publish your service on any port you want. For example, if you have a web container deployed, you'd likely be tempted to use a NodePort of 443 so that you can use the standard HTTPS port. This won't work, since a NodePort must be within the port range 30000-32767. You can specify which port you want to use as long as it's in this range, which is what we did in our previous post, but be sure it doesn't conflict with another service in your cluster. If you don't care which port it is, don't specify one and the cluster will randomly assign one for you.
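To keep the three port fields straight, here's an annotated Service snippet (the names and ports are illustrative, not from our deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service   # illustrative name
spec:
  type: NodePort
  ports:
  - port: 80          # the Service's ClusterIP port, used inside the cluster
    targetPort: 80    # the port the container actually listens on
    nodePort: 30080   # the external port on every node; must be 30000-32767
  selector:
    app: web          # pods with this label receive the traffic
```

If you omitted the nodePort line entirely, the cluster would pick a free port in that range for you.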

LoadBalancer – The Theory

LoadBalancers are not going to be covered in depth in this post. What you should know about them right now is that they won't work if your cluster is built on-premises, such as on a vSphere environment. If you're using a cloud service like Amazon Elastic Container Service for Kubernetes (EKS) or another cloud provider's k8s solution, then you can specify a load balancer in your manifest file. The cloud provider would spin up a load balancer and point it at your service. This would allow you to use port 443, for example, on your load balancer and direct traffic to one of those 30000-or-higher ports.
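While we won't cover it in depth, a LoadBalancer service manifest looks almost identical to the others; here's a hedged sketch (names are illustrative, and it assumes a cloud provider that can provision load balancers):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb        # illustrative name
spec:
  type: LoadBalancer  # asks the cloud provider to provision an external load balancer
  ports:
  - port: 443         # the port exposed on the load balancer
    targetPort: 8443  # the port the container listens on
    protocol: TCP
  selector:
    app: web          # pods with this label receive the traffic
```

On an on-premises cluster with no load balancer integration, this service would simply sit in a "pending" state waiting for an external IP that never arrives.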

ClusterIP – In Action

As we mentioned, we can't access our containers from outside the cluster by using just a ClusterIP, so we'll deploy a test container within the Kubernetes cluster and run a curl command against our nginx service. The picture below describes the process we'll be testing.

First, let's deploy our manifest, which includes our service and our nginx deployment.

---
apiVersion: apps/v1 
kind: Deployment 
metadata: 
  name: nginx-deployment 
  labels: 
    app: nginx
spec: 
  replicas: 2 
  selector: 
    matchLabels:
      app: nginx 
  template: 
    metadata:
      labels: 
        app: nginx
    spec:
      containers:
      - name: nginx-container 
        image: nginx 
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx

We can deploy the manifest via:

kubectl apply -f [manifest file].yml

Once our pods and service have been deployed, we can look at the service information to find the ClusterIP.
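We can list the service with kubectl; the output below is illustrative of what you'd see in your terminal (the values will differ in your cluster):

```
kubectl get service ingress-nginx
# NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
# ingress-nginx   ClusterIP   10.101.199.109   <none>        80/TCP    1m
```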

We can see that the ClusterIP for our ingress-nginx service is 10.101.199.109. So next, we’re going to deploy another container just to run a curl command from within the cluster. To do this we’ll run some imperative commands instead of the declarative manifest files.

kubectl create -f https://k8s.io/examples/application/shell-demo.yaml
kubectl exec -it shell-demo -- /bin/bash

Once you’ve run the two commands above, you’ll have a new pod named shell-demo and you’ve gotten an interactive terminal session into the container. Now we need to update the container and install curl.

apt-get update
apt-get install curl
curl [CLUSTERIP]

As you can see, we can communicate with this service by using the ClusterIP from within the cluster.
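As a side note, if your cluster runs a DNS add-on such as CoreDNS (standard in most clusters), pods can also reach a service by its name instead of its ClusterIP. A quick sketch from inside the shell-demo pod, assuming our service lives in the default namespace:

```
curl http://ingress-nginx
# or the fully qualified form:
curl http://ingress-nginx.default.svc.cluster.local
```

This is usually how applications inside the cluster find each other, since names survive a service being deleted and recreated.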

To stop this test, exit the interactive shell with ctrl+d. Then we can delete the test pod by running:

kubectl delete pod shell-demo

Then we can remove our service and pod by running:

kubectl delete -f [manifest file].yml

NodePort – In Action

In this test, we’ll access our backend pods through a NodePort from outside our Kubernetes cluster. The diagram below should give you a good idea of where the ClusterIP, NodePort and Containers fit in. Additional containers (app) were added to help better understand how they might fit in.

Let's deploy another service and pod, this time specifying a NodePort of 30001. Below is another declarative manifest file.

apiVersion: apps/v1 
kind: Deployment 
metadata: 
  name: nginx-deployment 
  labels: 
    app: nginx
spec: 
  replicas: 2 
  selector: 
    matchLabels:
      app: nginx 
  template: 
    metadata:
      labels: 
        app: nginx
    spec:
      containers:
      - name: nginx-container 
        image: nginx 
        ports:
        - containerPort: 80 
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30001
    protocol: TCP
  selector:
    app: nginx

We can deploy our manifest file with:

kubectl apply -f [manifest file].yml

Once it’s deployed we can look at the services again. You’ll notice this is the same command as we ran earlier but this time there are two ports listed. The second port is whats mapped to our NodePort.

If we access the IP Address of one of our nodes with the port we specified, we can see our nginx page.
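From any machine that can reach the nodes, that's simply:

```
curl http://[NODE-IP]:30001
```

Any node's IP address will work here, even a node that isn't running one of the nginx pods, since every node forwards NodePort traffic to the service's ClusterIP.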

And that's it. Now we know how we can publish our services both externally and internally. To delete your pods and service, run:

kubectl delete -f [manifest file].yml

Summary

There are different ways to publish your services depending on what your goals are. We'll learn about other ways to expose your services externally in a future post, but for now we've got a few weapons in our arsenal for exposing our pods, whether to other pods inside the cluster or externally to clients.