Kubernetes – Ingress

February 13, 2019 | By Eric Shanks

It’s time to look closer at how we access our containers from outside the Kubernetes cluster. We’ve talked about Services with NodePorts, LoadBalancers, etc., but a better way to handle ingress might be to use an ingress-controller to proxy our requests to the right backend service. This post will take us through how to integrate an ingress-controller into our Kubernetes cluster.

Ingress Controllers – The Theory

Let's first talk about why we'd want to use an ingress controller in the first place. Take an example web application like you might have for a retail store. That web application might have an index page at "http://store-name.com/", a shopping cart page at "http://store-name.com/cart", and an API URI at "http://store-name.com/api". We could build all of these into a single container, but perhaps each becomes its own set of pods so that they can scale out independently. If the API needs more resources, we can just increase the number of pods for the API service and leave the / and /cart services alone. Splitting things up also lets multiple teams work on different parts simultaneously, but that's drifting off the point, which hopefully you get by now.

OK, so assuming we have these different services, if we're running an on-prem Kubernetes cluster we'd need to expose each of those services to the external networks. We've done this in the past with a NodePort. The problem is, we can't expose each app individually and have it work well, because each service would need a different port. Imagine your users having to know which port to use for each part of your application; it would be unusable.

A much simpler solution would be to have a single point of ingress (you see where I’m going) and have this “ingress-controller” define where the traffic should be routed.

Ingress with a Kubernetes cluster comes in two parts.

  1. Ingress Controller
  2. Ingress Resource

The ingress-controller is responsible for doing the routing to the right places and can be thought of like a load balancer. It can route requests to the right place based on a set of rules applied to it. These rules are called an “ingress resource.” If you see ingress as part of a Kubernetes manifest, it’s likely not an ingress controller, but a rule that should be applied on the controller to route new requests. So the ingress controller likely is running in the cluster all the time and when you have new services, you just apply the rule so that the controller knows where to proxy requests for that service.

Ingress Controllers – In Action

For this scenario we're going to deploy the objects depicted in this diagram: an ingress controller, a default backend, and two apps with different host names. Also notice that we'll be using an NGINX ingress controller and that it's set up in a different namespace, which helps keep the cluster secure and clean for anyone who shouldn't need to see the controller deployment.

First, we'll set the stage by deploying our namespace and a ConfigMap that holds information used by our ingress controller. To do that, let's first start with a manifest file for our namespace.
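The namespace manifest is short; a minimal sketch looks like this (the `ingress-nginx` namespace name is the upstream convention, and the rest of this walkthrough assumes it):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
```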

Once you deploy your namespace, we can move on to our ConfigMap, which holds information about our environment that the ingress controller uses when it starts up.
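The ConfigMap can start out empty and be populated with NGINX tuning options later; a minimal version, assuming the `nginx-configuration` name that the controller's `--configmap` flag will reference, might look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
```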

Lastly, before we get to our ingress objects, we need to deploy a service account with permissions to deploy and read information from the Kubernetes cluster. Here is a manifest that can be used.
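An abbreviated sketch of that service account and its RBAC bindings is below; the full version in the upstream ingress-nginx repo grants a few more permissions, but the core idea is a ServiceAccount bound to a ClusterRole that can read Services, Endpoints, and Ingress resources (the object names here are assumptions):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  # Watch cluster state the controller needs to build its routing config
  - apiGroups: [""]
    resources: ["configmaps", "endpoints", "nodes", "pods", "secrets"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch"]
  # Read ingress rules and update their status
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
```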

Now that those prerequisites are built, we’ll deploy a default backend for our controller. This is just a pod that will handle any requests where there is an unknown route. If someone accessed our cluster at http://clusternodeandport/bananas we wouldn’t have a route that handled that so we’d point it to the default backend with a 404 error in it. NGINX has a sample backend that you can use and that code is listed below. Just deploy the manifest to the cluster.
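A sketch of that default backend, using the sample `defaultbackend` image Kubernetes publishes (which answers 404 on every path except its health check), looks something like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
        - name: default-http-backend
          image: k8s.gcr.io/defaultbackend-amd64:1.5
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
spec:
  selector:
    app: default-http-backend
  ports:
    - port: 80
      targetPort: 8080
```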

Now that the backend service and pods are configured, it's time to deploy the ingress controller through another deployment file. This info came right from the kubernetes.github.io/ingress-nginx site for reference.
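A trimmed-down sketch of that controller Deployment is below; the controller version and some flags will vary with the upstream manifest you pull, so treat the image tag here as an assumption and prefer the manifest published on the ingress-nginx site:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
          args:
            - /nginx-ingress-controller
            # Route unknown requests to the default backend we deployed earlier
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            # Pick up environment settings from our ConfigMap
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
```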

To deploy your manifest, it's again the familiar kubectl apply:
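Assuming you saved the controller manifest locally (the filename here is just an example):

```shell
kubectl apply -f nginx-ingress-controller.yaml
```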

Congratulations, your controller is deployed. Let's now deploy a Service that exposes the ingress controller to the outside world, in our case through a NodePort.
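A NodePort Service for the controller might look like the sketch below; it selects the controller pods by the same label used in the Deployment and lets Kubernetes pick the external ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```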

Deploy the Service with:
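Again assuming a local file (the filename is an example):

```shell
kubectl apply -f ingress-nginx-service.yaml
```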

Before we go any further, let's get the port that was assigned to that service by running:
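Assuming the Service name and namespace used above:

```shell
kubectl get service ingress-nginx --namespace ingress-nginx
```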

As you can see from my screenshot, the outside port for HTTP traffic is 30425. Your results may differ, so check with the get service command shown above.

At this point you can try to access your nginx controller through a web browser and it should return an nginx 404 error.

Now we'll deploy our application, called hollowapp. The manifest file below has a standard Deployment and a Service (which isn't exposed externally). It also has a new resource of kind "Ingress," which is the ingress rule to be applied on our controller. The main thing to note is that anyone who accesses the ingress controller with a host name of hollowapp.hollow.local will be routed to our service. This means we need to set up a DNS record pointing at our Kubernetes cluster for this resource. I've done this in my lab, and you can change it to meet your own needs in yours.
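A sketch of that combined manifest is below. The container image and port are placeholders for whatever app you run; the Ingress resource uses the `extensions/v1beta1` API that was current when this post was written:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hollowapp
  labels:
    app: hollowapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hollowapp
  template:
    metadata:
      labels:
        app: hollowapp
    spec:
      containers:
        - name: hollowapp
          image: theithollow/hollowapp-blog:latest  # placeholder image tag
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: hollowapp
spec:
  # No NodePort here; only the ingress controller reaches this Service
  selector:
    app: hollowapp
  ports:
    - port: 80
      targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hollowapp
spec:
  rules:
    # Requests with this Host header get proxied to the hollowapp Service
    - host: hollowapp.hollow.local
      http:
        paths:
          - path: /
            backend:
              serviceName: hollowapp
              servicePort: 80
```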

Apply the manifest above, or a modified version for your own environment with the command below:
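For example (filename assumed):

```shell
kubectl apply -f hollowapp.yaml
```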

Now we can test out our deployment in a web browser. First, let's make sure that something is returned when we hit the Kubernetes ingress controller via the web browser; I'll use the IP address in the URL.

So that is the default backend providing a 404 error, as it should. We don't have an ingress rule for accessing the controller via an IP address, so it used the default backend. Now let's try again using the hostname from the manifest file, which in my case was http://hollowapp.hollow.local.

Before we do that, here's a screenshot showing that the host name maps to the exact same IP address we used to test the 404 error.

When we access the cluster by the hostname, we get our new application we’ve deployed.


The examples shown in this post can be augmented by adding a load balancer outside the cluster, but you should have a good idea of what an ingress controller can do for you. Once it's up and running, it provides a single resource to access from outside the cluster, with many services running behind it. Other, more advanced controllers exist as well, so this post should just serve as an example. Ingress controllers can do all sorts of things, including handling TLS, monitoring, and session persistence. Feel free to check out your ingress options from all sorts of sources, including NGINX, Heptio, Traefik, and others.