Before starting this post, I want to congratulate you all for making it this far! Kubernetes is intimidating, especially if you come from a developer background. The fact that you were able to understand the concepts we talked about in the previous articles is a huge feat. Wrapping your head around so many Kubernetes objects is no easy task!

In the last post we looked at configuration and storage in Kubernetes. We saw how Secrets help provide sensitive configuration data to our app and how Persistent Volumes can be used to make sure we don’t lose our data in case some Pods crash. In the final post of this series, we’re going to take a look at how networking works in Kubernetes.

We will see how we can expose the Pods running our application using the Service object. This helps ensure that the different parts of our application (frontend, backend, etc.) are able to communicate with each other within the cluster. Then we will see how we can ease external access to our application by setting up routing using the Ingress object. Lots to cover in this one, so let's begin!


To understand Services, let's look at the problem they solve. If you recall our discussion of Deployments, you'll remember that Deployments create and manage multiple Pods for us.

So we would have a Deployment for the frontend of our application, which would manage multiple Pods running our containerized frontend code. Now this code needs to interact with the backend Pods. But how does it do that? What IP address should it ping? It clearly can't be the IP address of a particular backend Pod, because we know Pods are ephemeral, so we can't rely on their IP addresses.

This is where Services come into the picture. Services allow us to send traffic to Pods that match the labels we specify when creating the Service. Let us look at the YAML for a simple service to understand things better:

apiVersion: v1
kind: Service
metadata:
  name: database-service
spec:
  selector:
    app: database
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

The above YAML should be pretty easy to understand - it creates a Service that will route all traffic to Pods that have the label app: database on them. Coming to the ports section, which might not be so obvious, we specify a port and a targetPort. targetPort is the port on which the selected Pods will be listening; the Service will send requests to the Pods on this particular port. port, on the other hand, is simply the port within the cluster on which we want the Service itself to be exposed. This is needed because we might have multiple Services in our cluster, each of which we would expose on a different port. Another thing worth mentioning is that while TCP is the default network protocol for Services in Kubernetes, you can also use UDP or SCTP instead.

So far, so good, right? Looking at the movies app we deployed on Okteto Cloud, we have a Service that exposes the database for our backend Pods to talk to. You can confirm that this Service exists by running:

kubectl get services

To see what the YAML for this Service looks like, you can run:

kubectl get service mongodb -o yaml

You’ll see that we have specified 3940 as the value of the port, which is arbitrarily chosen. For the targetPort, we specify 27017 because that is the default port a MongoDB database listens on.
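Putting those numbers together, the Service YAML would look roughly like this. This is a sketch, not the exact output of the kubectl command above - in particular, the selector label is an assumption, so check the real output for the labels the movies app actually uses:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  selector:
    app: mongodb        # assumed label; verify against the actual Pod labels
  ports:
    - protocol: TCP
      port: 3940        # arbitrarily chosen port exposed within the cluster
      targetPort: 27017 # default port a MongoDB database listens on
```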

Now, if you look at the backend code, you'll see that we use the name of this Service, which is mongodb, to connect to the database. This works because Kubernetes automatically makes Service names resolvable as DNS names inside the cluster. The URL used to connect to the database is:

const url = `mongodb://${process.env.MONGODB_USERNAME}:${encodeURIComponent(process.env.MONGODB_PASSWORD)}@${process.env.MONGODB_HOST}:3940/${process.env.MONGODB_DATABASE}`;

It uses the MONGODB_HOST environment variable, which is set to the name of the Service, that is, mongodb. After that, we also specify the port we arbitrarily chose when creating our Service (3940).
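For reference, the backend Deployment would set MONGODB_HOST to the Service name in its Pod spec - something roughly like the sketch below. The container name and the Secret name here are hypothetical; the environment variable names come from the URL above, and the Secret wiring follows the pattern we saw in the previous post on configuration:

```yaml
containers:
  - name: api
    env:
      - name: MONGODB_HOST
        value: mongodb   # the Service name doubles as a DNS name inside the cluster
      - name: MONGODB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mongodb-credentials   # hypothetical Secret name
            key: password
```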

And voila! This is all the K8s magic we need in order to make different microservices in our application talk to each other. Let’s now take our discussion further and see how we can route incoming traffic to various services.


So you've containerized your application, set up your Deployments, and configured Services to enable communication between the parts of your application - but what's next? If your application is deployed on a cloud provider, you would set up a load balancer pointing to your services. The load balancer would provide you with a static IP address accessible from outside the cluster.
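Kubernetes makes this convenient via Services of type LoadBalancer. A rough sketch, assuming a frontend with the label app: frontend listening on port 3000 (both are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
spec:
  type: LoadBalancer   # asks the cloud provider to provision an external load balancer
  selector:
    app: frontend      # assumed label on the frontend Pods
  ports:
    - port: 80
      targetPort: 3000 # assumed port the frontend container listens on
```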

But is it cost-effective and manageable to set up a separate load balancer for each of your services? This is where Ingresses come to the rescue. Think of an Ingress object as a sort of signboard for vehicles at a road intersection.

sign board

The roads in this metaphor are the different services in our cluster, and the vehicles are the requests coming to our cluster. The Ingress object we create tells incoming requests which Service they should go to. Ingresses don't enable external access to our app by themselves, but they make it easy: you now need just one load balancer from your cloud provider, pointing to the Ingress, and the Ingress takes care of routing all incoming traffic.

ingress flow

Let’s see what the Ingress object for our movies app looks like by running:

kubectl get ingress movies -o yaml

The output you get should look something like this:

spec:
  rules:
  - host:
    http:
      paths:
      - backend:
          service:
            name: frontend
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: api
            port:
              number: 8080
        path: /api
        pathType: Prefix

You'll see that under the spec section, we set up two paths for the different services in our application which we wanted to expose: the frontend service and the backend service. Requests sent to our cluster at /api will be routed to the backend service, which in turn directs them to the backend Pods serving our application. All other requests will be routed to the frontend Pods via the frontend service. This simplifies the management of different routes significantly when compared to the traditional way of doing this - setting up an NGINX proxy per application. Very convenient, right? :)

To sum our discussion up, Ingress isn’t something complicated - it’s just a Kubernetes object which allows us to specify routing rules!

However, there is another part to Ingresses that we didn't touch on in this article. The K8s objects we've seen so far (Pods, Deployments, etc.) come with their controllers preinstalled in the cluster. This is not true for Ingresses. For your Ingress objects to actually do anything, your cluster admin will need to install an ingress controller in the cluster. I won't be covering how to do that, since as developers we rarely have to do it ourselves. If you're using Okteto Cloud, the cluster provided to you already has an ingress controller installed :)

Well, that wraps up our discussion for this article. We learned that different microservices in our application can talk to each other with the help of the Service K8s object. Then we went ahead and looked at Ingresses, which allow us to route incoming traffic to our cluster to the different Services which we’ve created.

I hope this discussion not only enables you to learn more about this topic but also sparks your curiosity enough so you go give things a try yourself. A very easy way to tinker with all this could be to fork the movies app, edit the YAMLs and deploy on Okteto Cloud using a single click!

This post also concludes our Kubernetes for Developers series. There will, however, be one final brief post to wrap things up and point you to resources that would be helpful for you on your Kubernetes journey ahead. Until then, happy hacking! 😄

The "Kubernetes for Developers" blog series has now concluded, and you can find all the posts in this series here.