
Kubernetes Service Types: ClusterIP vs NodePort vs LoadBalancer

Learn the differences between ClusterIP, NodePort, and LoadBalancer service types in Kubernetes, when to use each one, and why Gateway API is the modern recommendation for exposing services.

When you deploy an application to Kubernetes, you almost always need a way for other things to talk to it — whether that's another service inside the cluster, a user's browser, or a monitoring agent hitting a health-check endpoint. That's exactly what a Kubernetes Service is for.

But not all Services are created equal. Kubernetes ships with three main Service types: ClusterIP, NodePort, and LoadBalancer. Each one controls how and from where traffic can reach your pods, and picking the wrong one can expose too much (or too little) of your application.

This guide walks through each type, explains a few things that aren't always obvious, and ends with a recommendation on how modern Kubernetes deployments handle traffic exposure.

What is a Kubernetes Service?

A Service is a stable network endpoint that sits in front of one or more Pods. Because Pods are ephemeral — they can be rescheduled, restarted, or scaled — their IP addresses change constantly. A Service gives you a single, reliable address (and DNS name) that always routes to the right set of Pods, no matter what happens underneath.
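That DNS name follows a predictable pattern. As a sketch, a Service named my-app in the default namespace could be called from another pod like this (the names here are illustrative):

```shell
# From inside any pod in the cluster:
# the short name resolves within the same namespace
curl http://my-app

# the fully qualified name works from any namespace
curl http://my-app.default.svc.cluster.local
```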

Here's the simplest possible Service definition:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

Notice there's no type field here. When you omit it, Kubernetes defaults to ClusterIP — which brings us to the first type.
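You can confirm the default by applying the manifest and inspecting the result; the TYPE column reads ClusterIP. The output below is illustrative (the assigned IP will differ on your cluster):

```shell
kubectl apply -f service.yaml
kubectl get svc my-app
# NAME     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
# my-app   ClusterIP   10.96.23.41   <none>        80/TCP    5s
```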

ClusterIP — The Default

ClusterIP is the default Service type. Kubernetes assigns the Service a virtual IP address that is only reachable from within the cluster. Nothing outside the cluster — not the internet, not your laptop, not another cluster — can reach a ClusterIP Service directly.

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP   # This is the default, you can omit it
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

When to use ClusterIP

ClusterIP is the right choice for internal service-to-service communication. If you have a frontend talking to a backend, or a backend talking to a database, those services should all be ClusterIP. There's no reason to expose them to the outside world, and keeping them internal reduces your attack surface.

A good rule of thumb: every Service you create should start as a ClusterIP. Only change the type if you have a specific reason to expose it externally.

NodePort — External Access via the Node

NodePort extends ClusterIP by also opening a port on every node in your cluster. Traffic sent to <NodeIP>:<NodePort> is forwarded to the Service, which then routes it to the correct Pods.

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # Must be in the range 30000–32767

When nodePort is omitted, Kubernetes automatically allocates a free port from the 30000–32767 range.
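With the manifest above applied, any node's IP address becomes a valid entry point. A sketch of reaching it from outside the cluster (the node IP here is hypothetical):

```shell
# Find the nodes' addresses
kubectl get nodes -o wide

# Hit the Service through any node on the NodePort
curl http://192.0.2.10:30080
```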

What NodePort actually gives you

A NodePort Service is also a ClusterIP Service — Kubernetes creates the ClusterIP automatically. So you get both internal and external access from a single Service object.

The catch is that NodePort is a bit awkward for production use:

  • You have to manage node IP addresses yourself (which change if nodes are replaced).
  • The port range 30000–32767 looks ugly in URLs and often has to be firewalled open.
  • If a node goes away, clients pointing at its IP lose connectivity until they retry a different node.

When to use NodePort

NodePort is useful for development and testing when you need quick external access without setting up a load balancer. It also works well in bare-metal or on-premises environments where you manage your own edge infrastructure and want to route traffic to cluster nodes yourself.

LoadBalancer — NodePort with Cloud Integration

Here's something that surprises many people: a LoadBalancer Service is just a NodePort Service with an extra step. When you set type: LoadBalancer, Kubernetes does everything NodePort does (opens a port on every node, creates a ClusterIP) and then asks the cloud provider's controller to provision an external load balancer and point it at those node ports.

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

After a moment, kubectl get svc my-app will show an EXTERNAL-IP — the address of the cloud load balancer that was automatically created.
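While the cloud controller is still provisioning, the EXTERNAL-IP column shows &lt;pending&gt;; once it finishes, the load balancer's address appears. The output below is illustrative, but note the PORT(S) column: the 80:31377/TCP mapping reveals the underlying NodePort that Kubernetes opened:

```shell
kubectl get svc my-app
# NAME     TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
# my-app   LoadBalancer   10.96.23.41   203.0.113.50   80:31377/TCP   2m
```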

The cloud integration secret

The LoadBalancer type isn't magic built into Kubernetes itself. It's a hook that cloud-provider controllers (AWS, GCP, Azure, etc.) listen for. When they see a Service of type LoadBalancer, they call the cloud API on your behalf to provision a load balancer and wire it up to the NodePort that Kubernetes opened.

This also means that you can mimic LoadBalancer behaviour without using type: LoadBalancer. If you're on bare metal or in a private data center, you can:

  1. Create a NodePort Service.
  2. Set up your own load balancer (HAProxy, Nginx, an F5, a hardware appliance — anything) and configure it to send traffic to <any-node-IP>:<nodePort>.

You'll have the same architecture: an external entry point that distributes traffic across nodes, which then forward it to pods. Tools like MetalLB automate exactly this for bare-metal clusters, effectively implementing the LoadBalancer controller that cloud providers ship natively.
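The do-it-yourself load balancer in step 2 can be sketched with HAProxy. This is a minimal illustration, assuming three nodes at hypothetical IPs and the NodePort 30080 from the earlier example:

```
frontend my_app_frontend
    bind *:80
    default_backend my_app_nodes

backend my_app_nodes
    balance roundrobin
    # Every cluster node exposes the same NodePort
    server node1 192.0.2.10:30080 check
    server node2 192.0.2.11:30080 check
    server node3 192.0.2.12:30080 check
```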

When to use LoadBalancer

Use LoadBalancer when you're on a managed cloud (EKS, GKE, AKS, etc.) and need a simple way to expose a single Service to the internet. It's the quickest path to an externally reachable endpoint with no extra tooling.

The downside is cost: each LoadBalancer Service provisions a separate cloud load balancer, which can add up quickly if you have many services to expose.

Side-by-side Comparison

| Feature | ClusterIP | NodePort | LoadBalancer |
| --- | --- | --- | --- |
| Reachable from within cluster | ✅ | ✅ | ✅ |
| Reachable from outside cluster | ❌ | ✅ (via node IP) | ✅ (via cloud LB) |
| Default type | ✅ | ❌ | ❌ |
| Requires cloud provider | ❌ | ❌ | ✅ (or a controller like MetalLB) |
| Good for production external traffic | ❌ | ⚠️ (limited) | ✅ (but one LB per Service) |

The Modern Recommendation: Use Gateway API Instead

If you're designing a new Kubernetes deployment today, there's a strong case for keeping all your Services as ClusterIP and exposing external traffic through the Gateway API instead.

The Gateway API is a set of Kubernetes-native resources (Gateway, HTTPRoute, GRPCRoute, etc.) that give you fine-grained control over how traffic enters your cluster. It replaces both the older Ingress resource and the practice of using NodePort/LoadBalancer Services for external access.

Here's why this matters in practice:

  • One load balancer for everything: A single Gateway can route to dozens of services based on hostname, path, headers, or other rules. You pay for one cloud load balancer, not one per service.
  • Advanced routing: Path-based routing, traffic splitting, retries, timeouts, header manipulation — all declarative and standardised across providers.
  • Better separation of concerns: Platform teams manage the Gateway, application teams manage their HTTPRoute objects. No need for application teams to touch Service types at all.
  • Multi-protocol support: HTTP, HTTPS, TCP, gRPC — all handled by the same API surface.

A simplified example:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
    - name: main-gateway
  hostnames:
    - "my-app.example.com"
  rules:
    - backendRefs:
        - name: my-app   # This is a ClusterIP Service
          port: 80

Your my-app Service stays as ClusterIP. All the external routing logic lives in the HTTPRoute, which is managed independently. This keeps your Services simple and your routing layer flexible.
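The main-gateway that the HTTPRoute attaches to is defined once, typically by a platform team. A minimal sketch, assuming a Gateway API controller is installed and a GatewayClass named external-lb exists (both names are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
spec:
  gatewayClassName: external-lb
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All   # Let HTTPRoutes from any namespace attach
```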

Conclusion

To recap:

  • ClusterIP is the default and is the right choice for internal communication between services. Start here unless you have a specific reason not to.
  • NodePort opens a port on every cluster node and is useful for development, testing, or bare-metal environments where you manage your own edge load balancer.
  • LoadBalancer is NodePort plus automatic cloud load balancer provisioning. Convenient on managed clouds, but creates one load balancer per Service which can get expensive. You can replicate its behaviour on bare metal using NodePort and a tool like MetalLB.
  • For production external traffic, the modern best practice is to keep all Services as ClusterIP and route external traffic through the Gateway API, which is more flexible, provider-agnostic, and cost-efficient.