Kubernetes Basics - Load Balancer Service

Exposing a deployment using a Load Balancer service

Load Balancer Example

Download the YAML configuration files for the Load Balancer example

Deploy the App

kubectl apply -f deploy-app.yaml

Applies the configuration from the YAML file to create the Nginx deployment.
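
If the apply succeeds, kubectl confirms the created resource (exact wording may vary slightly between kubectl versions):

deployment.apps/deploy-nginx created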

Deploy the Load Balancer Service

kubectl apply -f loadbalancer.yaml

Creates a Load Balancer service to expose the deployment.
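
Expected confirmation (wording may vary slightly by kubectl version):

service/svc-example created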

Get the Pods List

kubectl get pods -o wide

Lists all pods with additional details including IP addresses.
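
Example output; the pod name suffixes, IP addresses, and node name are illustrative and will differ in your cluster:

NAME                           READY   STATUS    RESTARTS   AGE   IP          NODE             NOMINATED NODE   READINESS GATES
deploy-nginx-7d4b9c6f5-abcde   1/1     Running   0          30s   10.1.0.10   docker-desktop   <none>           <none>
deploy-nginx-7d4b9c6f5-fghij   1/1     Running   0          30s   10.1.0.11   docker-desktop   <none>           <none>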

Use the Load Balancer

kubectl get svc -o wide

Shows the service details, including the external IP assigned to the load balancer (reported as localhost on Docker Desktop).
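
Example output; the cluster IP and node port are illustrative. On Docker Desktop the EXTERNAL-IP column reads localhost, while a cloud provider would show a public IP here:

NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
svc-example   LoadBalancer   10.96.123.45   localhost     8080:30080/TCP   15s   app=nginx,env=prod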

Cleanup

kubectl delete -f loadbalancer.yaml

Deletes the Load Balancer service.
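
Expected confirmation (wording may vary by kubectl version):

service "svc-example" deleted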

Cleanup Deployment

kubectl delete -f deploy-app.yaml

Deletes the Nginx deployment and pods.
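
Expected confirmation (wording may vary by kubectl version):

deployment.apps "deploy-nginx" deleted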

deploy-app.yaml Configuration File

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx
  labels:
    app: nginx
    env: prod
spec:
  replicas: 2
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: nginx
      env: prod
  template:
    metadata:
      labels:
        app: nginx
        env: prod
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80

YAML Configuration Explanation:

Deployment Structure:

  • apiVersion: apps/v1 → Specifies the Kubernetes API version for Deployments
  • kind: Deployment → Defines this as a Deployment resource
  • metadata.name: deploy-nginx → Names the deployment "deploy-nginx"
  • metadata.labels → Applies labels for resource identification

Deployment Spec:

  • replicas: 2 → Creates 2 identical pod instances
  • revisionHistoryLimit: 3 → Keeps 3 old ReplicaSets for rollback capability
  • selector.matchLabels → Defines how the Deployment finds which Pods to manage
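
For example, you can list exactly the Pods this selector matches by querying with the same labels:

kubectl get pods -l app=nginx,env=prod

In apps/v1, the selector must match the labels in the Pod template below; otherwise the Deployment is rejected.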

Pod Template:

  • template.metadata.labels → Labels applied to created Pods
  • template.spec.containers → Defines the container specification
  • name: nginx → Names the container "nginx"
  • image: nginx:alpine → Uses the lightweight Alpine-based Nginx image
  • containerPort: 80 → Declares that the container listens on port 80 for web traffic
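
Once the Deployment is applied, you can confirm the template values that were actually stored, for example the container image:

kubectl get deployment deploy-nginx -o jsonpath='{.spec.template.spec.containers[0].image}'

This prints nginx:alpine.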

How It Works:

This Deployment creates and maintains 2 identical Nginx pods. The Deployment controller ensures that the specified number of pod replicas is running at all times; if a pod fails or is deleted, the controller replaces it automatically.
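
A quick way to see this self-healing: delete one pod and list the pods again; the controller immediately creates a replacement to return to 2 replicas. The pod name below is illustrative, so substitute one from your own kubectl get pods output:

kubectl delete pod deploy-nginx-7d4b9c6f5-abcde
kubectl get pods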

loadbalancer.yaml Configuration File

apiVersion: v1
kind: Service
metadata:
  name: svc-example
spec:
  type: LoadBalancer
  selector:
    app: nginx
    env: prod
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80

YAML Configuration Explanation:

Service Structure:

  • apiVersion: v1 → Specifies the core Kubernetes API version
  • kind: Service → Defines this as a Service resource
  • metadata.name: svc-example → Names the service "svc-example"

Service Spec:

  • type: LoadBalancer → Creates an external load balancer in cloud environments
  • selector → Identifies which pods to route traffic to based on labels
  • app: nginx, env: prod → Matches pods with these labels
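
You can confirm which pods the selector matched by listing the service's endpoints; the pod IPs shown are illustrative:

kubectl get endpoints svc-example

NAME          ENDPOINTS                     AGE
svc-example   10.1.0.10:80,10.1.0.11:80     20s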

Port Configuration:

  • protocol: TCP → Uses TCP protocol for communication
  • port: 8080 → The service exposes port 8080 externally
  • targetPort: 80 → Routes traffic to port 80 on the pods

How It Works:

This LoadBalancer service exposes the Nginx deployment externally. On Docker Desktop, the external IP is reported as localhost; in a cloud environment, the cluster would provision an external load balancer with a public IP. Traffic arriving on port 8080 of the load balancer is forwarded to port 80 on the Nginx pods.
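
To inspect the full mapping, kubectl describe shows the service port, the target port on the pods, and the endpoints behind the service (abbreviated, illustrative output):

kubectl describe svc svc-example

Name:                     svc-example
Selector:                 app=nginx,env=prod
Type:                     LoadBalancer
LoadBalancer Ingress:     localhost
Port:                     <unset>  8080/TCP
TargetPort:               80/TCP
Endpoints:                10.1.0.10:80,10.1.0.11:80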

Accessing the Service:

After applying this configuration, you can access your Nginx application at http://localhost:8080 when using Docker Desktop. The service load balances traffic between the two Nginx pod replicas.
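
For example, requesting the service from the host should return the default Nginx welcome page (response truncated here):

curl http://localhost:8080

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...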