Kubernetes Service

Discover Kubernetes Services. This guide offers insights, examples, and practical explanations for managing and exposing applications using Kubernetes Services.

What is a Kubernetes Service?

A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy by which to access them. It provides a stable endpoint (IP address and port) to access a set of Pods, which can change dynamically due to scaling, upgrades, or pod restarts. This abstraction decouples the consumer of the service from the specifics of the underlying Pods.

Why do We Need Services?

  1. Dynamic Nature of Pods: Pods in Kubernetes are ephemeral. They can be created and destroyed frequently due to scaling, upgrades, or failures. A Service provides a consistent network identity and stable access point for these dynamically changing sets of Pods.

  2. Load Balancing: Services can automatically distribute traffic across multiple Pods, ensuring better resource utilization and application reliability.

  3. Decoupling and Abstraction: By using Services, clients do not need to be aware of the dynamic nature and the exact location of Pods. They simply interact with the Service.

  4. Discovery and Naming: Kubernetes provides built-in mechanisms for service discovery and DNS, allowing Pods to discover services through simple DNS lookups.
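
For example, with the default cluster domain (cluster.local), any Pod can reach a Service named webapp-service in the default namespace through a predictable hostname, without knowing any Pod IPs. The Service and namespace names below are illustrative:

# From inside any Pod in the cluster (names are illustrative)
curl http://webapp-service.default.svc.cluster.local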

How Services Work

A Service in Kubernetes operates by defining a set of Pods (via label selectors) and then exposing them via an endpoint that can be accessed consistently. Here’s the basic flow:

  1. Label Selectors: A Service uses label selectors to identify the set of Pods it should target. Any Pod with matching labels will be included in the Service.

  2. Endpoints: Kubernetes creates an endpoint resource that keeps track of the IP addresses and ports of the Pods selected by the Service.

  3. Service Types: Based on the type of Service, Kubernetes configures networking rules to expose the Service either internally or externally.
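
You can inspect the Endpoints object that Kubernetes maintains for a Service, and the Service itself, with kubectl (the Service name here is illustrative):

kubectl get endpoints webapp-service
kubectl describe service webapp-service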

Kubernetes Service Structure

A Kubernetes Service definition is specified in a YAML file and has several important fields that define its behavior. Below is the structure of a typical Kubernetes Service YAML file, followed by a detailed explanation of each field.

apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  labels:
    app: webapp
spec:
  type: LoadBalancer  # or NodePort, ClusterIP, ExternalName
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http
  sessionAffinity: None  # Optional, can be "ClientIP"
  externalTrafficPolicy: Cluster  # Optional, can be "Local"
  loadBalancerIP: 203.0.113.123  # Optional, for static IP
  loadBalancerSourceRanges:  # Optional, restricts IPs that can access the load balancer
  - 192.168.0.0/24
  - 203.0.113.0/24

Explanation of Fields

apiVersion, kind, metadata

  • apiVersion: Specifies the API version (e.g., v1).

  • kind: Specifies the type of Kubernetes resource (e.g., Service).

  • metadata: Provides metadata about the service, including:

    • name: The name of the service.

    • labels: Key-value pairs to organize and select resources.

spec

  • type: Specifies the type of service. Can be ClusterIP (default), NodePort, LoadBalancer, or ExternalName.

    • ClusterIP: Exposes the service on an internal IP in the cluster.

    • NodePort: Exposes the service on a static port on each node's IP.

    • LoadBalancer: Exposes the service externally using a cloud provider's load balancer.

    • ExternalName: Maps the service to a DNS name.

  • selector: Defines the label selector to identify the pods targeted by this service. Pods with matching labels will be part of the service.

  • ports: Defines the ports on which the service is exposed. Each port can include:

    • protocol: The protocol used by the port (e.g., TCP, UDP).

    • port: The port that will be exposed by the service.

    • targetPort: The port on the pod that the traffic will be forwarded to.

    • name: An optional name for the port.

  • sessionAffinity: Controls whether the service uses session affinity, which can be None or ClientIP. ClientIP ensures that requests from the same client IP go to the same pod.

  • externalTrafficPolicy: When set to Local, it preserves the client source IP for traffic coming from outside the cluster and routes that traffic only to Pods on the node that received it. The default, Cluster, spreads traffic across all nodes but may obscure the client IP.

  • loadBalancerIP: (Optional) Specifies a static IP address for the load balancer.

  • loadBalancerSourceRanges: (Optional) Restricts access to the load balancer to specific IP ranges.
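
After saving the manifest (here assumed to be a file called service.yaml), you can apply it and verify how these fields were interpreted:

kubectl apply -f service.yaml
kubectl describe service webapp-service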

Service Types

Kubernetes supports several types of Services, each catering to different use cases:

  1. ClusterIP (default):

    • Purpose: Exposes the Service on an internal IP within the cluster. This type of Service is only accessible from within the cluster.

    • Use Case: Internal microservices communication.

  2. NodePort:

    • Purpose: Exposes the Service on a static port on each Node’s IP. This type of Service can be accessed from outside the cluster by sending a request to <NodeIP>:<NodePort>.

    • Use Case: Simple external access for development and testing.

  3. LoadBalancer:

    • Purpose: Exposes the Service externally using a cloud provider’s load balancer. This type is supported by most cloud platforms (e.g., AWS, GCP, Azure).

    • Use Case: Exposing Services to external clients, usually in production environments.

  4. ExternalName:

    • Purpose: Maps a Service to a DNS name, without creating any proxy or load balancer.

    • Use Case: Integrating with external services using DNS.
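
As a concrete illustration of the NodePort type above, here is a minimal sketch; the names and the nodePort value are placeholders, and nodePort can be omitted entirely to let Kubernetes pick a port from its default range (30000-32767):

apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080  # must fall within the cluster's NodePort range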

Services Without Selectors

Kubernetes Services can also be defined without selectors. In this case, you manually specify the endpoints to which the Service should route traffic. This is useful for:

  1. External Services: Connecting to services outside the Kubernetes cluster.

  2. Static Endpoints: Using specific IP addresses or hostnames for the backend Pods or services.

Example YAML for a Service without selectors that simply maps to an external DNS name (type ExternalName):

apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: example.com
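
For the static-endpoints case, a common pattern is a Service with no selector paired with a manually created Endpoints object whose name matches the Service. The sketch below uses illustrative names, ports, and an example IP address:

apiVersion: v1
kind: Service
metadata:
  name: my-static-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-static-service  # must match the Service name
subsets:
- addresses:
  - ip: 192.0.2.42  # example backend address outside the cluster
  ports:
  - port: 9376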

Multiport Services

A Kubernetes Service can expose multiple ports. This is useful if your Pods run multiple containers or processes that listen on different ports. When a Service defines more than one port, each port must be given a unique name.

Example YAML for a multiport Service:

apiVersion: v1
kind: Service
metadata:
  name: my-multiport-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443

Exposing a Web Application with a Kubernetes Service

Let's say you have a simple web application running in a Docker container. We'll create a Kubernetes Deployment to manage the Pods running the web application, and then expose these Pods using a Kubernetes Service.

Step-by-Step Guide

Create a Docker Image for the Web Application

  1. Assume you have a Dockerfile for your web application. Build and push this Docker image to a container registry (e.g., Docker Hub).

    # Dockerfile
    FROM nginx:alpine
    COPY index.html /usr/share/nginx/html/index.html

    Build and push the Docker image:

    docker build -t <your-dockerhub-username>/webapp:latest .
    docker push <your-dockerhub-username>/webapp:latest

Create a Kubernetes Deployment

  1. The Deployment ensures that a specified number of replicas of your web application are running.

    # deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webapp-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: webapp
      template:
        metadata:
          labels:
            app: webapp
        spec:
          containers:
          - name: webapp
            image: <your-dockerhub-username>/webapp:latest
            ports:
            - containerPort: 80

    Apply the Deployment:

    kubectl apply -f deployment.yaml

Create a Kubernetes Service

  1. The Service will expose your web application Pods. We'll use the LoadBalancer type to expose the Service externally (suitable for cloud environments). For local clusters like Minikube, NodePort can be used instead.

    # service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: webapp-service
    spec:
      type: LoadBalancer # Use NodePort for local clusters
      selector:
        app: webapp
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80

    Apply the Service:

    kubectl apply -f service.yaml

Explanation

  • Deployment: Manages the Pods running the web application.

    • replicas: 3: Ensures that three replicas of the web application Pods are running.

    • selector: Matches Pods with the label app: webapp.

    • template: Defines the Pods, including the container image and the port to expose (80).

  • Service: Exposes the web application Pods.

    • type: LoadBalancer: Requests an external load balancer to route traffic to the Pods (in a cloud environment). For local setups, use type: NodePort.

    • selector: Matches Pods with the label app: webapp.

    • ports: Defines the port configuration. port is the port on which the Service is exposed, and targetPort is the port on the Pods.

Accessing the Web Application

  • LoadBalancer Service: If you are using a cloud environment, the cloud provider will provision a load balancer. You can get the external IP using:

    kubectl get service webapp-service

    It might take a few minutes for the external IP to be assigned. Once assigned, you can access your web application using the external IP.

  • NodePort Service: If you are using Minikube or another local Kubernetes setup, use:

    minikube service webapp-service

    This will open your web application in a web browser.
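
In either case, once you have a reachable address, a quick request from your terminal confirms that the Service is routing traffic to the Pods (the address below is a placeholder):

curl http://<EXTERNAL-IP>/   # or http://<NodeIP>:<NodePort> for a NodePort Service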

By following these steps, you have successfully created a Kubernetes Deployment to manage your web application Pods and exposed them using a Kubernetes Service. This setup ensures that your web application is scalable, highly available, and easily accessible.

Conclusion

Kubernetes Services are fundamental building blocks for managing network access to sets of Pods. They provide a stable endpoint, load balancing, service discovery, and can handle dynamic changes in Pod lifecycles. Understanding the different types of Services and how to configure them is crucial for building scalable and resilient Kubernetes applications. By leveraging Services, you can ensure your applications are accessible, load-balanced, and resilient to changes and failures within your cluster.
