Why Does Kubernetes Use Pods Instead of Running Containers Directly?


Kubernetes is the most popular container orchestration platform, but have you ever wondered why it doesn’t just run containers directly? Instead, it introduces Pods as its smallest deployable unit.

In this blog, we’ll explore the importance of Pods in Kubernetes, their advantages, and provide real-world examples to illustrate their role in managing containerized applications efficiently.


What is a Pod?

A Pod is a group of one or more containers that share the same network namespace, storage, and lifecycle. Instead of running containers individually, Kubernetes schedules and manages them in Pods. This abstraction brings several key benefits that would be difficult or impossible to achieve if Kubernetes managed containers directly.
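
For reference, the simplest possible Pod wraps exactly one container. A minimal sketch (the name hello-pod and the choice of the public nginx image are just illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: app
      image: nginx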


1. Grouping Multiple Containers as a Single Unit

Why?

Some applications require multiple tightly coupled containers that must always run together. A Pod ensures they are scheduled, managed, and scaled as a unit.

Example: Web Server + Sidecar Logger

  • An Nginx web server serves content.

  • A sidecar container collects logs and sends them to an external logging service.

Pod Definition:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-logger
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      # mount the shared log volume so nginx writes its logs where the sidecar can read them
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/nginx
    - name: logger
      image: busybox
      command: ["sh", "-c", "while true; do cat /var/log/nginx/access.log; sleep 10; done"]
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/nginx
  volumes:
    - name: log-volume
      emptyDir: {}

🔹 Why a Pod? Both containers are scheduled onto the same node and share the log volume, so the sidecar always sits next to the server whose logs it ships.
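
To try this out, save the manifest (as nginx-logger.yaml, say; the filename is arbitrary), deploy it, and read the sidecar's output by container name:

kubectl apply -f nginx-logger.yaml
kubectl logs nginx-logger -c logger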


2. Networking Consistency

Why?

Pods share the same network namespace, meaning:

  • Containers inside a Pod can communicate via localhost.

  • No extra network configuration is needed.

Example: Database and Worker App

  • A PostgreSQL database and a worker app need to communicate.

  • Inside a Pod, the worker can simply use localhost:5432.

Pod Definition:

apiVersion: v1
kind: Pod
metadata:
  name: db-worker
spec:
  containers:
    - name: postgres
      image: postgres
      env:
        - name: POSTGRES_USER
          value: "admin"
        - name: POSTGRES_PASSWORD
          value: "password"
    - name: worker
      image: my-worker-app
      command: ["sh", "-c", "python process_data.py --db-host=localhost"]

🔹 Why a Pod? Containers in the same Pod reach each other over localhost, with no Service, DNS lookup, or extra network configuration. (In production you would normally run a database in its own Pod behind a Service; this example simply illustrates the shared network namespace.)
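
You can verify the shared network namespace with kubectl exec. A quick check, run here from the postgres container because it ships with psql (the worker image's tooling is unknown); any container in the Pod sees the same localhost:

kubectl exec db-worker -c postgres -- \
  env PGPASSWORD=password psql -h localhost -U admin -c 'SELECT 1'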


3. Storage Sharing

Why?

Pods enable containers to share storage volumes, making data exchange easy.

Example: File Uploader & Image Processor

  • An Uploader container saves files.

  • A Processor container picks up files and modifies them.

Pod Definition:

apiVersion: v1
kind: Pod
metadata:
  name: image-processor
spec:
  containers:
    - name: uploader
      image: uploader-image
      volumeMounts:
        - name: shared-storage
          mountPath: /data
    - name: processor
      image: processor-image
      volumeMounts:
        - name: shared-storage
          mountPath: /data
  volumes:
    - name: shared-storage
      emptyDir: {}

🔹 Why a Pod? Both containers mount the same volume at /data, so files written by the uploader are immediately visible to the processor.
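
You can confirm the shared volume with kubectl exec (assuming both images include a POSIX shell; the file name is illustrative):

kubectl exec image-processor -c uploader -- sh -c 'echo hello > /data/test.txt'
kubectl exec image-processor -c processor -- cat /data/test.txt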


4. Process Lifecycle Management

Why?

  • If a container inside a Pod crashes, the kubelet restarts that container in place, following the Pod’s restartPolicy.

  • All containers in a Pod share one lifecycle: they are scheduled onto the same node, started together, and deleted together, so containers that rely on each other stay co-located.

Example: AI Model Serving

  • A REST API serves AI models.

  • A Model Updater periodically fetches new models.

Pod Definition:

apiVersion: v1
kind: Pod
metadata:
  name: ai-serving
spec:
  containers:
    - name: rest-api
      image: ai-rest-api
    - name: model-updater
      image: model-updater
      command: ["sh", "-c", "while true; do python update_models.py; sleep 600; done"]

🔹 Why a Pod? If the REST API container crashes, Kubernetes restarts it in place while the model updater keeps running, and the two containers are always deployed, moved, and deleted as one unit.
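
You can watch this in action: the RESTARTS column increments each time a container is restarted in place, while the Pod keeps its name and IP:

kubectl get pod ai-serving -w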


5. Scalability and Load Balancing

Why?

  • Kubernetes scales Pods, not individual containers: a Deployment’s replicas field controls how many copies of the whole Pod run.

  • A Service load-balances across Pod replicas; because each replica is a complete unit, every instance behind the load balancer behaves identically.

Example: Scaling a Web Application

  • A Pod runs Nginx + a cache service.

  • Kubernetes scales entire Pods for better performance.

Deployment Definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx
        - name: cache
          image: redis

🔹 Why a Pod? Each replica ships with its own co-located cache, so scaling the Deployment scales both together.
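
Scaling the whole unit is then a one-liner; Kubernetes adds or removes complete Pods, nginx and cache together:

kubectl scale deployment web-app --replicas=5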


6. Standardization and Extensibility

Why?

  • Pods allow Kubernetes to enforce resource limits, security policies, and service discovery consistently.

Example: Resource Limits

apiVersion: v1
kind: Pod
metadata:
  name: limited-resources
spec:
  containers:
    - name: app
      image: my-app
      resources:
        requests:        # minimum reserved; used by the scheduler for placement
          memory: "128Mi"
          cpu: "250m"
        limits:          # hard cap enforced at runtime
          memory: "256Mi"
          cpu: "500m"

🔹 Why a Pod? Ensures controlled resource allocation for predictable performance.
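
Security policies attach at the same level. A minimal sketch using the standard Pod-level securityContext (the name hardened-app and image my-app are illustrative); every container in the Pod inherits these settings:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
    - name: app
      image: my-app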


Summary Table

Feature                  | Without Pods                      | With Pods
-------------------------|-----------------------------------|----------------------------------
Multi-container grouping | Not possible                      | Supported ✅
Networking               | Each container gets its own IP    | Containers share one IP ✅
Storage                  | Each container has its own volume | Containers can share volumes ✅
Process lifecycle        | Containers managed independently  | Containers share one lifecycle ✅
Scaling                  | Scale each container separately   | Scale entire Pods ✅
Standardization          | Hard to apply policies            | Policies apply to whole Pods ✅

Final Thoughts

If Kubernetes ran containers directly, scalability, resilience, and standardization would be much harder to achieve in a distributed system. Pods act as the fundamental deployment unit in Kubernetes, enabling:

  • Multi-container applications

  • Scalability, resilience, and resource management

  • Improved security and networking

Want hands-on practice? Try deploying some of these examples in your Kubernetes cluster! 🚀