K8s by Example: Pods

A Pod is the atomic unit of scheduling in Kubernetes. It wraps one or more containers that share network and storage. Containers in a Pod communicate via localhost and share the same IP address, port space, and volumes.

pod.yaml

Pods use the core v1 API. The kind tells Kubernetes what resource to create.

apiVersion: v1
kind: Pod

The name uniquely identifies the Pod within its namespace. Labels enable selection and grouping.

metadata:
  name: my-app
  labels:
    app: my-app

The spec.containers array defines the containers. Each needs a name and image.

spec:
  containers:
    - name: app
      image: nginx:alpine

Declare container ports. containerPort is primarily informational (omitting it doesn’t block traffic), but Services can reference ports by name and tools like Istio use this metadata.

      ports:
        - containerPort: 80
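Labels and ports come together once you put a Service in front of the Pod. A minimal sketch (the Service itself is illustrative, not part of the manifest above):

```yaml
# Sketch: a Service selecting the Pod by its app label.
# targetPort can be a number, or a port name if the Pod's
# containerPort entry is given a name (e.g. name: http).
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app   # matches metadata.labels on the Pod
  ports:
    - port: 80
      targetPort: 80
```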
terminal

Pod lifecycle phases: Pending (waiting for scheduling or an image pull), Running (bound to a node with at least one container running), Succeeded (all containers exited successfully), Failed (all containers terminated, at least one with an error), and Unknown (the Pod’s state can’t be determined, usually a node communication failure).

$ kubectl get pods -w
NAME     READY   STATUS    RESTARTS   AGE
my-app   0/1     Pending   0          0s
my-app   0/1     Pending   0          1s
my-app   0/1     ContainerCreating   0   2s
my-app   1/1     Running   0          5s
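Which terminal phase a Pod can reach depends on its restartPolicy, which defaults to Always. A hedged sketch of a run-to-completion Pod (the name and command are illustrative):

```yaml
# Sketch: restartPolicy controls what happens when containers exit.
# Always (the default) restarts them; OnFailure restarts only on a
# non-zero exit; Never lets the Pod reach Succeeded or Failed.
apiVersion: v1
kind: Pod
metadata:
  name: one-shot
spec:
  restartPolicy: Never
  containers:
    - name: job
      image: alpine
      command: ["sh", "-c", "echo done"]
```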
terminal

Common Pending reasons include insufficient CPU/memory, no matching nodes (affinity/taints), or image pull issues. Check Events for details.

$ kubectl describe pod my-app | grep -A5 Events
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  10s   default-scheduler  Successfully assigned
  Normal  Pulled     8s    kubelet            Container image pulled
  Normal  Started    7s    kubelet            Started container app
terminal

Pods are ephemeral. When a Pod dies, it’s gone for good: replacement Pods get new IPs and new identities. Use these commands to debug running Pods.

$ kubectl logs my-app
2024/01/15 10:30:00 [notice] nginx started
2024/01/15 10:30:00 [notice] ready to accept connections
terminal

Execute commands inside a running container. Use -c <container-name> to choose a container in multi-container Pods.

$ kubectl exec -it my-app -- sh
/ # hostname
my-app
/ # cat /etc/nginx/nginx.conf
terminal

Create a temporary debug Pod. The --rm flag deletes it when you exit. Resolving my-app by name here assumes a Service called my-app exists; bare Pod names aren’t registered in cluster DNS.

$ kubectl run debug --image=alpine --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -qO- http://my-app:80
<!DOCTYPE html>
<html>
multi-container-pod.yaml

Kubernetes creates a “pause” container that holds the network namespace. App containers join this namespace, sharing the same IP and using localhost.

apiVersion: v1
kind: Pod
metadata:
  name: web-metrics
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80

Multi-container Pods are for tightly coupled workloads. Common patterns: sidecar (logging, proxies), ambassador (proxy to external services), adapter (transform output).

    - name: metrics
      image: prom/node-exporter
      ports:
        - containerPort: 9100
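Network-namespace sharing is automatic; sharing the process namespace is opt-in. A hedged sketch of that option (Pod name and images are illustrative):

```yaml
# Sketch: opt-in process-namespace sharing, so containers in the
# Pod can see each other's processes (useful for debugging).
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid
spec:
  shareProcessNamespace: true
  containers:
    - name: app
      image: nginx
    - name: debug
      image: alpine
      command: ["sleep", "infinity"]
```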
sidecar-pod.yaml

The sidecar pattern uses a shared volume. The main app writes logs, and the sidecar container ships them.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: my-app:v1
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: logs
          mountPath: /var/log/app

The sidecar reads from the same volume and forwards logs to your logging infrastructure.

    - name: log-shipper
      image: fluent/fluent-bit:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app

emptyDir creates a temporary directory that’s shared between containers and deleted when the Pod terminates.

  volumes:
    - name: logs
      emptyDir: {}
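emptyDir accepts a couple of extra knobs worth knowing; a hedged sketch (the values are illustrative):

```yaml
# Sketch: emptyDir variants. medium: Memory backs the volume with
# tmpfs (contents count against the container's memory usage);
# sizeLimit caps the volume, and exceeding it can evict the Pod.
volumes:
  - name: logs
    emptyDir:
      medium: Memory
      sizeLimit: 64Mi
```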
pod-resources.yaml

Resource requests and limits control CPU/memory allocation. Without limits, a Pod can consume all node resources.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-resources
spec:
  containers:
    - name: app
      image: my-app:v1

requests are the resources the scheduler reserves for the container; limits are the maximum it may use. CPU is measured in cores and commonly written in millicores (100m = 0.1 cores); memory uses binary suffixes like Mi and Gi.

      resources:
        requests:
          memory: "128Mi"
          cpu: "100m"
        limits:
          memory: "256Mi"
          cpu: "500m"
terminal

A Pod is scheduled as a unit: all of its containers land on the same node, though individual containers can still crash and restart independently. View the full Pod spec and status with -o yaml.

$ kubectl get pod my-app -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: default
status:
  phase: Running
  podIP: 10.244.1.5
  conditions:
    - type: Ready
      status: "True"
terminal

Every Pod gets a unique cluster-internal IP. Pods can communicate directly with any other Pod using its IP - no NAT required.

$ kubectl get pod my-app -o wide
NAME     READY   STATUS    IP           NODE
my-app   1/1     Running   10.244.1.5   worker-1
terminal

This is the foundation of Kubernetes’ flat networking model: every Pod can reach every other Pod directly by IP, without NAT.

$ kubectl exec debug -- wget -qO- 10.244.1.5:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
terminal

View logs from a previous container instance. Useful when a container has crashed and restarted.

$ kubectl logs my-app --previous
2024/01/15 10:29:00 [error] out of memory
2024/01/15 10:29:00 [notice] nginx shutting down
