K8s by Example: Deployments

Deployments manage Pods via ReplicaSets. The hierarchy is: Deployment → ReplicaSet → Pods. Deployments add self-healing, scaling, rolling updates, and rollbacks on top of ReplicaSets.

deployment.yaml

Deployments use the apps/v1 API. The selector links the Deployment to its Pods and is immutable once the Deployment is created.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app.kubernetes.io/name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app

The Pod template defines what each replica looks like. Labels in the template must match the selector.

  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
    spec:
      containers:
        - name: app
          image: my-app:v1.2.0
          ports:
            - name: http
              containerPort: 8080
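
A readiness probe tells the Deployment when a replica can actually serve traffic; during rolling updates, new Pods only count toward availability once the probe succeeds. The /healthz path and timings below are illustrative assumptions, not part of the example app:

          readinessProbe:
            httpGet:
              path: /healthz
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10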
terminal

Kubernetes uses reconciliation loops to maintain desired state. Delete a Pod and the Deployment's ReplicaSet recreates it automatically to restore the desired replica count.

$ kubectl delete pod my-app-7d9f8b6c5d-abc12
pod "my-app-7d9f8b6c5d-abc12" deleted

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
my-app-7d9f8b6c5d-def34   1/1     Running   0          5s
my-app-7d9f8b6c5d-ghi56   1/1     Running   0          10m
my-app-7d9f8b6c5d-jkl78   1/1     Running   0          10m
terminal

Scale up and new Pods are created automatically. Scale down and excess Pods are terminated.

$ kubectl scale deployment my-app --replicas=5
deployment.apps/my-app scaled

$ kubectl get deployment my-app
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
my-app   5/5     5            5           10m
rolling-update-strategy.yaml

There are two update strategies: RollingUpdate (the default) gradually replaces Pods, while Recreate terminates all existing Pods before creating new ones, which means downtime.

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
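
Both fields also accept percentages of the desired replica count; maxUnavailable rounds down and maxSurge rounds up. For example, with 3 replicas:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # floor(3 * 0.25) = 0 Pods may be unavailable
      maxSurge: 25%         # ceil(3 * 0.25) = 1 extra Pod may be created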

minReadySeconds is how long a new Pod must be Ready, without any container crashing, before it counts as available, which slows the rollout. revisionHistoryLimit controls how many old ReplicaSets are kept for rollbacks (the default is 10).

  replicas: 3
  minReadySeconds: 10
  revisionHistoryLimit: 5
terminal

Rolling updates create a new ReplicaSet and gradually shift Pods. Old ReplicaSets are kept (scaled to 0) for rollbacks.

$ kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
my-app-7d9f8b6c5d   3         3         3       10m
my-app-5f8d9a7b3c   0         0         0       1h
terminal

Each update creates a new revision in the history. View them with rollout history; the CHANGE-CAUSE column is read from the kubernetes.io/change-cause annotation and shows <none> when it is not set.

$ kubectl rollout history deployment/my-app
deployment.apps/my-app
REVISION  CHANGE-CAUSE
1         Initial deployment
2         kubectl set image deployment/my-app app=my-app:v1.3.0
3         kubectl set image deployment/my-app app=my-app:v1.4.0
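
To get a meaningful history, set the kubernetes.io/change-cause annotation on the Deployment with each update. The message below is just an example:

metadata:
  annotations:
    kubernetes.io/change-cause: "upgrade app image to v1.3.0"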
terminal

Update the image to trigger a rolling update. Watch the status with rollout status.

$ kubectl set image deployment/my-app app=my-app:v1.3.0
deployment.apps/my-app image updated

$ kubectl rollout status deployment/my-app
Waiting for deployment "my-app" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "my-app" rollout to finish: 2 out of 3 new replicas have been updated...
deployment "my-app" successfully rolled out
terminal

Rollbacks are fast because the old ReplicaSets still exist: Kubernetes simply scales the previous ReplicaSet back up and the current one down, reusing the Pod template it already knows.

$ kubectl rollout undo deployment/my-app
deployment.apps/my-app rolled back

$ kubectl rollout undo deployment/my-app --to-revision=2
deployment.apps/my-app rolled back
terminal

Pause a rollout partway through for canary-style deployments: while paused, you can test the new Pods, and further changes (such as another image update) are recorded without triggering a rollout until you resume.

$ kubectl set image deployment/my-app app=my-app:v2.0.0
deployment.apps/my-app image updated

$ kubectl rollout pause deployment/my-app
deployment.apps/my-app paused

$ kubectl rollout resume deployment/my-app
deployment.apps/my-app resumed
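
Pausing can also be expressed declaratively: the Deployment spec has a paused field, so a manifest applied with it set to true holds any rollout until it is set back to false.

spec:
  paused: true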
