K8s by Example: ReplicaSets

A ReplicaSet ensures N identical Pods are running at all times. It implements self-healing: if a Pod fails or is deleted, the ReplicaSet creates a replacement. You rarely create ReplicaSets directly, since Deployments manage them for you.

replicaset.yaml

ReplicaSets use the apps/v1 API. The selector links the ReplicaSet to the Pods it manages.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app

The Pod template defines the Pods the ReplicaSet creates. The template's labels must match the selector.

  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:v1
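terminal

Apply the manifest and the controller creates three Pods with random name suffixes (the suffixes below are illustrative; yours will differ).

$ kubectl apply -f replicaset.yaml
replicaset.apps/my-app created

$ kubectl get pods -l app=my-app
NAME           READY   STATUS    RESTARTS   AGE
my-app-abc12   1/1     Running   0          30s
my-app-ghi56   1/1     Running   0          30s
my-app-jkl78   1/1     Running   0          30s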
terminal

ReplicaSets run a reconciliation loop, constantly comparing desired state vs actual state. Delete a Pod and watch it get replaced.

$ kubectl delete pod my-app-abc12 &
$ kubectl get pods -w
NAME           READY   STATUS        RESTARTS   AGE
my-app-abc12   1/1     Terminating   0          5m
my-app-def34   0/1     Pending       0          0s
my-app-def34   1/1     Running       0          2s
terminal

The ReplicaSet controller continuously reconciles CURRENT toward DESIRED. This loop is the core of Kubernetes self-healing.

$ kubectl get rs my-app
NAME     DESIRED   CURRENT   READY   AGE
my-app   3         3         3       10m
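terminal

kubectl describe shows the controller's side of the loop: each replacement Pod appears as a SuccessfulCreate event (output trimmed; names and ages are illustrative).

$ kubectl describe rs my-app
...
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  10m   replicaset-controller  Created pod: my-app-abc12
  Normal  SuccessfulCreate  5m    replicaset-controller  Created pod: my-app-def34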
terminal

A Deployment creates a new ReplicaSet for each change to the Pod template. The ReplicaSet name is the Deployment name plus the pod-template-hash.

$ kubectl get rs -l app=my-app
NAME                DESIRED   CURRENT   READY   AGE
my-app-7d9f8b6c5d   3         3         3       10m
my-app-5f8d9a7b3c   0         0         0       1h

$ kubectl get rs my-app-7d9f8b6c5d -o jsonpath='{.metadata.labels.pod-template-hash}'
7d9f8b6c5d
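deployment.yaml

A minimal sketch of a Deployment that would own ReplicaSets like the ones above (this manifest is an assumption, not shown earlier). Each change to spec.template, such as a new image tag, produces a new ReplicaSet with a new pod-template-hash; the old one is kept at 0 replicas for rollbacks.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          # bumping v1 -> v2 creates a second ReplicaSet (illustrative tag)
          image: my-app:v2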
terminal

ReplicaSets use owner references to track their Pods. When you delete a ReplicaSet, Kubernetes cascades the deletion to its Pods by default.

$ kubectl get pod my-app-abc12 -o jsonpath='{.metadata.ownerReferences}'
[{"kind":"ReplicaSet","name":"my-app-7d9f8b6c5d",...}]

$ kubectl delete rs my-app
replicaset.apps "my-app" deleted
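terminal

To delete a ReplicaSet but keep its Pods, orphan them instead of cascading. The Pods lose their owner reference and are no longer managed (Pod names below are illustrative).

$ kubectl delete rs my-app --cascade=orphan
replicaset.apps "my-app" deleted

$ kubectl get pods -l app=my-app
NAME           READY   STATUS    RESTARTS   AGE
my-app-def34   1/1     Running   0          8m
my-app-ghi56   1/1     Running   0          13m
my-app-jkl78   1/1     Running   0          13m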
replicaset-labels.yaml

The selector links the ReplicaSet to its Pods. The Pod template must include every label in the selector; extra labels in the template are fine.

spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
        - name: app
          image: my-app:v1
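terminal

The same selector works from the command line; --show-labels confirms which Pods match and shows the extra version label alongside (Pod names are illustrative).

$ kubectl get pods -l app=my-app --show-labels
NAME           READY   STATUS    RESTARTS   AGE   LABELS
my-app-abc12   1/1     Running   0          5m    app=my-app,version=v1
my-app-ghi56   1/1     Running   0          5m    app=my-app,version=v1
my-app-jkl78   1/1     Running   0          5m    app=my-app,version=v1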
terminal

Warning: scaling a Deployment-owned ReplicaSet directly (kubectl scale rs) gets reverted by the Deployment controller, which restores the Deployment's replica count. Always scale via the Deployment.

$ kubectl scale deployment my-app --replicas=5
deployment.apps/my-app scaled
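terminal

To see why, try scaling the ReplicaSet itself; the Deployment controller soon restores its own replica count (ReplicaSet name and timing are illustrative).

$ kubectl scale rs my-app-7d9f8b6c5d --replicas=1
replicaset.apps/my-app-7d9f8b6c5d scaled

$ kubectl get rs my-app-7d9f8b6c5d
NAME                DESIRED   CURRENT   READY   AGE
my-app-7d9f8b6c5d   5         5         5       15m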
