K8s by Example: DaemonSets

DaemonSets ensure one Pod runs on every node (or on a selected subset of nodes). When a node joins the cluster, it gets a Pod; when a node is removed, its Pod is garbage collected. Typical uses: log collectors, monitoring agents, and network plugins.

daemonset.yaml

A DaemonSet ensures one Pod per node. There is no replicas field, since the Pod count follows the node count. The selector must match the template's labels.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16

daemonset-host.yaml

Access host resources with hostPath volumes, hostNetwork, and hostPID. Common for log collection, node monitoring, and network plugins. Use these with care: hostPath mounts, hostNetwork, and hostPID all weaken container isolation.

spec:
  template:
    spec:
      hostNetwork: true    # Use node's network namespace
      hostPID: true        # Access node's processes
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
            type: Directory
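
Because a DaemonSet agent runs beside the workloads on every node, it is common to cap its resource usage so it cannot starve them. A minimal sketch (the request and limit values here are illustrative, not from this page):

```yaml
spec:
  template:
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16
          resources:
            requests:        # reserved on every node for the agent
              cpu: 100m
              memory: 200Mi
            limits:          # cap so the agent cannot starve node workloads
              memory: 200Mi
```

Since requests are reserved on every node, keep them small: the cost is multiplied by the cluster's node count.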

daemonset-tolerations.yaml

Tolerations let DaemonSet Pods schedule onto tainted nodes. The first toleration allows scheduling on control-plane nodes; the second (operator: Exists with no key) tolerates every taint, so Pods run everywhere. Alternatively, use nodeSelector to restrict the DaemonSet to nodes carrying a specific label.

spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - operator: Exists
      nodeSelector:
        node-type: worker

daemonset-update.yaml

Update strategies: RollingUpdate (the default) replaces Pods node by node, deleting at most maxUnavailable old Pods at a time (default 1). With maxSurge: 0, each node's old Pod stops before its replacement starts; newer Kubernetes releases also allow maxSurge above 0 (combined with maxUnavailable: 0) so the new Pod starts first. OnDelete only replaces Pods when they are deleted manually.

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 0
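
When an automatic rollout is too risky for a node agent, the alternative strategy is a one-line change (minimal sketch):

```yaml
spec:
  updateStrategy:
    type: OnDelete   # new template applied only when a Pod is deleted by hand
```

You then roll the update out yourself by deleting Pods one node at a time and watching each replacement come up healthy.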

daemonset-affinity.yaml

In current Kubernetes releases, DaemonSet Pods are placed by the default scheduler; the DaemonSet controller pins each Pod to its node by injecting node affinity into the template. Your own node affinity rules still work for selecting which nodes get Pods. Note that DaemonSet rolling updates delete Pods directly rather than evicting them, so PodDisruptionBudgets are not honored during updates.

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values: [linux]
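
Critical node agents (network plugins especially) are usually given a high priority so the kubelet does not evict them under resource pressure. A sketch using Kubernetes' built-in priority classes:

```yaml
spec:
  template:
    spec:
      # Built-in classes: system-node-critical outranks system-cluster-critical.
      priorityClassName: system-node-critical
```

Reserve system-node-critical for agents the node genuinely cannot function without; an ordinary log collector does not need it.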

terminal

Debug DaemonSets by comparing the desired and ready counts and checking which node each Pod landed on. Common causes of missing Pods: taints without matching tolerations, insufficient node resources, and image pull failures.

$ kubectl get daemonset
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AGE
fluentd   3         3         3       3            5m

$ kubectl get pods -l app=fluentd -o wide
NAME            READY   STATUS    NODE
fluentd-abc12   1/1     Running   node-1
fluentd-def34   1/1     Running   node-2
fluentd-ghi56   1/1     Running   node-3

$ kubectl describe daemonset fluentd

$ kubectl describe node problem-node | grep -A5 Taints
Taints:  node-role.kubernetes.io/control-plane:NoSchedule

$ kubectl rollout status daemonset/fluentd
daemon set "fluentd" successfully rolled out
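
DaemonSets also keep a rollout history, so a bad update can be rolled back much like a Deployment (commands only; cluster output omitted):

```shell
$ kubectl rollout history daemonset/fluentd
$ kubectl rollout undo daemonset/fluentd
```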
