K8s by Example: Pod Anti-Affinity

Pod anti-affinity spreads Pods across nodes or zones, preventing single points of failure. Use it for high availability, spreading replicas across failure domains, and isolating workloads from each other.

pod-anti-affinity.yaml

Pod anti-affinity is defined under spec.affinity.podAntiAffinity. It schedules a Pod away from topology domains (such as nodes) that already run Pods matching the selector. The syntax is the same as pod affinity, but with the opposite effect.

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: my-app
          topologyKey: kubernetes.io/hostname
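
For orientation, here is the same rule inside a complete Pod manifest; the Pod name, labels, and image below are placeholders, not part of the example above.

apiVersion: v1
kind: Pod
metadata:
  name: standalone-pod        # placeholder name
  labels:
    app: standalone           # placeholder label
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: my-app     # keep this Pod off nodes already running app=my-app
          topologyKey: kubernetes.io/hostname
  containers:
    - name: main
      image: nginx:1.27       # placeholder image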
deployment-anti-affinity.yaml

Self anti-affinity spreads replicas of the same Deployment. Use the Deployment’s own labels. With 3 replicas, you need at least 3 nodes for required anti-affinity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web-app
              topologyKey: kubernetes.io/hostname
pod-anti-affinity-preferred.yaml

Required anti-affinity can make Pods unschedulable if there aren’t enough nodes. Use preferred for flexibility. With 3 replicas but only 2 nodes, required leaves one Pod Pending; preferred places two Pods on the same node.

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: my-app
            topologyKey: kubernetes.io/hostname
pod-anti-affinity-zones.yaml

Spread across both nodes and zones by combining multiple preferred rules. Higher weight = stronger preference. Zone spreading keeps an availability-zone failure from taking down all replicas, though preferred rules are best-effort rather than guaranteed.

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app
          topologyKey: kubernetes.io/hostname
      - weight: 50
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app
          topologyKey: topology.kubernetes.io/zone
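
Zone spreading only works if nodes carry the zone label. Most cloud providers set the well-known topology.kubernetes.io/zone label automatically; you can confirm it with:

$ kubectl get nodes -L topology.kubernetes.io/zone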
pod-anti-affinity-isolation.yaml

Use matchExpressions to avoid specific workload types. Isolate noisy neighbors, separate incompatible workloads, or keep sensitive data on dedicated nodes.

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
          - key: workload-type
            operator: In
            values:
              - batch
              - ml-training
      topologyKey: kubernetes.io/hostname
    - labelSelector:
        matchLabels:
          team: competitor-team
      topologyKey: kubernetes.io/hostname
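
Anti-affinity selectors match labels on the other Pods, not on the Pod declaring the rule, so this only works if the batch and ML workloads actually carry those labels. A sketch of the relevant Pod template labels on such a workload (names are illustrative):

template:
  metadata:
    labels:
      app: nightly-batch        # illustrative name
      workload-type: batch      # matched by the In expression above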
pod-anti-affinity-namespace.yaml

By default, the labelSelector only matches Pods in the same namespace as the Pod being scheduled. namespaceSelector (or an explicit namespaces list, shown below) extends matching across namespaces, isolating Pods from different teams or environments and preventing noisy-neighbor issues across namespace boundaries.

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: database
      namespaceSelector:
        matchLabels:
          environment: production
      topologyKey: kubernetes.io/hostname
Alternatively, list the target namespaces explicitly with the namespaces field:

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: resource-hog
      namespaces:
        - ml-workloads
        - batch-processing
      topologyKey: kubernetes.io/hostname
pod-anti-affinity-narrow.yaml

Anti-affinity has a real scheduling cost in large clusters: for every candidate node, the scheduler must check the existing Pods that match the selector. Use nodeSelector or node affinity when they can express the constraint (see the sketch after the snippet below), and keep the label selector as narrow as possible.

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app
          version: v2
      topologyKey: kubernetes.io/hostname
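
When the constraint can be expressed against node labels rather than other Pods, node affinity avoids the per-Pod scan entirely. A minimal sketch, assuming a hypothetical node-pool label on the nodes:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-pool      # hypothetical node label
                operator: In
                values:
                  - web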
terminal

Debug anti-affinity issues by checking node capacity, Pod distribution, and scheduler events. Common issue: not enough nodes for required anti-affinity.

$ kubectl get pods -l app=my-app -o wide
NAME         READY   STATUS    NODE
my-app-1     1/1     Running   node-1
my-app-2     1/1     Running   node-2
my-app-3     0/1     Pending   <none>

$ kubectl describe pod my-app-3
Events:
  Type     Reason            Message
  Warning  FailedScheduling  0/2 nodes are available:
    2 node(s) didn't match pod anti-affinity rules

$ kubectl get nodes
NAME     STATUS   ROLES    AGE
node-1   Ready    <none>   10d
node-2   Ready    <none>   10d

$ kubectl get pod my-app-1 -o jsonpath='{.spec.affinity}' | jq
