K8s by Example: Topology Spread Constraints

Topology spread constraints guarantee an even distribution of Pods across zones or nodes. They are more flexible than anti-affinity for complex spreading requirements, and they control the “skew” between topology domains.
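
For intuition, skew is a domain's count of matching Pods minus the global minimum across eligible domains; a small worked example (zone counts are illustrative):

# 5 matching Pods across 3 zones:
#   zone-a: 3   zone-b: 1   zone-c: 1
#   skew(zone-a) = 3 - min(3, 1, 1) = 2
# With maxSkew: 1 and whenUnsatisfiable: DoNotSchedule, a new
# Pod may be placed in zone-b or zone-c, but not in zone-a.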

topology-spread.yaml

Topology spread is defined in topologySpreadConstraints. maxSkew is the maximum imbalance allowed between domains. With 6 Pods across 3 zones and maxSkew: 1, each zone ends up with 2 Pods.

spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: my-app
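
For context, a minimal sketch of where this snippet sits in a full Deployment (image and replica count are illustrative): topologySpreadConstraints lives in the Pod template's spec, and the labelSelector must match the Pod labels.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.27    # illustrative image
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-app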
topology-spread-multi.yaml

maxSkew: 1 means zones may differ by at most 1 Pod. DoNotSchedule enforces the constraint strictly; ScheduleAnyway is best-effort. Multiple constraints can mix different strictness levels, as the sketch after this example shows.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app
  - maxSkew: 2
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: my-app
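
Conceptually, the two strictness levels map onto the scheduler's filter and score phases (a simplified sketch, not actual scheduler output):

# 1. Filter: nodes that would violate the zone constraint
#    (DoNotSchedule) are removed from consideration outright.
# 2. Score: among the surviving nodes, those that keep the
#    per-hostname skew low (ScheduleAnyway) are preferred,
#    but a high-skew node can still be chosen if needed.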
topology-spread-mindomains.yaml

minDomains guarantees that Pods spread across at least N domains. This prevents all Pods from landing in a single zone when the cluster is small. It only applies with whenUnsatisfiable: DoNotSchedule.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    minDomains: 3
    labelSelector:
      matchLabels:
        app: my-app
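
For intuition, a sketch of the minDomains arithmetic (zone counts are illustrative):

# Cluster currently has nodes in only 2 zones; minDomains: 3.
# Since eligible domains (2) < minDomains (3), the global
# minimum is treated as 0:
#   zone-a: 1 Pod  -> skew = 1 - 0 = 1  (allowed with maxSkew: 1)
#   zone-a: 2 Pods -> skew = 2 - 0 = 2  (blocked by DoNotSchedule)
# Extra Pods stay Pending until a third zone joins the cluster.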
topology-spread-taints.yaml

nodeTaintsPolicy controls whether tainted nodes are considered. Honor excludes nodes with taints the Pod does not tolerate; Ignore (the default) includes all nodes in the calculation.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    nodeTaintsPolicy: Honor
    labelSelector:
      matchLabels:
        app: my-app
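
For context, a sketch of how Honor interacts with a tainted node (the taint key and value are illustrative): a tainted node only counts toward the spread calculation if the Pod tolerates its taints.

# Suppose node-3 carries the taint dedicated=gpu:NoSchedule.
# With nodeTaintsPolicy: Honor, node-3 is excluded from the
# skew calculation unless the Pod declares a toleration:
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: gpu
      effect: NoSchedule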
topology-spread-affinity.yaml

nodeAffinityPolicy controls whether the Pod's node affinity is considered in the spread calculation. Honor (the default) only counts nodes matching the affinity; Ignore counts all nodes.

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: workload-type
                operator: In
                values: [compute]
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
      nodeAffinityPolicy: Honor
      labelSelector:
        matchLabels:
          app: my-app
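
For intuition, a sketch of the difference (node counts are illustrative):

# 5 nodes, 3 of them labeled workload-type=compute:
# nodeAffinityPolicy: Honor  -> skew is computed over the 3
#   matching nodes only.
# nodeAffinityPolicy: Ignore -> all 5 nodes count as domains;
#   the 2 non-matching nodes can never receive these Pods, so
#   they pin the global minimum at 0 and can leave Pods
#   Pending once each matching node holds maxSkew Pods.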
topology-spread-production.yaml

Combine zone and node spreading for production workloads. Strict spreading by zone limits the impact of a zone failure; best-effort spreading by node copes with uneven node counts across zones.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: my-app
topology-spread-rollout.yaml

matchLabelKeys automatically restricts the selection to Pods of the same controller revision (the Deployment's current ReplicaSet). This prevents the spread calculation from counting Pods of old ReplicaSets during a rollout.

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app
    matchLabelKeys:
      - pod-template-hash
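
To see the label matchLabelKeys keys on, list the Pods with the hash column (output abbreviated, hash value illustrative):

$ kubectl get pods -l app=my-app -L pod-template-hash
NAME                      READY   STATUS    POD-TEMPLATE-HASH
my-app-7d4b9c8f6d-xyz12   1/1     Running   7d4b9c8f6d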
terminal

Debug topology spread by checking Pod distribution and scheduler events. Verify the zone labels on your nodes. Use custom columns to see the topology at a glance (last command below).

$ kubectl get pods -l app=my-app -o wide
NAME         READY   STATUS    NODE
my-app-1     1/1     Running   node-1
my-app-2     1/1     Running   node-2
my-app-3     1/1     Running   node-3

$ kubectl get nodes -L topology.kubernetes.io/zone
NAME     STATUS   ZONE
node-1   Ready    us-east-1a
node-2   Ready    us-east-1b
node-3   Ready    us-east-1c

$ kubectl describe pod my-app-xyz
Events:
  Type     Reason            Message
  Warning  FailedScheduling  0/3 nodes are available:
    3 node(s) didn't match pod topology spread constraints

$ kubectl get pod my-app-1 -o yaml | grep -A20 topologySpreadConstraints
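
As mentioned above, custom columns give a quick Pod-to-node view (column names are arbitrary):

$ kubectl get pods -l app=my-app \
    -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
NAME       NODE
my-app-1   node-1
my-app-2   node-2
my-app-3   node-3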
