K8s by Example: Pod Anti-Affinity
Pod anti-affinity spreads Pods across nodes or zones, preventing single points of failure. Use it for high availability, spreading replicas, and isolating workloads.
pod-anti-affinity.yaml
Pod anti-affinity is defined in the Pod spec under spec.affinity.podAntiAffinity. Rules come in two forms: requiredDuringSchedulingIgnoredDuringExecution (hard) and preferredDuringSchedulingIgnoredDuringExecution (soft). Each rule pairs a labelSelector with a topologyKey that defines the spreading domain, such as kubernetes.io/hostname for nodes.
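A minimal sketch of what pod-anti-affinity.yaml could look like; the web name, app=web label, and nginx image are placeholders, not taken from the original listing:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # Never schedule this Pod onto a node that already runs
        # a Pod labeled app=web.
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname   # spreading domain: one node
  containers:
    - name: web
      image: nginx:1.25
```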
deployment-anti-affinity.yaml
Self anti-affinity spreads replicas of the same Deployment. Use the Deployment’s own labels in the rule’s labelSelector. With 3 replicas, you need at least 3 nodes for required anti-affinity.
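A possible deployment-anti-affinity.yaml, reusing the illustrative app=web labels; the rule’s selector matches the Deployment’s own Pod template labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            # Match the Deployment's own label so replicas repel each other:
            # each replica must land on a different node.
            - labelSelector:
                matchLabels:
                  app: web
              topologyKey: kubernetes.io/hostname
      containers:
        - name: web
          image: nginx:1.25
```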
pod-anti-affinity-preferred.yaml
Required anti-affinity can make Pods unschedulable if there aren’t enough nodes; use preferred for flexibility. With 3 replicas but only 2 nodes, required anti-affinity leaves one Pod Pending, while preferred puts two replicas on the same node.
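A sketch of pod-anti-affinity-preferred.yaml under the same assumptions; only the rule type changes from required to preferred, and the weight of 100 is arbitrary:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            # Soft rule: the scheduler tries to spread replicas, but will
            # co-locate them rather than leave a Pod Pending.
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: web
                topologyKey: kubernetes.io/hostname
      containers:
        - name: web
          image: nginx:1.25
```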
pod-anti-affinity-zones.yaml
Spread across both nodes and zones using multiple rules. A higher weight means a stronger preference. Zone spreading ensures an AZ failure doesn’t take down all replicas.
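One way pod-anti-affinity-zones.yaml could combine two weighted rules; the weights (100 for zones, 50 for nodes) are illustrative, not from the original listing:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            # Stronger preference: keep replicas in different zones.
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: web
                topologyKey: topology.kubernetes.io/zone
            # Weaker preference: also avoid sharing a node within a zone.
            - weight: 50
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: web
                topologyKey: kubernetes.io/hostname
      containers:
        - name: web
          image: nginx:1.25
```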
pod-anti-affinity-isolation.yaml
Use anti-affinity against another workload’s labels to isolate Pods from it, for example to keep latency-sensitive Pods off nodes that run batch jobs.
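A sketch of pod-anti-affinity-isolation.yaml; the latency-sensitive and batch-worker names are hypothetical workloads used to illustrate the pattern:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive
  labels:
    app: latency-sensitive
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # Repel a different workload's label instead of our own:
        # never land on a node that runs a batch-worker Pod.
        - labelSelector:
            matchLabels:
              app: batch-worker
          topologyKey: kubernetes.io/hostname
  containers:
    - name: app
      image: nginx:1.25
```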
pod-anti-affinity-namespace.yaml
Cross-namespace anti-affinity uses the namespaces or namespaceSelector fields of the rule. By default, an anti-affinity rule only considers Pods in the same namespace as the incoming Pod.
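A sketch of pod-anti-affinity-namespace.yaml; the team-a and team-b namespaces are made up for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          # Also consider Pods in these namespaces, not just our own.
          namespaces:
            - team-a
            - team-b
          # Alternatively, namespaceSelector: {} matches all namespaces.
          topologyKey: kubernetes.io/hostname
  containers:
    - name: web
      image: nginx:1.25
```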
pod-anti-affinity-narrow.yaml
Anti-affinity has a performance impact on large clusters: the scheduler must check every Pod that matches the selector. Use nodeSelector or node affinity when possible, and keep the label selector narrow to reduce the work per scheduling decision.
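A sketch of pod-anti-affinity-narrow.yaml; the app and tier labels are illustrative, and the point is simply that the selector matches as few Pods as possible:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-api
  labels:
    app: web
    tier: api
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # Narrow selector: only Pods matching *both* expressions count,
        # so the scheduler has far fewer Pods to evaluate.
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values: ["web"]
              - key: tier
                operator: In
                values: ["api"]
          topologyKey: kubernetes.io/hostname
  containers:
    - name: web-api
      image: nginx:1.25
```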
terminal
Debug anti-affinity issues by checking node capacity, Pod distribution, and scheduler events. A common issue: not enough nodes to satisfy required anti-affinity.
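A few kubectl commands that help with this kind of debugging; <pod-name> is a placeholder for the Pending Pod:

```console
# Where did the replicas land? (NODE column)
kubectl get pods -o wide

# Why is a Pod Pending? Look for FailedScheduling in the events.
kubectl describe pod <pod-name>
kubectl get events --field-selector reason=FailedScheduling

# Are there enough schedulable nodes to satisfy the required rules?
kubectl get nodes
```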