Endless descheduling of pods with node affinity preferredDuringSchedulingIgnoredDuringExecution when enough resources are available on an untainted node but not on a tainted node #1410
Labels: kind/bug, lifecycle/stale
What version of descheduler are you using?
descheduler version: 0.29.0/0.30.0
Does this issue reproduce with the latest release?
yes
Which descheduler CLI options are you using?
Please provide a copy of your descheduler policy config file
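A policy along the following lines triggers the scenario described below. This is a minimal sketch assuming the v1alpha2 policy API with the RemovePodsViolatingNodeAffinity plugin scoped to preferredDuringSchedulingIgnoredDuringExecution; it is not necessarily the exact file that was used:

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: default
    pluginConfig:
      - name: "RemovePodsViolatingNodeAffinity"
        args:
          nodeAffinityType:
            - "preferredDuringSchedulingIgnoredDuringExecution"
    plugins:
      deschedule:
        enabled:
          - "RemovePodsViolatingNodeAffinity"
```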
What k8s version are you using (kubectl version)?
v1.28.3
What did you do?
Given a deployment with nodeAffinity (preferredDuringSchedulingIgnoredDuringExecution) preferring a tainted node pool, and not having enough resources on that tainted node pool but enough on an untainted node pool, the following behaviour occurs (a sketch of such a manifest is shown below):
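For illustration only, a deployment along these lines matches that description; the node-pool label, the dedicated taint/toleration, and the resource requests are assumptions, not the original manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: node-pool          # hypothetical label on the preferred (tainted) pool
                    operator: In
                    values:
                      - tainted-pool
      tolerations:
        - key: dedicated                    # hypothetical taint on the preferred pool
          operator: Equal
          value: tainted-pool
          effect: NoSchedule
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "2"                      # large enough that the tainted pool cannot fit it
              memory: 4Gi
```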
What did you expect to see?
The following test has been created:
The pod should not be descheduled - see
What did you see instead?
The pod got endlessly descheduled.
Analysis
I analyzed the code and traced this behaviour to node_affinity#105:
As a working example for debugging purposes, I tested the following code (without great knowledge of how best to solve this):
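Sketched below, in Go, is the kind of guard this working example is aiming for: only treat a pod as violating preferred node affinity if another node is both a strictly better affinity match and actually able to host it. The helper names and the simplified matching and resource logic are illustrative assumptions, not the descheduler's real API:

```go
// Package nodeaffinitycheck sketches the guard discussed above: only evict a pod for
// violating preferred node affinity if another node is a strictly better affinity
// match AND could actually host the pod (taints tolerated, resources free).
package nodeaffinitycheck

import (
	v1 "k8s.io/api/core/v1"
)

// preferredAffinityScore sums the weights of the pod's preferred scheduling terms that
// the node's labels satisfy. Simplified: only the In operator is handled.
func preferredAffinityScore(pod *v1.Pod, node *v1.Node) int32 {
	if pod.Spec.Affinity == nil || pod.Spec.Affinity.NodeAffinity == nil {
		return 0
	}
	var score int32
	for _, term := range pod.Spec.Affinity.NodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution {
		matched := true
		for _, expr := range term.Preference.MatchExpressions {
			if expr.Operator != v1.NodeSelectorOpIn || !contains(expr.Values, node.Labels[expr.Key]) {
				matched = false
				break
			}
		}
		if matched {
			score += term.Weight
		}
	}
	return score
}

func contains(values []string, v string) bool {
	for _, x := range values {
		if x == v {
			return true
		}
	}
	return false
}

// toleratesAllTaints reports whether the pod tolerates every taint on the node.
func toleratesAllTaints(pod *v1.Pod, node *v1.Node) bool {
	for i := range node.Spec.Taints {
		tolerated := false
		for j := range pod.Spec.Tolerations {
			if pod.Spec.Tolerations[j].ToleratesTaint(&node.Spec.Taints[i]) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}

// fitsResources is a stand-in for a real resource-fit check: it only compares the first
// container's CPU/memory requests against the node's allocatable and ignores pods
// already running on the node.
func fitsResources(pod *v1.Pod, node *v1.Node) bool {
	if len(pod.Spec.Containers) == 0 {
		return true
	}
	req := pod.Spec.Containers[0].Resources.Requests
	alloc := node.Status.Allocatable
	return alloc.Cpu().Cmp(*req.Cpu()) >= 0 && alloc.Memory().Cmp(*req.Memory()) >= 0
}

// ShouldEvictForPreferredAffinity returns true only if some other node scores higher on
// the preferred affinity terms AND tolerates/fits the pod. Without the last two checks
// the pod is evicted, rescheduled onto the same untainted node, and evicted again,
// which is the endless loop reported above.
func ShouldEvictForPreferredAffinity(pod *v1.Pod, current *v1.Node, others []*v1.Node) bool {
	currentScore := preferredAffinityScore(pod, current)
	for _, node := range others {
		if preferredAffinityScore(pod, node) > currentScore &&
			toleratesAllTaints(pod, node) &&
			fitsResources(pod, node) {
			return true
		}
	}
	return false
}
```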