Using a certain number of GPUs for Spark on YARN #11566

Answered by tgravescs
an-ys asked this question in Q&A
Generally, pinning work to specific nodes with either resource manager isn't recommended. I get the affinity part, though: I think what you really want is to tell it to prefer consolidating containers rather than spreading them across the nodes. But I am curious, are you seeing a big performance difference in your applications by grouping them onto the same nodes?

There is still an open issue in Spark to support YARN placement constraints (https://issues.apache.org/jira/browse/SPARK-26867); otherwise, I don't know of an easy way to do that.
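For reference, here is a minimal sketch of how a fixed number of GPUs is usually requested per executor when running Spark 3.x on YARN (this is separate from the placement/affinity question above, which Spark doesn't support on YARN today). The app name and discovery script path are assumptions for illustration; adjust them for your cluster.

```python
from pyspark.sql import SparkSession

# Sketch: request GPUs from YARN with Spark 3.x resource scheduling.
spark = (
    SparkSession.builder
    .appName("gpu-on-yarn-example")          # assumed app name
    .master("yarn")
    # Ask YARN for 2 GPUs per executor (Spark maps the "gpu" resource
    # to yarn.io/gpu, which must be enabled on the cluster).
    .config("spark.executor.resource.gpu.amount", "2")
    # Each task gets 1 GPU, so two tasks can share an executor's GPUs.
    .config("spark.task.resource.gpu.amount", "1")
    # Script that reports the GPU addresses visible to the executor;
    # Spark ships an example at examples/src/main/scripts/getGpusResources.sh.
    # The path below is an assumed install location.
    .config("spark.executor.resource.gpu.discoveryScript",
            "/opt/spark/scripts/getGpusResources.sh")
    .getOrCreate()
)
```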

Kubernetes has some node affinity and pod topology features. There are other add-on schedulers too, but I haven't tried either approach to do what you are asking here, so I don't have an answer for you…
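As a rough pointer on the Kubernetes side, this is a sketch of the knobs Spark exposes for steering executors onto particular nodes: a simple node selector config, plus a pod template file for richer nodeAffinity or topology spread rules. The master URL, image name, label key/value, and template path are all placeholders, not values from this thread.

```python
from pyspark.sql import SparkSession

# Sketch: node-placement options for Spark on Kubernetes.
spark = (
    SparkSession.builder
    .appName("gpu-affinity-on-k8s-example")                    # assumed name
    .master("k8s://https://kubernetes.example.com:6443")       # assumed API server
    .config("spark.kubernetes.container.image", "my-spark-image")  # assumed image
    # Simple form of node affinity: only schedule pods on nodes carrying
    # this label (key/value are hypothetical).
    .config("spark.kubernetes.node.selector.accelerator", "nvidia-a100")
    # For full nodeAffinity / topologySpreadConstraints, point Spark at a
    # pod template file containing those fields (path is a placeholder).
    .config("spark.kubernetes.executor.podTemplateFile",
            "/path/to/executor-pod-template.yaml")
    .getOrCreate()
)
```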
