feat(pt): support CPU parallel training with PT #4224
base: devel
Conversation
Codecov Report

Attention: Patch coverage is

@@ Coverage Diff @@
##            devel    #4224      +/-   ##
==========================================
- Coverage   83.52%   83.51%   -0.02%
==========================================
  Files         542      542
  Lines       52544    52550       +6
  Branches     3043     3047       +4
==========================================
- Hits        43888    43886       -2
- Misses       7709     7715       +6
- Partials      947      949       +2

View full report in Codecov by Sentry.
nccl_available = dist.is_nccl_available()
gloo_available = dist.is_gloo_available()
# nccl first
if nccl_available:
    backend = "nccl"
elif gloo_available:
    backend = "gloo"
I have a question: when one installs the GPU version but doesn't have a GPU (or sets CUDA_VISIBLE_DEVICES to empty), will the backend be gloo?
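One way to address this question is to make the CUDA-visibility check explicit in the selection logic. The sketch below is not part of the PR; the function name and boolean-parameter factoring are illustrative, chosen so the decision is testable without a live `torch.distributed` environment. In real code the flags would come from `torch.distributed.is_nccl_available()`, `torch.distributed.is_gloo_available()`, and `torch.cuda.is_available()`.

```python
def select_backend(nccl_available: bool, gloo_available: bool,
                   cuda_visible: bool) -> str:
    """Pick a torch.distributed backend, preferring nccl on GPU machines.

    nccl support can be compiled into a build even when no GPU is visible
    (e.g. CUDA_VISIBLE_DEVICES=""), so the nccl branch also requires that
    a CUDA device is actually usable; otherwise fall back to gloo for
    CPU-only parallel training.
    """
    if nccl_available and cuda_visible:
        return "nccl"
    if gloo_available:
        return "gloo"
    raise RuntimeError("neither nccl nor gloo is available")
```

With this factoring, a GPU build running on a machine with no visible GPU would select `"gloo"` rather than `"nccl"`, which is the behavior the question is probing for.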
if nccl_available:
    backend = "nccl"
elif gloo_available:
    backend = "gloo"
It seems that one can set the backend to cpu:gloo,cuda:nccl. See https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group
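Per the linked documentation, `init_process_group` accepts a device-to-backend mapping string of the form `cpu:gloo,cuda:nccl`, so both device types can be served at once instead of choosing a single backend. A minimal sketch of how the suggestion could be applied (the helper name is hypothetical; only the backend-string format comes from the docs):

```python
def choose_backend_spec(cuda_available: bool) -> str:
    # Map each device type to a backend. torch.distributed.init_process_group
    # parses this "cpu:gloo,cuda:nccl" form: gloo handles CPU tensors,
    # nccl handles CUDA tensors. On a CPU-only machine, only map cpu.
    return "cpu:gloo,cuda:nccl" if cuda_available else "cpu:gloo"

# Usage (hypothetical wiring, assuming the usual MASTER_ADDR/MASTER_PORT
# environment is already set up):
#   import torch, torch.distributed as dist
#   dist.init_process_group(backend=choose_backend_spec(torch.cuda.is_available()))
```

This avoids the either/or fallback entirely: CPU collectives go through gloo and CUDA collectives through nccl within the same process group.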
Fix #4132.