# Torch-UCC

The Torch-UCC plugin is a research prototype that enables collective communication over UCC for distributed PyTorch applications that load the plugin at application runtime.

## Licenses

The torch-ucc plugin is licensed as:

## Contributor Agreement and Guidelines

In order to contribute to torch-ucc, please sign an appropriate Contributor Agreement, and follow these instructions when submitting contributions and changes.

## Build

Required packages:

* PyTorch
* UCX
* UCC

```shell
# Build
UCX_HOME=<PATH_TO_UCX> UCC_HOME=<PATH_TO_UCC> WITH_CUDA=<PATH_TO_CUDA> python setup.py install
```

* `UCX_HOME` (required): path to the UCX installation directory
* `UCC_HOME` (required): path to the UCC installation directory
* `WITH_CUDA` (optional): path to the CUDA installation directory; if `WITH_CUDA=no` is set, only CPU tensors are supported (see the example below)
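
For example, a CPU-only build might look like the following sketch; the installation prefixes are illustrative and should be replaced with the actual UCX and UCC locations on your system:

```shell
# Hypothetical paths; adjust to your UCX/UCC installation prefixes.
# WITH_CUDA=no builds the plugin with CPU tensor support only.
UCX_HOME=/opt/ucx UCC_HOME=/opt/ucc WITH_CUDA=no python setup.py install
```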

## Run

### Configuration variables

| Name | Values | Description |
|------|--------|-------------|
| `TORCH_UCC_ALLGATHER_BLOCKING_WAIT` | 0 or 1 | Sets behavior of the `wait` function for CUDA Allgather (see Async collectives in PyTorch). |
| `TORCH_UCC_ALLREDUCE_BLOCKING_WAIT` | 0 or 1 | Sets behavior of the `wait` function for CUDA Allreduce. |
| `TORCH_UCC_ALLTOALL_BLOCKING_WAIT` | 0 or 1 | Sets behavior of the `wait` function for CUDA Alltoall. |
| `TORCH_UCC_BCAST_BLOCKING_WAIT` | 0 or 1 | Sets behavior of the `wait` function for CUDA Bcast. |
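
A minimal sketch of how one of these variables might be used with an asynchronous collective. Assumptions not taken from this README: the variable is read from the environment when the process group is created, `RANK`/`WORLD_SIZE` are provided by the launcher, and the blocking/non-blocking interpretation mirrors PyTorch's convention for async collectives.

```python
import os
import torch
import torch.distributed as dist
import torch_ucc  # loading the plugin registers the "ucc" backend

# Assumption: the plugin reads this variable from the environment,
# so set it before the process group is created.
os.environ["TORCH_UCC_ALLREDUCE_BLOCKING_WAIT"] = "1"

dist.init_process_group("ucc",
                        rank=int(os.environ["RANK"]),
                        world_size=int(os.environ["WORLD_SIZE"]))

t = torch.ones(1024, device="cuda")
work = dist.all_reduce(t, async_op=True)  # returns a Work handle
# With 1, wait() is expected to block the host until the CUDA Allreduce
# completes; with 0, completion is handled on the CUDA stream instead.
work.wait()
```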
```shell
export LD_LIBRARY_PATH=<PATH_TO_UCX>/lib:<PATH_TO_UCC>/lib:$LD_LIBRARY_PATH
python example.py
```
```python
import torch
import torch.distributed as dist
import torch_ucc  # loading the plugin registers the "ucc" backend

# ... obtain comm_rank and comm_size, e.g. from the process launcher ...
dist.init_process_group('ucc', rank=comm_rank, world_size=comm_size)
# ... create send_tensor and recv_tensor ...
dist.all_to_all_single(recv_tensor, send_tensor)
```