This project is a Docker image containing a minimal NCCL test run across 2 GPUs. The host only needs the NVIDIA drivers, Docker, docker-compose, and the NVIDIA container extensions for Docker; all other software is contained in the Docker image.
Run sh run.sh to get results for 2 GPUs.
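run.sh itself is not reproduced here; as a rough sketch, a wrapper like it might do something along these lines, assuming the image is built from this repository's Dockerfile and tagged nccl-test (the tag and test parameters are illustrative assumptions, not taken from the repository):

$ docker build -t nccl-test .
$ docker run --rm --gpus all nccl-test ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 2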
This code was created to debug the "DDP across 2x A6000 locks up because IOMMU was enabled" issue, since my host system has no CUDA installed. It is not well crafted and was slapped together while debugging, so use at your own risk.
Useful references:
- PyTorch issue with the logs and the final solution
- pytorch-lightning issues for this problem
- NVIDIA NCCL troubleshooting guide (NCCL is used under the hood when DDP is used). It notes that instability and lockups can occur if the IOMMU is enabled, and it contains plenty of other troubleshooting information.
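If you suspect the same problem on your machine, you can check whether the IOMMU is enabled before running the tests (these commands are a generic sketch; exact flags and paths vary by distribution and CPU vendor):

$ cat /proc/cmdline                    # look for iommu-related boot flags
$ dmesg | grep -i -e iommu -e AMD-Vi   # check whether the kernel enabled an IOMMU

A common remedy is to add amd_iommu=off (AMD) or intel_iommu=off (Intel), or iommu=pt for pass-through, to GRUB_CMDLINE_LINUX in /etc/default/grub, then run update-grub and reboot.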
These tests check both the performance and the correctness of NCCL operations.
To build the tests, just type make.
If CUDA is not installed in /usr/local/cuda, you may specify CUDA_HOME. Similarly, if NCCL is not installed in /usr, you may specify NCCL_HOME.
$ make CUDA_HOME=/path/to/cuda NCCL_HOME=/path/to/nccl
NCCL tests rely on MPI to run across multiple processes, and hence multiple nodes. If you want to compile the tests with MPI support, you need to set MPI=1 and set MPI_HOME to the path where MPI is installed.
$ make MPI=1 MPI_HOME=/path/to/mpi CUDA_HOME=/path/to/cuda NCCL_HOME=/path/to/nccl
NCCL tests can run on multiple processes, multiple threads, and multiple CUDA devices per thread. The number of processes is managed by MPI and is therefore not passed to the tests as an argument. The total number of ranks (= CUDA devices) will be equal to (number of processes) * (number of threads) * (number of GPUs per thread).
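For example, 2 MPI processes with 2 threads each and 2 GPUs per thread give 2 * 2 * 2 = 8 ranks (the size range here is just an illustration):

$ mpirun -np 2 ./build/all_reduce_perf -t 2 -g 2 -b 8 -e 128M -f 2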
Run on 8 GPUs (-g 8), scanning from 8 Bytes to 128 MBytes:
$ ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 8
Run with MPI on 40 processes (potentially on multiple nodes) with 4 GPUs each:
$ mpirun -np 40 ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 4
See the Performance page for an explanation of the numbers, and in particular the "busbw" column.
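As a rough summary of that page (assuming the standard nccl-tests definitions), algbw is the data size divided by the operation time, and for AllReduce the bus bandwidth is derived as:

busbw = algbw * 2 * (n - 1) / n

where n is the number of ranks. busbw is meant to reflect the actual per-link traffic, so it can be compared against the hardware's peak bandwidth independently of the number of ranks.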
All tests support the same set of arguments; a combined example follows the list:
- Number of GPUs
  - -t,--nthreads <num threads>: number of threads per process. Default: 1.
  - -g,--ngpus <GPUs per thread>: number of GPUs per thread. Default: 1.
- Sizes to scan
  - -b,--minbytes <min size in bytes>: minimum size to start with. Default: 32M.
  - -e,--maxbytes <max size in bytes>: maximum size to end at. Default: 32M.
  - Increments can be either fixed or a multiplication factor; only one of the two should be used:
    - -i,--stepbytes <increment size>: fixed increment between sizes. Default: (max-min)/10.
    - -f,--stepfactor <increment factor>: multiplication factor between sizes. Default: disabled.
- NCCL operations arguments
  - -o,--op <sum/prod/min/max/avg/all>: specify which reduction operation to perform. Only relevant for reduction operations like Allreduce, Reduce or ReduceScatter. Default: Sum.
  - -d,--datatype <nccltype/all>: specify which datatype to use. Default: Float.
  - -r,--root <root/all>: specify which root to use. Only for operations with a root like broadcast or reduce. Default: 0.
- Performance
  - -n,--iters <iteration count>: number of iterations. Default: 20.
  - -w,--warmup_iters <warmup iteration count>: number of warmup iterations (not timed). Default: 5.
  - -m,--agg_iters <aggregation count>: number of operations to aggregate together in each iteration. Default: 1.
  - -a,--average <0/1/2/3>: report performance as an average across all ranks (MPI=1 only). <0=Rank0, 1=Avg, 2=Min, 3=Max>. Default: 1.
- Test operation
  - -p,--parallel_init <0/1>: use threads to initialize NCCL in parallel. Default: 0.
  - -c,--check <0/1>: check correctness of results. This can be quite slow on large numbers of GPUs. Default: 1.
  - -z,--blocking <0/1>: make NCCL collectives blocking, i.e. have CPUs wait and sync after each collective. Default: 0.
  - -G,--cudagraph <num graph launches>: capture iterations as a CUDA graph and then replay the specified number of times. Default: 0.
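For instance, a single-process run on 2 GPUs combining several of these flags (the values are illustrative, not taken from this repository):

$ ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 2 -n 50 -w 10 -c 1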
NCCL tests are provided under the BSD license. All source code and accompanying documentation is copyright (c) 2016-2021, NVIDIA CORPORATION. All rights reserved.