"""
MPI Bandwidth
Contents:
CentOS 7
GNU compilers (upstream)
Mellanox OFED
OpenMPI
PMI2 (SLURM)
UCX
Building:
1. Docker to Singularity
$ hpccm --recipe mpi_bandwidth.py > Dockerfile
$ sudo docker build -t mpi_bw -f Dockerfile .
$ singularity build mpi_bw.sif docker-daemon://mpi_bw:latest
2. Singularity
$ hpccm --recipe mpi_bandwidth.py --format singularity --singularity-version=3.2 > Singularity.def
$ sudo singularity build mpi_bw.sif Singularity.def
Running with Singularity:
1. Using a compatible host MPI runtime
$ mpirun -n 2 singularity run mpi_bw.sif mpi_bandwidth
2. Using the MPI runtime inside the container
$ singularity run mpi_bw.sif mpirun -n 2 -H node1:1,node2:1 --launch-agent "singularity exec \$SINGULARITY_CONTAINER orted" mpi_bandwidth
3. Using SLURM srun
$ srun -n 2 --mpi=pmi2 singularity run mpi_bw.sif mpi_bandwidth
"""
Stage0 += comment(__doc__, reformat=False)

# CentOS base image
Stage0 += baseimage(image='centos:7', _as='build')

# GNU compilers
Stage0 += gnu(fortran=False)

# Mellanox OFED
Stage0 += mlnx_ofed()

# UCX
Stage0 += ucx(cuda=False)

# PMI2
Stage0 += slurm_pmi2()

# OpenMPI (use UCX instead of IB directly)
Stage0 += openmpi(cuda=False, infiniband=False, pmi='/usr/local/slurm-pmi2',
                  ucx='/usr/local/ucx')

# MPI Bandwidth: download the LLNL MPI bandwidth example source and build it
# with the MPI compiler wrapper
Stage0 += shell(commands=[
    'wget -q -nc --no-check-certificate -P /var/tmp https://hpc-tutorials.llnl.gov/mpi/examples/mpi_bandwidth.c',
    'mpicc -o /usr/local/bin/mpi_bandwidth /var/tmp/mpi_bandwidth.c'])

### Runtime distributable stage
Stage1 += baseimage(image='centos:7')
Stage1 += Stage0.runtime()
Stage1 += copy(_from='build', src='/usr/local/bin/mpi_bandwidth',
               dest='/usr/local/bin/mpi_bandwidth')
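
# Optional sketch, not part of the upstream recipe: give the runtime image a
# default runscript / entrypoint so that 'singularity run mpi_bw.sif' launches
# the benchmark without naming the binary. This assumes the hpccm 'runscript'
# primitive; with it in place, the trailing 'mpi_bandwidth' argument shown in
# the run examples above is no longer required.
Stage1 += runscript(commands=['/usr/local/bin/mpi_bandwidth'])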