Commit
Merge pull request NREL#645 from kphommat/gh-pages-typo
Fixing typos
yandthj authored Jun 17, 2024
2 parents 06a6912 + 16b0607 commit cf0e78a
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions docs/Friendly_User/BuildandRun/gpubuildandrun.md
@@ -32,9 +32,9 @@ ml craype-x86-genoa
This gives us a standard, known programming environment from which to start. The "source" line gives us access to a new set of modules. ml craype-x86-genoa sets optimizations for the CPUs on the GPU nodes.


- After this header we load module specific to the example. We then compile and run the example.
+ After this header we load modules specific to the example. We then compile and run the example.

- **If the file "runall" is sourced (after getting an interactive session on 2 gpu nodes) all examples will are run. This takes about 20 minutes. You can also sbatch script.**
+ **If the file "runall" is sourced (after getting an interactive session on 2 gpu nodes) all examples will run. This takes about 20 minutes. You can also sbatch script.**



2 changes: 1 addition & 1 deletion docs/Friendly_User/index.md
@@ -14,7 +14,7 @@ An example GPU job allocation command:
```
salloc --time=2:00:00 --reservation=<friendly user reservation> --partition=gpu-h100 --account=<project handle> --nodes=1 -n 1 --mem-per-cpu=8G --gres=gpu:h100:<# of GPUS per node>
```

- You're automatically given access to all of the memory for the GPU or GPUs requested (80GB per GPU). Note that you'll need to use `-n` to request the number of CPU cores needed, and `--mem` or `--mem-per-cpu` to request the amount of CPU memory needed. You can use `--exclusive` to requst all of the resources on the GPU node.
+ You're automatically given access to all of the memory for the GPU or GPUs requested (80GB per GPU). Note that you'll need to use `-n` to request the number of CPU cores needed, and `--mem` or `--mem-per-cpu` to request the amount of CPU memory needed. You can use `--exclusive` to request all of the resources on the GPU node.

You can verify whether the GPU device is found by running `nvidia-smi` after landing on the node. If you receive any kind of output other than a `No devices were found` message, there is a GPU waiting to be used by your software.
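The `nvidia-smi` check described above can be scripted so a job aborts early if no GPU is visible. A minimal sketch, assuming a POSIX shell; the helper name `check_gpu` and the argument-passing style are illustrative, not part of the NREL docs:

```shell
#!/bin/sh
# Interpret nvidia-smi output: the exact string "No devices were found"
# means no GPU is visible on the node; any other output suggests a GPU
# is present. The output is passed in as an argument so the logic can
# be exercised off-node.
check_gpu() {
  if [ "$1" = "No devices were found" ]; then
    echo "no-gpu"
  else
    echo "gpu-present"
  fi
}

# On a GPU node you would call: check_gpu "$(nvidia-smi 2>&1)"
check_gpu "No devices were found"
```

Run on an allocated node, this prints `gpu-present` when a device is found, so a job script can bail out with `exit 1` on `no-gpu` before launching GPU work.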

