diff --git a/docs/Friendly_User/BuildandRun/gpubuildandrun.md b/docs/Friendly_User/BuildandRun/gpubuildandrun.md
index e5034caf7..76276ff11 100644
--- a/docs/Friendly_User/BuildandRun/gpubuildandrun.md
+++ b/docs/Friendly_User/BuildandRun/gpubuildandrun.md
@@ -32,9 +32,9 @@ ml craype-x86-genoa
 This gives us a standard, known programming environment from which to start. The "source" line give us access to a new set of modules. ml craype-x86-genoa sets optimizations for the cpus on the GPU nodes.
 
-After this header we load module specific to the example. We then compile and run the example.
+After this header we load modules specific to the example. We then compile and run the example.
 
-**If the file "runall" is sourced (after getting an interactive session on 2 gpu nodes) all examples will are run. This takes about 20 minutes. You can also sbatch script.**
+**If the file "runall" is sourced (after getting an interactive session on 2 GPU nodes) all examples will run. This takes about 20 minutes. You can also sbatch the script.**
 
diff --git a/docs/Friendly_User/index.md b/docs/Friendly_User/index.md
index 0917fc3cd..9f47d585c 100644
--- a/docs/Friendly_User/index.md
+++ b/docs/Friendly_User/index.md
@@ -14,7 +14,7 @@ An example GPU job allocation command:
 salloc --time=2:00:00 --reservation= --partition=gpu-h100 --account= --nodes=1 -n 1 --mem-per-cpu=8G --gres=gpu:h100:<# of GPUS per node>
 ```
 
-You're automatically given access to all of the memory for the GPU or GPUs requested (80GB per GPU). Note that you'll need to use `-n` to request the number of CPU cores needed, and `--mem` or `--mem-per-cpu` to request the amount of CPU memory needed. You can use `--exclusive` to requst all of the resources on the GPU node.
+You're automatically given access to all of the memory for the GPU or GPUs requested (80GB per GPU). Note that you'll need to use `-n` to request the number of CPU cores needed, and `--mem` or `--mem-per-cpu` to request the amount of CPU memory needed. You can use `--exclusive` to request all of the resources on the GPU node.
 
 You can verify whether the GPU device is found by running `nvidia-smi` after landing on the node. If you receive any kind of output other than a `No devices were found` message, there is a GPU waiting to be used by your software.
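
Since the `index.md` hunk above mentions both the `salloc` request and submitting via `sbatch`, here is a minimal batch-script sketch of the same request. It is illustrative only: the reservation/account values are placeholders, the GPU count of 1 is an assumption, and the final `srun` line stands in for whatever the example actually compiles and runs.

```bash
#!/bin/bash
# Illustrative sketch: sbatch counterpart of the salloc example above.
# <reservation>, <account>, and the srun target are placeholders.
#SBATCH --time=2:00:00
#SBATCH --reservation=<reservation>
#SBATCH --account=<account>
#SBATCH --partition=gpu-h100
#SBATCH --nodes=1
#SBATCH -n 1
#SBATCH --mem-per-cpu=8G
#SBATCH --gres=gpu:h100:1      # one H100; increase the count for more GPUs per node

# Verify the GPU is visible; "No devices were found" means it is not.
nvidia-smi

# Placeholder for the actual application or example run.
srun ./your_gpu_application
```

Submitting this file with `sbatch` requests the same resources as the interactive `salloc` example, with GPU memory granted automatically and CPU cores/memory controlled by `-n` and `--mem-per-cpu`.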