From 4fdfd352695c5d8b3500711b78549061cdaac52d Mon Sep 17 00:00:00 2001
From: Phommatha
Date: Thu, 13 Jun 2024 10:10:47 -0600
Subject: [PATCH 1/2] fixing typos

---
 docs/Friendly_User/BuildandRun/gpubuildandrun.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/Friendly_User/BuildandRun/gpubuildandrun.md b/docs/Friendly_User/BuildandRun/gpubuildandrun.md
index e5034caf7..76276ff11 100644
--- a/docs/Friendly_User/BuildandRun/gpubuildandrun.md
+++ b/docs/Friendly_User/BuildandRun/gpubuildandrun.md
@@ -32,9 +32,9 @@ ml craype-x86-genoa
 This gives us a standard, known programming environment from which to start. The "source" line give us access to a new set of modules. ml craype-x86-genoa sets optimizations for the cpus on the GPU nodes.
 
-After this header we load module specific to the example. We then compile and run the example.
+After this header we load modules specific to the example. We then compile and run the example.
 
-**If the file "runall" is sourced (after getting an interactive session on 2 gpu nodes) all examples will are run. This takes about 20 minutes. You can also sbatch script.**
+**If the file "runall" is sourced (after getting an interactive session on 2 gpu nodes) all examples will run. This takes about 20 minutes. You can also sbatch the script.**

From 16b060769c4e2f5831c5cfffaf2e74aa6d897614 Mon Sep 17 00:00:00 2001
From: Phommatha
Date: Thu, 13 Jun 2024 10:17:53 -0600
Subject: [PATCH 2/2] typos I forgot to commit earlier

---
 docs/Friendly_User/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Friendly_User/index.md b/docs/Friendly_User/index.md
index 0917fc3cd..9f47d585c 100644
--- a/docs/Friendly_User/index.md
+++ b/docs/Friendly_User/index.md
@@ -14,7 +14,7 @@ An example GPU job allocation command:
 salloc --time=2:00:00 --reservation= --partition=gpu-h100 --account= --nodes=1 -n 1 --mem-per-cpu=8G --gres=gpu:h100:<# of GPUS per node>
 ```
 
-You're automatically given access to all of the memory for the GPU or GPUs requested (80GB per GPU). Note that you'll need to use `-n` to request the number of CPU cores needed, and `--mem` or `--mem-per-cpu` to request the amount of CPU memory needed. You can use `--exclusive` to requst all of the resources on the GPU node.
+You're automatically given access to all of the memory for the GPU or GPUs requested (80GB per GPU). Note that you'll need to use `-n` to request the number of CPU cores needed, and `--mem` or `--mem-per-cpu` to request the amount of CPU memory needed. You can use `--exclusive` to request all of the resources on the GPU node.
 
 You can verify whether the GPU device is found by running `nvidia-smi` after landing on the node. If you receive any kind of output other than a `No devices were found` message, there is a GPU waiting to be used by your software.