From 6a0fbc851c00a4ded1b9487bdb2f67c85f9c2667 Mon Sep 17 00:00:00 2001
From: Taylor Aubry
Date: Tue, 29 Aug 2023 17:02:25 -0600
Subject: [PATCH 1/4] updates to friendly users VASP documentation for VASP module use

---
 docs/Friendly_User/Applications/vasp.md | 75 ++++++++++++++++---------
 1 file changed, 48 insertions(+), 27 deletions(-)

diff --git a/docs/Friendly_User/Applications/vasp.md b/docs/Friendly_User/Applications/vasp.md
index 663254120..03f055890 100644
--- a/docs/Friendly_User/Applications/vasp.md
+++ b/docs/Friendly_User/Applications/vasp.md
@@ -1,50 +1,36 @@
-## VASP modules
+## VASP modules on Kestrel
 
-Section coming soon.
-
-## Running VASP
-
-We have found that it is optimal to run an intel toolchain build of VASP using cray-mpich-abi at runtime. Cray-mpich-abi has several dependencies on cray network modules, so the easiest way to load it is to first load ```PrgEnv-intel``` and then swap the default cray-mpich module for the cray-mpich-abi ```module swap cray-mpich cray-mpich-abi```. You must then load your intel compilers and math libraries, and unload cray's libsci. A sample script showing all of this is in the dropdown below.
+There are modules for CPU builds of VASP 5.4.4 and VASP 6.3.2 each with solvation, transision state tools, and BEEF-vdW functionals. These modules can be loaded with ```module load vasp/5.4.4``` or ```module load vasp/6.3.2```. A sample job script is shown below.
 
 !!! Note
     It is necessary to specify the launcher using srun --mpi=pmi2
 
-??? Sample job script for your own vasp build
+??? example "Sample job script: using modules"
 
     ```
     #!/bin/bash
     #SBATCH --nodes=2
     #SBATCH --tasks-per-node=104
     #SBATCH --time=2:00:00
-    #SBATCH --mem=0 # ensures you are given all the memory on a node
+    #SBATCH --account=
+    #SBATCH --job-name=
 
-    # Load cray-mpich-abi and its dependencies within PrgEnv-intel, intel compilers, mkl, and unload cray's libsci
-    source /nopt/nrel/apps/env.sh
-    module purge
-    module load PrgEnv-intel
-    module swap cray-mpich cray-mpich-abi
-    module unload cray-libsci/22.10.1.2
-    module load intel-oneapi-compilers/2022.1.0
-    module load intel-oneapi-mkl/2023.0.0-intel
+    source /nopt/nrel/apps/env.sh #the need for this will eventually be removed
+    module load vasp/6.3.2
 
-    set -x
-    export OMP_NUM_THREADS=1 #turns off multithreading
-    export VASP_PATH=/PATH/TO/YOUR/vasp_exe
-
-
-    srun --mpi=pmi2 ${VASP_PATH}/vasp_std |& tee out
-
-    #Note: it may be optimal to run with more processers per task, especially for heavier gw calculations e.g:
-    srun --mpi=pmi2 -ntasks 64 --ntasks-per-node=32 ${VASP_PATH}/vasp_std |& tee out
+    srun --mpi=pmi2 vasp_std |& tee out
 
     ```
 
+## Compiling VASP yourself
+
+This section has reccomendations for toolchains to use for building and running VASP. Please read carefully before compilling on Kestrel's cray architecture.
 
-## Building VASP
+### Building VASP
 
 We recomend building vasp with a full intel toolchain and launching with the cray-mpich-abi at runtime. Additionally, you should build on a compute node so that you have the same architecture as at runtime:
 ```
-salloc -N 1 -p standard -t TIME [-A account once accounting has been implemented]
+salloc -N 1 -p standard -t TIME -A ACCOUNT
 ```
 Then, load appropriate modules for your mpi, compilers, and math packages:
 ```
@@ -59,3 +45,38 @@ module load intel-oneapi-mkl
 On Kestrel, any modules you have loaded on the login node will be copied to a compute node, and there are many loaded by default for the cray programming environment. Make sure you are using what you intend to. 
 
 Sample makefiles for vasp5 and vasp6 on Kestrel can be found in our [Kestrel Repo](https://github.com/NREL/HPC/tree/master/kestrel) under the vasp folder.
+
+### Running your build
+
+We have found that it is optimal to run an intel toolchain build of VASP using cray-mpich-abi at runtime. Cray-mpich-abi has several dependencies on cray network modules, so the easiest way to load it is to first load ```PrgEnv-intel``` and then swap the default cray-mpich module for the cray-mpich-abi ```module swap cray-mpich cray-mpich-abi```. You must then load your intel compilers and math libraries, and unload cray's libsci. A sample script showing all of this is in the dropdown below.
+
+!!! Note
+    It is necessary to specify the launcher using srun --mpi=pmi2
+
+??? example "Sample job script: using your own build"
+
+    ```
+    #!/bin/bash
+    #SBATCH --nodes=2
+    #SBATCH --tasks-per-node=104
+    #SBATCH --time=2:00:00
+    #SBATCH --account=
+    #SBATCH --job-name=
+
+    # Load cray-mpich-abi and its dependencies within PrgEnv-intel, intel compilers, mkl, and unload cray's libsci
+    source /nopt/nrel/apps/env.sh
+    module purge
+    module load PrgEnv-intel
+    module swap cray-mpich cray-mpich-abi
+    module unload cray-libsci
+    module load intel-oneapi-compilers
+    module load intel-oneapi-mkl
+
+    export VASP_PATH=/PATH/TO/YOUR/vasp_exe
+
+    srun --mpi=pmi2 ${VASP_PATH}/vasp_std |& tee out
+
+    ```
+
+

From 53bc4cdefd03fac63adc2912105d4d92b688bf6d Mon Sep 17 00:00:00 2001
From: Haley Yandt <46908710+yandthj@users.noreply.github.com>
Date: Wed, 30 Aug 2023 08:04:57 -0600
Subject: [PATCH 2/4] Update docs/Friendly_User/Applications/vasp.md

---
 docs/Friendly_User/Applications/vasp.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Friendly_User/Applications/vasp.md b/docs/Friendly_User/Applications/vasp.md
index 03f055890..96032c6dd 100644
--- a/docs/Friendly_User/Applications/vasp.md
+++ b/docs/Friendly_User/Applications/vasp.md
@@ -24,7 +24,7 @@ There are modules for CPU builds of VASP 5.4.4 and VASP 6.3.2 each with solvatio
 
 ## Compiling VASP yourself
 
-This section has reccomendations for toolchains to use for building and running VASP. Please read carefully before compilling on Kestrel's cray architecture.
+This section has recommendations for toolchains to use for building and running VASP. Please read carefully before compiling on Kestrel's cray architecture.
 
 ### Building VASP
 

From 41523ece8f396cbf0c58a5c5a864bd6a0b6eb1d6 Mon Sep 17 00:00:00 2001
From: Haley Yandt <46908710+yandthj@users.noreply.github.com>
Date: Wed, 30 Aug 2023 08:05:02 -0600
Subject: [PATCH 3/4] Update docs/Friendly_User/Applications/vasp.md

---
 docs/Friendly_User/Applications/vasp.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Friendly_User/Applications/vasp.md b/docs/Friendly_User/Applications/vasp.md
index 96032c6dd..073bbc0ac 100644
--- a/docs/Friendly_User/Applications/vasp.md
+++ b/docs/Friendly_User/Applications/vasp.md
@@ -28,7 +28,7 @@ This section has recommendations for toolchains to use for building and running
 
 ### Building VASP
 
-We recomend building vasp with a full intel toolchain and launching with the cray-mpich-abi at runtime. Additionally, you should build on a compute node so that you have the same architecture as at runtime:
+We recommend building vasp with a full intel toolchain and launching with the cray-mpich-abi at runtime. Additionally, you should build on a compute node so that you have the same architecture as at runtime:
 ```
 salloc -N 1 -p standard -t TIME -A ACCOUNT
 ```

From f494e7fe37429e9e9f736b0501d762a982f306ca Mon Sep 17 00:00:00 2001
From: Haley Yandt <46908710+yandthj@users.noreply.github.com>
Date: Wed, 30 Aug 2023 08:05:09 -0600
Subject: [PATCH 4/4] Update docs/Friendly_User/Applications/vasp.md

---
 docs/Friendly_User/Applications/vasp.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Friendly_User/Applications/vasp.md b/docs/Friendly_User/Applications/vasp.md
index 073bbc0ac..c9cfb5b9c 100644
--- a/docs/Friendly_User/Applications/vasp.md
+++ b/docs/Friendly_User/Applications/vasp.md
@@ -1,6 +1,6 @@
 ## VASP modules on Kestrel
 
-There are modules for CPU builds of VASP 5.4.4 and VASP 6.3.2 each with solvation, transision state tools, and BEEF-vdW functionals. These modules can be loaded with ```module load vasp/5.4.4``` or ```module load vasp/6.3.2```. A sample job script is shown below.
+There are modules for CPU builds of VASP 5.4.4 and VASP 6.3.2 each with solvation, transition state tools, and BEEF-vdW functionals. These modules can be loaded with ```module load vasp/5.4.4``` or ```module load vasp/6.3.2```. A sample job script is shown below.
 
 !!! Note
     It is necessary to specify the launcher using srun --mpi=pmi2
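The recurring point in this patch series is that an Intel-toolchain `vasp_std` should resolve Cray's ABI-compatible MPICH libraries at run time rather than the MPI it was built against. A quick way to verify that on a compute node, after loading the modules, is to inspect the binary's dynamic dependencies with `ldd`. This sketch is an editor's illustration, not part of the patched documentation; `/PATH/TO/YOUR/vasp_exe` is the same placeholder the docs use, and the check assumes a dynamically linked build.

```shell
#!/bin/bash
# Sanity check: show which MPI shared library the loader will resolve for
# vasp_std. With cray-mpich-abi loaded, libmpi should come from the Cray
# MPICH ABI-compatible library path, not the MPI used at build time.
# VASP_PATH is a placeholder -- point it at your own build directory.
VASP_PATH="${VASP_PATH:-/PATH/TO/YOUR/vasp_exe}"

if [ -x "${VASP_PATH}/vasp_std" ]; then
    # Print only the MPI-related entries of the dependency list.
    ldd "${VASP_PATH}/vasp_std" | grep -i 'mpi'
else
    echo "vasp_std not found under ${VASP_PATH}; set VASP_PATH first"
fi
```

If the `grep` output points into a cray-mpich-abi directory, the `module swap` took effect; if it still shows the build-time MPICH, recheck the module load order before launching with `srun --mpi=pmi2`.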