...

Note that these examples assume one GPU per task and request a share of each node's CPUs and memory proportional to the fraction of GPUs used. The requested CPU cores and RAM can be increased or decreased to match the needs of the simulation.

Slurm Configuration

NOTS (commons)

This partition includes 16 Volta GPU nodes, each equipped with 80 CPUs and 182 GB of RAM. In addition, each node includes 2 NVIDIA Volta GPUs.

Code Block (bash)
#SBATCH --account=commons
#SBATCH --partition=commons
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=40
#SBATCH --mem=90G
#SBATCH --gres=gpu:1
 
ml gomkl/2021a OpenMM/7.7.0-CUDA-11.4.2
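
For reference, the header above can be assembled into a complete submission script. The sketch below assumes the OpenMM simulation is driven by a Python script named run_simulation.py; the job name, walltime, and script name are placeholders to adapt to your own workflow.

Code Block (bash)
#!/bin/bash
#SBATCH --job-name=openmm-commons
#SBATCH --account=commons
#SBATCH --partition=commons
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=40
#SBATCH --mem=90G
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00

# Load the OpenMM build available on NOTS
ml gomkl/2021a OpenMM/7.7.0-CUDA-11.4.2

# Run the simulation (placeholder driver script)
srun python run_simulation.py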

NOTS (ctbp-common)

This partition includes two Ampere GPU nodes, each equipped with an AMD EPYC chip featuring 16 CPUs and 512 GB of RAM. In addition, each node includes 8 NVIDIA A40 GPUs with 48 GB of memory each.

Code Block (bash)
#SBATCH --account=ctbp-common
#SBATCH --partition=ctbp-common
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --mem=64G
#SBATCH --gres=gpu:1
 
ml gomkl/2021a OpenMM/7.7.0-CUDA-11.4.2

NOTS (ctbp-onuchic)

This partition includes one GPU node, equipped with an AMD EPYC chip featuring 16 CPUs and 512 GB of RAM. In addition, the node includes 8 NVIDIA A40 GPUs with 48 GB of memory each.

Code Block (bash)
#SBATCH --account=ctbp-onuchic
#SBATCH --partition=ctbp-onuchic
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --mem=64G
#SBATCH --gres=gpu:1
 
ml gomkl/2021a OpenMM/7.7.0-CUDA-11.4.2

ARIES

This partition includes 19 GPU nodes, each equipped with an AMD EPYC chip featuring 48 CPUs and 512 GB of RAM. In addition, each node includes 8 AMD MI50 GPUs with 32 GB of memory each. To use this queue efficiently, a job should launch 8 processes in parallel with similar runtimes, so that all 8 GPUs on the node stay busy for the duration of the job (see the sketch after the alternative header below).

Code Block (bash)
#SBATCH --account=commons
#SBATCH --partition=commons
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --export=ALL
#SBATCH --gres=gpu:8
 
module load foss/2020b OpenMM

Alternative header for running 8 jobs in parallel:

Code Block (bash)
#SBATCH --account=commons
#SBATCH --partition=commons
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=6
#SBATCH --threads-per-core=1
#SBATCH --mem-per-cpu=3G
#SBATCH --gres=gpu:8
#SBATCH --time=24:00:00
#SBATCH --export=ALL
 
module load foss/2020b OpenMM
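
One way to realize the eight parallel processes described above is to launch one job step per GPU inside a single allocation and wait for all of them to finish. The loop below is a sketch meant to follow the alternative header; the driver script run_replica.py and its --replica argument are placeholders for whatever per-process OpenMM script is actually used.

Code Block (bash)
# Launch 8 independent simulations, one per GPU, inside one allocation
for i in $(seq 0 7); do
    srun --exclusive --ntasks=1 --cpus-per-task=6 --gres=gpu:1 \
         python run_replica.py --replica "$i" &
done

# Wait for all 8 background job steps to finish before the job exits
wait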

PODS

This partition includes 80 GPU nodes, each equipped with an AMD EPYC chip featuring 48 CPUs and 512 GB of RAM. In addition, each node includes 8 AMD MI50 GPUs with 32 GB of memory each.

Code Block (bash)
#SBATCH --account=commons
#SBATCH --partition=commons
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --export=ALL
#SBATCH --gres=gpu:1
 
module load foss/2020b OpenMM
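
Once the header and module load are saved in a batch script (for example run_openmm.slurm; the file name is a placeholder), the job is submitted and monitored with the standard Slurm commands:

Code Block (bash)
# Submit the batch script; Slurm prints the assigned job ID
sbatch run_openmm.slurm

# List your pending and running jobs
squeue -u $USER

# Show detailed information for a specific job (replace <jobid>)
scontrol show job <jobid>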

Remote access to the clusters

...