...

https://www.crc.rice.edu/app/rice_signup.php

Slurm Configuration

To obtain information about the number of nodes, number of CPUs, memory, and number of GPUs in each cluster, use the following command:

sinfo -o "%N %c %m %f %G " -p your_partition

NOTS (commons)

This partition includes 16 Volta GPU nodes, each equipped with 80 CPUs and 182 GB of RAM. In addition, each node includes two NVIDIA Volta GPUs.
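
For example, assuming commons is the partition name for these nodes (an assumption based on the section title above), the sinfo query becomes:

sinfo -o "%N %c %m %f %G " -p commons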

...

#!/bin/bash
#SBATCH --account=ctbp-onuchic
#SBATCH --partition=ctbp-onuchic
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --mem=64G
#SBATCH --gres=gpu:1         # request one GPU on the node

# Load the OpenMM toolchain built with CUDA support
ml gomkl/2021a OpenMM/7.7.0-CUDA-11.4.2
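
Submit the script with sbatch; the file name below is illustrative:

sbatch nots_job.slurm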

ARIES

This partition includes 19 GPU nodes, each equipped with an AMD EPYC chip featuring 48 CPUs and 512 GB of RAM. In addition, each node includes 8 AMD MI50 GPUs with 32 GB of memory each. To submit a job to this queue, launch 8 processes in parallel, each with a similar runtime, as sketched below; this keeps all 8 GPUs busy and minimizes idle time.
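
A minimal batch-script sketch of this pattern follows; the partition name, the run_replica.sh script, and the CPU split are illustrative assumptions, not values from this page:

#!/bin/bash
#SBATCH --partition=your_partition   # replace with the ARIES partition name
#SBATCH --nodes=1
#SBATCH --ntasks=8                   # one task per GPU
#SBATCH --cpus-per-task=6            # 48 CPUs / 8 GPUs
#SBATCH --gres=gpu:8                 # request all 8 MI50 GPUs on the node

# Launch 8 processes with similar runtimes, one pinned to each GPU
# (run_replica.sh is a hypothetical per-replica driver script)
for i in $(seq 0 7); do
    HIP_VISIBLE_DEVICES=$i ./run_replica.sh "$i" &
done
wait   # return only after all 8 GPU processes finish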

...

You should be able to connect to the compute servers without being prompted for a password.
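
To verify, an ssh command such as the following (the host name is a placeholder) should print the remote host name without asking for a password:

ssh compute-node hostname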

Slurm Configuration

To obtain information about the number of nodes, number of CPUs, memory, and number of GPUs in each cluster, use the following command:

sinfo -o "%N %c %m %f %G " -p your_partition

More Information

Attachments
ARIES_Quick_Start_wl52_20220406.pdf

...