
...

Code Block
languagebash
#SBATCH --account=ctbp-common
#SBATCH --partition=ctbp-common
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --mem=64G
#SBATCH --gres=gpu:1
 
ml gomkl/2021a OpenMM/7.7.0-CUDA-11.4.2

NOTS (ctbp-onuchic)

This partition includes one GPU node, equipped with an AMD EPYC chip featuring 16 CPUs and 512 GB of RAM, plus 8 NVIDIA A40 GPUs with 48 GB of memory each.

Code Block
languagebash
#SBATCH --account=ctbp-onuchic
#SBATCH --partition=ctbp-onuchic
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --mem=64G
#SBATCH --gres=gpu:1
 
ml gomkl/2021a OpenMM/7.7.0-CUDA-11.4.2

OpenMM on NOTS

You can deploy and run your own version of OpenMM via a conda environment. To do so, first install OpenMM inside a conda environment, using the modules already installed on NOTS. Note that in order to run on NVIDIA GPUs, OpenMM has to be compiled with CUDA/<version>, so the cudatoolkit version in the environment should match the CUDA module loaded on the cluster.

Code Block
languagebash
titleConda environment with OpenMM
# Load conda and gpu modules
module load Anaconda3/2022.05 CUDA/11.4.2

# Create the openmm environment
conda create --prefix $HOME/openmm

# Activate the new env.
source /opt/apps/software/Anaconda3/2022.05/bin/activate
conda activate $HOME/openmm

# Then install OpenMM; you can install your favorite MD wrappers at the same time
conda install -c conda-forge openmm cudatoolkit=11.4.2 h5py openmichrom opensmog
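
Once the environment is built, a quick sanity check is to run OpenMM's bundled installation test (available in OpenMM 7.6 and later). Run it from a GPU node so the CUDA platform is visible:

Code Block
languagebash
titleVerify the OpenMM installation
# Lists the available platforms (Reference, CPU, CUDA, OpenCL)
# and checks that computed forces agree between them
python -m openmm.testInstallation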


Below is an example of a Slurm script that runs OpenMM from this environment.

Code Block
languagebash
titleSlurm running OpenMM via environment
#!/bin/bash -l

#SBATCH --account=ctbp-common
#SBATCH --partition=ctbp-common
#SBATCH --job-name=Template-OPENMM
#SBATCH --ntasks=1
#SBATCH --threads-per-core=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2G
#SBATCH --gres=gpu:1
#SBATCH --time=00:05:00
#SBATCH --export=ALL

module purge
module load Anaconda3/2022.05 CUDA/11.4.2
source /opt/apps/software/Anaconda3/2022.05/bin/activate
conda activate $HOME/openmm

python your_script.py
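
The script is submitted and monitored with the standard Slurm commands (run_openmm.slurm is a placeholder name for the file above):

Code Block
languagebash
titleSubmitting the job
# Submit the batch script to the scheduler
sbatch run_openmm.slurm

# Check the state of your queued and running jobs
squeue -u $USER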

ARIES

This partition includes 19 GPU nodes, each equipped with an AMD EPYC chip featuring 48 CPUs and 512 GB of RAM. In addition, each node includes 8 AMD MI50 GPUs with 32 GB of memory each. Jobs submitted to this queue should launch 8 processes in parallel, one per GPU, each with a similar runtime so that no GPU sits idle while the others finish. This ensures that all of the GPUs are used efficiently, as sketched below.
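
A minimal sketch of such a submission script, launching one background process per GPU. The partition name and the input file names (run_0.py through run_7.py) are placeholders; adjust them, along with the module/environment setup, to whatever is actually deployed on ARIES:

Code Block
languagebash
titleSlurm script using all 8 GPUs on an ARIES node (sketch)
#!/bin/bash -l

#SBATCH --partition=aries           # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=2
#SBATCH --gres=gpu:8                # request all 8 MI50 GPUs on the node

# Launch one process per GPU in the background. HIP_VISIBLE_DEVICES
# pins each process to a single AMD GPU (the ROCm analogue of
# CUDA_VISIBLE_DEVICES).
for gpu in $(seq 0 7); do
    HIP_VISIBLE_DEVICES=$gpu python run_${gpu}.py &
done

# Wait for all 8 background processes to finish before the job exits
wait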

...