
...

Jobs are allocated by whole nodes, not by individual GPUs. Since each node has 8 GPUs, every submitted job should be able to make use of at least 8 GPUs. Below we provide examples of how to use this resource effectively.
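For example, one way to keep all 8 GPUs of a node busy is to launch eight independent single-GPU runs inside a single job allocation. A minimal sketch of such a job-script fragment (run.py and its --seed flag are placeholders for your own OpenMM script and arguments, not files provided by the cluster):

```shell
# Inside a job that was allocated one full node (8 GPUs), start one
# independent simulation per GPU in the background, then wait for all
# of them to finish. run.py and --seed are placeholders.
for gpu in $(seq 0 7); do
    CUDA_VISIBLE_DEVICES=$gpu python3 run.py --seed $gpu > run_${gpu}.log 2>&1 &
done
wait
```

Each process sees only its own GPU via CUDA_VISIBLE_DEVICES, so the runs do not contend for the same device.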

...

This container does not include OpenSMOG, which must be installed with pip3:

Code Block
pip3 install OpenSMOG

A usage example, including a bash submission script, an OpenMM Python run script, and input files, can be downloaded below.

...

Code Block: Job submission script (bash)
#!/bin/bash -l
#SBATCH --job-name=ctbpexample
#SBATCH --nodes=1
#SBATCH --cpus-per-task=96        # set to 96 if not using MPI (OpenMM does not use MPI)
#SBATCH --tasks-per-node=1
#SBATCH --export=ALL
#SBATCH --mem=0                   # request all memory on the node (each GPU is assigned 32 GB by default)
#SBATCH --gres=gpu:8
#SBATCH --time=1-00:00:00         # maximum run time is 1 day
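The #SBATCH header above only reserves resources; the commands that follow it do the actual work. A minimal sketch of a script body, assuming the run is launched directly with python3 (run.py is a placeholder for your own OpenMM/OpenSMOG script, not a file provided by the cluster):

```shell
# Commands below the #SBATCH header are executed on the allocated node.
# run.py is a placeholder for your own OpenMM/OpenSMOG run script.
cd $SLURM_SUBMIT_DIR            # start from the directory the job was submitted from
python3 run.py > run.log 2>&1   # redirect stdout and stderr to a log file
```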

Acknowledgments

If you used this cluster, please acknowledge it in your publications:

This work was made possible by the donation of critical hardware and resources from the AMD COVID-19 HPC Fund.

The following image can be used for posters and presentations.

AMD COVID-19 HPC Fund (image)