Overview

GROMACS is a versatile package for performing molecular dynamics, i.e. simulating the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules such as proteins, lipids and nucleic acids that have many complicated bonded interactions, but because it is extremely fast at calculating the non-bonded interactions it is also widely used for research on non-biological systems such as polymers.

The development of GROMACS is mainly funded by academic research grants. The software is free and open source, released under the GNU Lesser General Public License (LGPL).

Usage

First you need to decide on the version of the software you want to use. Use

$ module avail gromacs

to check which versions are available. We normally recommend using the latest version. For example, to load the 2021 version of GROMACS, use

$ module load gromacs/2021

For more details on using modules see our modules help guide.

Here is an example of a parallel GROMACS job run under PBS on Gadi. The example PBS job script gromacs.pbs requests 4 CPUs, 10 minutes of walltime and 500 MB of memory.

#!/bin/bash

#PBS -P a99
#PBS -l ncpus=4
#PBS -l mem=500mb
#PBS -l walltime=10:00
#PBS -l wd

# Load module, always specify version number.
module load gromacs/2021

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`. Details on:
# https://opus.nci.org.au/display/Help/PBS+Directives+Explained

# This mdrun is run over the number of cpus specified in the pbs jobscript.
mpirun gmx_mpi mdrun -s water.tpr -o water.trr -c water_out.gro -v -g water.log

To submit the job to the queueing system:

$ qsub gromacs.pbs
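
Once the job has been submitted, you can check its status with qstat, e.g.:

$ qstat -u $USER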

Double precision versions of the GROMACS executables are available with the suffix "_d". To run the above example in double precision, replace gmx_mpi with gmx_mpi_d:

mpirun -np $PBS_NCPUS gmx_mpi_d mdrun -s water.tpr -o water.trr -c water_out.gro -v -g water.log

Several example input decks can be found in the directory $GROMACS_BASE/share/gromacs/tutor/.
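
For instance, you could copy one of the tutorial directories into your own working space to experiment with. A minimal sketch, assuming the water example and an a99 project area; the sub-directory name and /scratch path are illustrative and may differ for your project and GROMACS version:

$ module load gromacs/2021
$ ls $GROMACS_BASE/share/gromacs/tutor/
$ cp -r $GROMACS_BASE/share/gromacs/tutor/water /scratch/a99/$USER/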

Plumed

Older versions of GROMACS have separate PLUMED modules, for example gromacs/2020.1-plumed; you need to load such a module to use GROMACS with PLUMED. Starting from version 2020.3, PLUMED is included in our standard installation, so gmx from gromacs/2020.3 and later includes PLUMED support.
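
A PLUMED-enabled mdrun is typically started by passing a PLUMED input file with the -plumed option. A minimal sketch, assuming a PLUMED input file named plumed.dat alongside the usual tpr input (both file names are illustrative):

module load gromacs/2020.3
mpirun gmx_mpi mdrun -s water.tpr -plumed plumed.dat -v -g water.log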

Using the GPUVOLTA queue

The corresponding GROMACS modules have a -gpuvolta suffix. For example,

$ module load gromacs/2020.3-gpuvolta

will load the gpuvolta-optimised version of GROMACS.

#!/bin/bash

#PBS -q gpuvolta
#PBS -l ncpus=24
#PBS -l ngpus=2
#PBS -l mem=40GB
#PBS -l walltime=1:00:00
#PBS -l wd

# Load module, always specify version number.
module load gromacs/2020.3-gpuvolta

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`. Details on:
# https://opus.nci.org.au/display/Help/PBS+Directives+Explained

export OMP_NUM_THREADS=$PBS_NCPUS

# Single-node run using thread-MPI: one rank with $PBS_NCPUS OpenMP threads.
gmx mdrun -ntmpi 1 -ntomp $PBS_NCPUS -v -s mysystem.tpr

Several parameters can be set explicitly to speed up GROMACS calculations on multiple GPUs. For example, the following settings are recommended in the NVIDIA blog for GROMACS 2020:

#!/bin/bash

#PBS -q gpuvolta
#PBS -l ncpus=48
#PBS -l ngpus=4
#PBS -l mem=190GB
#PBS -l walltime=1:00:00
#PBS -l wd

# Load module, always specify version number.
module load gromacs/2022-gpuvolta

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`. Details on:
# https://opus.nci.org.au/display/Help/PBS+Directives+Explained

export OMP_NUM_THREADS=12
# Enable direct GPU communication for halo exchange and PME-PP transfers,
# and run the update/constraints step on the GPU by default.
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_FORCE_UPDATE_DEFAULT_GPU=true

# 4 thread-MPI ranks (one per GPU), with non-bonded, bonded and PME work
# offloaded to the GPUs and one rank dedicated to PME.
gmx mdrun -ntmpi 4 -ntomp $OMP_NUM_THREADS -v -s mysystem.tpr -nb gpu -bonded gpu -pme gpu -npme 1 -pin on

On our test system this gives about a 40% speedup.

These settings also work well across multiple GPU nodes, for example:

#!/bin/bash

#PBS -q gpuvolta
#PBS -l ncpus=96
#PBS -l ngpus=8
#PBS -l mem=380GB
#PBS -l walltime=1:00:00
#PBS -l wd

# Load module, always specify version number.
module load gromacs/2022-gpuvolta

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`. Details on:
# https://opus.nci.org.au/display/Help/PBS+Directives+Explained

export OMP_NUM_THREADS=12
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_FORCE_UPDATE_DEFAULT_GPU=true

# 8 MPI ranks (one per GPU across two nodes), each bound to a NUMA domain
# and running 12 OpenMP threads.
mpirun -np 8 --map-by ppr:1:NUMA:PE=12 gmx_mpi mdrun -ntomp $OMP_NUM_THREADS -v -s mysystem.tpr -nb gpu -bonded gpu -pme gpu -npme 1 -pin on

On the other hand, on our test system we get slightly better performance with:

export OMP_NUM_THREADS=12
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
mpirun -np 8 --map-by ppr:1:NUMA:PE=12 gmx_mpi mdrun -ntomp $OMP_NUM_THREADS -v -s mysystem.tpr -pme gpu -npme 1

Performance depends strongly on the system being simulated, so you really need to run your own tests.
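
A quick way to compare runs is the performance summary (in ns/day) that mdrun writes at the end of each log file, for example using the log file name from the examples above:

$ grep "Performance:" water.log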

Note that the gmx_mpi executables are only available starting from gromacs/2020.3-gpuvolta; earlier versions can only run on a single GPU node.