Overview

NAMD3 is the 2024 release of the NAMD program. It introduces the new features described at https://www.ks.uiuc.edu/Research/namd/3.0/announce.html, most notably enhanced GPU support through the new GPU-resident mode for NVIDIA and compatible AMD GPUs.

How to use

We provide the following builds of NAMD3:

  • namd3 - multinode
  • namd3-node-gpu - single node with GPU support
  • namd3-netlrts - multicopy, multinode capable, with GPU support
  • namd3-plumed - multinode, with PLUMED support

An example PBS script to run NAMD3 with mpirun, without GPU support (i.e. the namd3 and namd3-plumed binaries), will look like this:

#!/bin/bash

#PBS -l walltime=20:00:00
#PBS -l mem=256GB
#PBS -l ncpus=96
#PBS -l software=namd
#PBS -l wd

# Load module, always specify version number.
module load namd/3.0.1

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`

mpirun -np $PBS_NCPUS namd3 input.namd > output


NAMD3 on GPUs has been optimised for the single-node build, especially with the new GPU-resident mode. A sample PBS script for it will look like this:

#!/bin/bash

#PBS -q gpuvolta
#PBS -l walltime=20:00:00
#PBS -l mem=190GB
#PBS -l ngpus=4
#PBS -l ncpus=48
#PBS -l software=namd
#PBS -l wd

# Load module, always specify version number.
module load namd/3.0.1

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`

namd3-node-gpu +p $PBS_NCPUS +setcpuaffinity +devices 0,1,2,3 input.namd > output

The new GPU-resident mode is far superior to the previous GPU-offload mode. We could not find a use case in which the previous namd3-gpu build was still effective, so it has been removed.
Note that +p does not need to be $PBS_NCPUS, as most of the heavy lifting is done by the GPUs. The number of CPUs must be at least the number of GPUs, since each GPU needs a CPU core to drive it. You may wish to experiment to find the sweet spot.
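One way to find that sweet spot is to benchmark the same input at several thread counts. The sketch below only prints the candidate command lines (the input file name and the thread-count range are placeholders, not site recommendations); each printed command would be run in its own job and the timings compared.

```shell
# Hypothetical sweep over CPU-thread counts for the GPU-resident build
# on a 4-GPU node. The loop only prints the candidate commands; run
# each one in a separate job and compare the reported timings.
for p in 4 8 16 24 48; do
    echo "namd3-node-gpu +p ${p} +setcpuaffinity +devices 0,1,2,3 input.namd > output.p${p}"
done
```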


To run multicopy NAMD3, e.g. for replica exchange, especially with GPU support, we now provide the namd3-netlrts binary. It should only be used by experienced users. The script will look something like this:

#!/bin/bash

#PBS -l walltime=20:00:00
#PBS -l mem=256GB
#PBS -l ncpus=96
#PBS -l software=namd
#PBS -l wd

# Load module, always specify version number.
module load namd/3.0.1

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`

# create separate folders for the output of M=8 replicas
mkdir -p output
(cd output; mkdir -p {0..7})

# run a simulation with M=8 replicas
charmrun +p $PBS_NCPUS ++mpiexec ++remote-shell mympiexec namd3-netlrts +ppn 11 +pemap 1-47:12.11 +commap 0-47:12.1 +setcpuaffinity +replicas 8 +devicesperreplica 1 apoa1.namd +stdout output/%d/apoa1.%d.log
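The +ppn, +pemap and +commap values above follow from simple per-replica arithmetic: 96 cores split over 8 replicas gives 12 cores each, of which 11 are worker threads (PEs) and 1 is a communication thread, and the maps are applied per node (48-core nodes are an assumption inferred from the 0-47 ranges). A sketch of that arithmetic:

```shell
# Reconstruct "+ppn 11 +pemap 1-47:12.11 +commap 0-47:12.1" from the
# job geometry (assumes 48-core nodes; pemap/commap repeat per node).
CORES_PER_NODE=48
CORES_PER_REPLICA=12                          # 96 cores / 8 replicas
PPN=$(( CORES_PER_REPLICA - 1 ))              # 11 worker threads per replica
# start-end:stride.run syntax: place "run" threads every "stride" cores
PEMAP="1-$(( CORES_PER_NODE - 1 )):${CORES_PER_REPLICA}.${PPN}"
COMMAP="0-$(( CORES_PER_NODE - 1 )):${CORES_PER_REPLICA}.1"
echo "+ppn ${PPN} +pemap ${PEMAP} +commap ${COMMAP}"
```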

See the NAMD3 release notes for an explanation of mympiexec.
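As a rough illustration only (the exact wrapper is defined in the release notes, and the argument handling below is an assumption): charmrun invokes mympiexec as if it were a remote shell, so the wrapper's job is to discard the shell-style host arguments and re-launch the remaining command line under the site's MPI launcher.

```shell
# Sketch of a mympiexec-style wrapper (argument layout is an
# assumption; consult the NAMD release notes for the real script).
mympiexec() {
    shift 2              # drop the two host-related arguments charmrun passes
    echo mpiexec "$@"    # a real wrapper would: exec mpiexec "$@"
}

# Hypothetical invocation, shaped like charmrun's remote-shell call:
mympiexec hostname dummy namd3-netlrts +ppn 11 apoa1.namd
```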