Overview

NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.

News and other information can be found on the NWChem website: https://nwchemgit.github.io/.

Usage

First you need to decide on the version of the software you want to use. Use

$ module avail nwchem

to check which versions are available. We normally recommend using the latest available version. For example, to load version 7.0.0 of NWChem use

$ module load nwchem/7.0.0

For more details on using modules see our modules help guide.

Here is an example of a parallel NWChem job run under PBS on Gadi. The example PBS job script nwchem_pbs requests 480 CPUs, 1500 GB of memory, and a wall clock time limit of 10 hours.

#!/bin/bash

#PBS -l ncpus=480
#PBS -l mem=1500GB
#PBS -l walltime=10:00:00
#PBS -l jobfs=3000GB
#PBS -l wd

# Load module, always specify version number.
module load nwchem/7.0.0

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`. Details on:
# https://opus.nci.org.au/display/Help/PBS+Directives+Explained

mpirun nwchem input > output

To submit the job to the queueing system:

$ qsub nwchem_pbs

SCF, DFT calculations and geometry optimisation

NWChem uses conventional SCF by default and writes all integrals to disk. This is not efficient on modern HPC machines. We strongly suggest adding the DIRECT keyword to your SCF or DFT section. (For some systems it may be useful to look at the SEMIDIRECT option; see the NWChem manual for details. However, DIRECT worked well in all cases we tested.) For DFT calculations it is also useful to add the GRID NODISK keyword, which stops NWChem writing and reading grid points to/from disk.
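As a concrete illustration, here is a minimal sketch of a DFT geometry optimisation input using these keywords. The molecule, basis set and functional are placeholders chosen for this example; adjust them to your own calculation. For a plain SCF calculation, the same DIRECT keyword goes inside the scf ... end block instead.

title "example dft optimisation"

geometry
  O   0.000   0.000   0.000
  H   0.000   0.757   0.587
  H   0.000  -0.757   0.587
end

basis
  * library 6-31G*
end

# DIRECT recomputes integrals on the fly rather than storing them on disk;
# GRID NODISK keeps the DFT grid points in memory instead of writing them to disk.
dft
  xc b3lyp
  direct
  grid nodisk
end

task dft optimize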

Memory requirements 

Most of the Gadi nodes have 4 GB of memory per CPU, so the following memory statement in the input file is appropriate (stack, heap and global together total 4000 MB per process):

memory stack 1600 mb heap 400 mb global 2000 mb

Multi-reference CCSD(T) calculations (TCE engine)

Memory requirements for TCE calculations are different. For nodes with 4 GB of memory per CPU, the following seems to work well (again totalling 4000 MB per process, but with a larger stack and a smaller global allocation):

memory stack 2200 mb heap 400 mb global 1400 mb
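
For reference, here is a minimal sketch of how this memory line might sit in a TCE CCSD(T) input. The geometry and basis set are placeholders for illustration only.

memory stack 2200 mb heap 400 mb global 1400 mb

geometry
  O   0.000   0.000   0.000
  H   0.000   0.757   0.587
  H   0.000  -0.757   0.587
end

basis
  * library cc-pvdz
end

tce
  ccsd(t)
end

task tce energy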