Gaussian 16 is the latest in the Gaussian series of programs. It provides state-of-the-art capabilities for electronic structure modelling and can predict the energies, molecular structures, vibrational frequencies and molecular properties of molecules and reactions in a wide variety of chemical environments. The latest version is Revision C.01.

A summary of what's new in Gaussian 16 is available online.

How to use

To use Gaussian 16, check the available versions with module avail and then load the appropriate version:

 $ module load gaussian/g16c01

To use the default (latest) version of Gaussian, load the module without a version suffix:

$ module load gaussian

Because the Gaussian program comprises a myriad of methods and functionality, there is no typical Gaussian jobscript, and we strongly advise users to give careful thought to the resources needed for their system size (as measured by the number of basis functions) and calculation type. If you are unsure of the resource requirements for your calculation, refer to the Efficiency Considerations section of the Gaussian user guide.

The main resources to consider are memory, number of processors and scratch space which can be specified in various ways: see the equivalencies section in the Link 0 commands.
The default Gaussian memory setting is low, to allow quick interactive tests of your input deck; for production runs you will probably need to increase your memory request. You can request up to 48 CPUs (one Gadi node) for parallel Gaussian jobs, but always choose the number of processors appropriate for your job (this knowledge comes with experience).
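As a sketch of those equivalent specifications, here are three ways to set the same memory and CPU-list values (the values 20 GiB and CPUs 0-11 are illustrative only; consult the Link 0 equivalencies table in the Gaussian documentation for the full list):

```shell
# Three equivalent ways to request 20 GiB of memory and CPUs 0-11:
#
#   1) Link 0 lines in the input deck:   %mem=20GB  and  %cpu=0-11
#   2) g16 command-line options:         g16 -m=20gb -c=0-11 < inputdeck > outputfile
#   3) Environment variables, exported before running g16:
export GAUSS_MDEF=20GB
export GAUSS_CDEF=0-11
```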

On Gadi there is a processor-pinning issue such that, if you are not using a full node, the %nproc option will not work efficiently, so the CPU list must be set explicitly.

Here is an example 12-CPU (undercommitted node) parallel Gaussian job run under PBS. This example requests 24 hours of walltime, 24 GiB of memory and 200 GiB of temporary scratch space.

#!/bin/bash
#PBS -P your_project_code
#PBS -l walltime=24:00:00
#PBS -l ncpus=12
#PBS -l mem=24GB
#PBS -l jobfs=200GB
#PBS -l software=g16
#PBS -l wd
# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` among the PBS
# directives above if the job needs access to `/scratch/ab12/` and `/g/data/yz98/`.
# Load module, always specify version number.
module load gaussian/g16c01
cpulist=$(grep Cpus_allowed_list: /proc/self/status | awk '{print $2}')
export GAUSS_CDEF="$cpulist"
g16 < inputdeck > outputfile 2>&1

If using the script above, %nproc must be removed from the Gaussian input file, which for the case above should contain:

%mem=(24 - overhead for your method)GB (see below)
# method/basis jobtype MaxDisk=200GB (only relevant for post-SCF)
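
For concreteness, a minimal input deck consistent with the job script above might look like the following. The molecule, method and basis set here are purely illustrative, not a recommendation; note that there is no %nproc line, since the script sets GAUSS_CDEF instead.

```
%mem=20GB
# MP2/6-31G(d) Opt MaxDisk=200GB

Water geometry optimisation (illustrative example)

0 1
O    0.000000    0.000000    0.117300
H    0.000000    0.757200   -0.471200
H    0.000000   -0.757200   -0.471200

```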

Typically, PBS requires 2-4 GiB of memory in addition to what is specified in the Gaussian input, to cover system libraries and other overheads. For post-SCF methods the overhead can be considerably larger, depending on the size of the system.
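The arithmetic can be sketched as a small helper (a hypothetical convenience function, not part of any Gaussian or PBS tooling):

```python
def gaussian_mem_gb(pbs_mem_gb: int, overhead_gb: int = 4) -> int:
    """Return the %mem value (in GB) to put in the input deck, given the
    PBS memory request and an assumed 2-4 GiB system overhead."""
    if pbs_mem_gb <= overhead_gb:
        raise ValueError("PBS memory request leaves nothing for Gaussian")
    return pbs_mem_gb - overhead_gb

# For the 24 GiB example job above, with the default 4 GiB overhead:
print(gaussian_mem_gb(24))  # 20 -> %mem=20GB in the input deck
```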

For scratch disk usage by post-SCF methods, the MaxDisk keyword on the route card should correspond to the scratch space available in the directory pointed to by the GAUSS_SCRDIR environment variable. Note that for scratch requests of less than 400 GiB the space will come out of local jobfs disk and will take the value set by #PBS -l jobfs.

For Gaussian jobs requiring more than 400 GiB of scratch space you will need to use your project's scratch area: explicitly set (setenv/export) GAUSS_SCRDIR to /scratch/<proj>/<username>/tmp after loading the Gaussian module.
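A minimal sketch of that step, to be run after `module load gaussian/g16c01`. It assumes $PROJECT holds your project code (set in the Gadi environment); substitute your own values if not:

```shell
# $PROJECT is assumed to hold your project code in the Gadi environment.
export GAUSS_SCRDIR="/scratch/${PROJECT}/${USER}/tmp"
# Create the directory before running g16 if it does not already exist:
# mkdir -p "$GAUSS_SCRDIR"
```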

Authors: Yue Sun, Rika Kobayashi, Mohsin Ali 