Overview

Gaussian 09 predicts the energies, molecular structures, vibrational frequencies and molecular properties of molecules and reactions in a wide variety of chemical environments. Gaussian 09’s models can be applied to both stable species and compounds which are difficult or impossible to observe experimentally (e.g., short-lived intermediates and transition structures).

The Gaussian 09 User's Reference is available as a tarball here. Changes between Gaussian 16 and Gaussian 09 are available online here.

How to use


To use Gaussian 09, load the module using:

$ module load gaussian/g09e01
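
To check which Gaussian versions are installed and to confirm that the module has loaded, the standard environment-modules commands can be used (the exact version list will vary between systems):

$ module avail gaussian     # list the installed Gaussian versions
$ module list               # confirm that gaussian/g09e01 is loaded
$ which g09                 # check that the g09 executable is on your PATH
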
Note
Note that because the Gaussian program is made up of a myriad of methods and functionality, there is no typical Gaussian jobscript. We strongly advise you to give careful thought to the resources needed for your system size (as measured by the number of basis functions) and calculation type. If you are unsure of the resource requirements for your calculation, refer to the Efficiency Considerations section of the Gaussian user guide.

The main resources to consider are memory, the number of processors, and scratch space. These can be specified in various ways: see the equivalencies section of the Link 0 commands in the Gaussian user guide.
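
As a sketch of one such equivalency (an assumption to check against the Link 0 section of the user guide for your version): the per-job %Mem and %NProcShared Link 0 commands can instead be supplied as defaults through a Default.Route file in the run directory, using the -M- and -P- directives:

-M- 8GB
-P- 48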

The default Gaussian memory setting is deliberately low, to allow quick interactive tests of your input deck; for production runs you will probably need to increase your memory request. You can request up to 48 CPUs (one Gadi node) for parallel Gaussian jobs, but always choose the number of processors best suited to your job (this knowledge will come with experience).

Here is an example 48-CPU parallel Gaussian job run under PBS. This example requests 24 hours of walltime, 12 GiB of memory and 200 GiB of temporary scratch space:

#!/bin/bash
 
#PBS -P your_project_code
#PBS -l walltime=24:00:00
#PBS -l ncpus=48
#PBS -l mem=12GB
#PBS -l jobfs=200GB
#PBS -l software=g09
#PBS -l wd
 
# Load module, always specify version number.
module load gaussian/g09e01
 
# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`
 
g09 < inputdeck > outputfile 2>&1
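
Assuming the script above has been saved as g09_job.sh (a file name chosen here purely for illustration), it can be submitted and monitored with the usual PBS commands:

$ qsub g09_job.sh      # submit the job; PBS prints the job ID
$ qstat -u $USER       # check the state of your queued and running jobs
$ qdel <jobid>         # cancel a job if needed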

If using the script above, the resource requests must be reflected in the Gaussian input file; for the case above, they correspond to:

%mem=(12-overhead for your method)Gb (see below)
%NprocShared=48
%chk=checkpoint.chk
# method/basis jobtype Maxdisk=200Gb (only relevant for post-SCF)
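
Putting these directives together, a complete input deck places the Link 0 commands first, then the route card, a title line and the molecule specification, with each section separated by a blank line (the file must also end with a blank line). Below is a minimal sketch only; the water geometry and the B3LYP/6-31G(d) model chemistry are illustrative, not a recommendation:

%Mem=9GB
%NProcShared=48
%Chk=water.chk
# B3LYP/6-31G(d) Opt MaxDisk=200GB

Water geometry optimisation

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200
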

Typically, PBS requires 2-4 GiB of memory overhead on top of what is specified in the Gaussian input, to cover system libraries and other overheads. For post-SCF methods the overhead can be considerably more, depending on the size of the system.
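
As a concrete, purely illustrative pairing: a PBS request of mem=12GB with an assumed 3 GiB overhead leaves roughly 9 GiB for Gaussian itself, i.e.

! assuming ~3 GiB of the 12 GiB PBS request is reserved for overheads
%Mem=9GB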

For scratch disk usage in post-SCF methods, the Maxdisk keyword on the route card should match the scratch space available in the directory set by the GAUSS_SCRDIR environment variable. Note that for requests below 400 GiB the scratch space will come out of the local jobfs disk and will take the value set in #PBS -l jobfs. Gaussian jobs requiring more than 400 GiB of scratch space will need to use your project's scratch space instead: explicitly setenv/export GAUSS_SCRDIR to /scratch/<proj>/<username>/tmp after loading the Gaussian module.
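
For instance, a job needing more than 400 GiB of scratch might include the following in its jobscript (the tmp subdirectory name is a convention, not a requirement):

module load gaussian/g09e01
export GAUSS_SCRDIR=/scratch/<proj>/<username>/tmp
mkdir -p "$GAUSS_SCRDIR"    # make sure the scratch directory exists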

Note
Note that on Gadi an issue with processor pinning means that, if you are not using the full node, the %nproc option will not work efficiently.


Authors: Yue Sun, Mohsin Ali, Rika Kobayashi