Usage

This release of GAMESS-US is available on Gadi. To run it, users must load the appropriate environment via

$ module load gamess/2020-06-R1

The module provides the following environment:

setenv          GMSPATH /apps/gamess/2020-06-R1
prepend-path    PATH /apps/gamess/2020-06-R1/.
conflict        gamess
module load openmpi/4.0.2
setenv          GAMESS_BASE /apps/gamess/2020-06-R1
setenv          GAMESS_ROOT /apps/gamess/2020-06-R1
setenv          GAMESS_VERSION 2020-06-R1

See here for more information about modules.

Below is an example runscript, gamess_submit.pbs, for a parallel GAMESS-US job run under PBS on Gadi. It uses a fictitious project a99, requests 4 CPUs, 30 minutes of wall clock time, and 4 GB of memory. The program writes intermediate files to the $PBS_JOBFS scratch directory, which in this example is set to 1 GB.

#!/bin/bash

#PBS -P a99
#PBS -l walltime=30:00
#PBS -l ncpus=4
#PBS -l mem=4gb
#PBS -l jobfs=1gb
#PBS -l wd

# Load module, always specify version number.
module load gamess/2020-06-R1

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`. Details on:
# https://opus.nci.org.au/display/Help/PBS+Directives+Explained

INPUT="molecule1.inp"
OUTPUT=${INPUT%.*}.log

rungms $INPUT $PBS_NCPUS >& $OUTPUT
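In the runscript above, the output filename is derived from the input filename with bash parameter expansion, which strips the shortest trailing ".extension". A quick sketch of how that line behaves (the filename is illustrative):

```shell
#!/bin/bash
# ${INPUT%.*} removes the shortest match of ".*" from the end of $INPUT,
# i.e. the file extension, so "molecule1.inp" becomes "molecule1".
INPUT="molecule1.inp"
OUTPUT=${INPUT%.*}.log
echo "$OUTPUT"    # molecule1.log
```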

To submit the job to the queueing system:

$ qsub gamess_submit.pbs

If you specify ncpus greater than the number of cores per node, please keep it a multiple of the cores-per-node count (a multiple of 48 for the Gadi normal queue).
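A guard like the following can catch a mis-sized multi-node request early. This is a sketch, not part of the official runscript; the 48 cores per node figure applies to the Gadi normal queue:

```shell
#!/bin/bash
# Hypothetical check: if more than one node's worth of cores is requested,
# insist on an exact multiple of the per-node core count (48 on Gadi normal).
CORES_PER_NODE=48
NCPUS=${PBS_NCPUS:-96}   # example value when testing outside a PBS job
if [ "$NCPUS" -gt "$CORES_PER_NODE" ] && [ $((NCPUS % CORES_PER_NODE)) -ne 0 ]; then
    echo "ncpus=$NCPUS is not a multiple of $CORES_PER_NODE" >&2
    exit 1
fi
```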

Some examples of input decks can be found in the directory $GMSPATH/tests. Documentation on input to GAMESS is in $GMSPATH/docs-input.txt.

Four versions of the main GAMESS binary are provided. All of them are compiled with Intel Parallel Studio 2020.x.xxx compilers and linked against the matching MKL libraries, and for all of them the network DDI interface is built in MPI 'mixed' mode using the OpenMPI library, version 4.0.2. The binaries differ as follows:

  • gamess.00.x - the main binary, built with GMS_MSUCC=false, GMS_OPENMP=false, GMS_LIBXC=true, VB2000=true, and no NBO support.
  • gamess.01.x - the binary for MSU Coupled Cluster calculations (not parallelized in any way), built with GMS_MSUCC=true, GMS_OPENMP=false, GMS_LIBXC=false, VB2000=false, and no NBO support.
  • gamess.02.x - the binary for methods that support hybrid OpenMP/MPI parallelism, built with GMS_MSUCC=false, GMS_OPENMP=true, GMS_LIBXC=true, VB2000=false, and no NBO support.
  • gamess.nbo7.x - the binary built with GMS_MSUCC=false, GMS_OPENMP=false, GMS_LIBXC=true, VB2000=true, and linked against NBO version 7.0, which it supports.

The default executable is gamess.00.x. To use an alternative executable, provide its version number as the third command-line argument to the 'rungms' script. For example, to use gamess.02.x, modify the line as

rungms $INPUT $PBS_NCPUS 02 >& $OUTPUT

Note that GAMESS runs the conventional SCF algorithm by default. On modern CPU architectures, such as Gadi's nodes, the direct SCF algorithm is noticeably faster than the conventional one. To invoke the direct SCF algorithm, users must specify

$SCF DIRSCF=.TRUE. $END

in the input file.
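For context, below is a minimal sketch of where the $SCF group sits in an input deck. The molecule, basis, and geometry are illustrative only, not taken from this guide; note that GAMESS group names must not start in column 1:

```
! Illustrative RHF single-point energy input with direct SCF.
 $CONTRL SCFTYP=RHF RUNTYP=ENERGY $END
 $SCF    DIRSCF=.TRUE. $END
 $BASIS  GBASIS=STO NGAUSS=3 $END
 $DATA
Water, RHF/STO-3G single point (illustrative geometry)
C1
O   8.0   0.0000   0.0000   0.0000
H   1.0   0.0000   0.7572   0.5865
H   1.0   0.0000  -0.7572   0.5865
 $END
```

Real input decks to start from are available in $GMSPATH/tests, as noted above.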

The VB2000 calculations can run on a single node only.
To run a calculation with an NBO part, the 'rungms' call in your script should be

rungms $INPUT $PBS_NCPUS nbo7 >& $OUTPUT

The NBO part of the calculation is not parallelized.