Usage

This release of GAMESS-US is available on Gadi. To run it, users must load the appropriate environment via

$ module load gamess/2022-R2

The module provides the following environment:

setenv          GMSPATH /apps/gamess/2022-R2
prepend-path    PATH /apps/gamess/2022-R2
conflict        gamess
module load openmpi/4.1.3

See here for more information about modules.
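To see which GAMESS versions are installed, or to inspect exactly what a module sets in your environment, the standard module commands can be used, e.g.

$ module avail gamess
$ module show gamess/2022-R2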

Below is an example of a parallel GAMESS-US job script, gamess_submit.pbs, to be run under PBS on Gadi. It uses a fictitious project a99, requests 4 CPUs, 30 minutes of wall clock time, and 4GB of memory. The program uses the $PBS_JOBFS scratch directory for intermediate files, which in this example is set to 1GB.

#!/bin/bash

#PBS -P a99
#PBS -l walltime=30:00
#PBS -l ncpus=4
#PBS -l mem=4gb
#PBS -l jobfs=1gb
#PBS -l wd

# Load module, always specify version number.
module load gamess/2022-R2

# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`. Details on:
# https://opus.nci.org.au/display/Help/PBS+Directives+Explained

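# Name of the GAMESS input file; the .log output name is derived from it below.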
INPUT="molecule1.inp"
OUTPUT=${INPUT%.*}.log

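# Run GAMESS: arguments are the input file, the executable version (00 = default),
# and the number of CPUs (see the argument descriptions further down this page).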
rungms $INPUT 00 $PBS_NCPUS >& $OUTPUT

To submit the job to the queueing system:

$ qsub gamess_submit.pbs
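Once submitted, the job status can be checked with standard PBS commands, e.g.

$ qstat -u $USER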

If you specify ncpus greater than the number of cores per node, please keep it a multiple of the cores-per-node number (a multiple of 48 for the Gadi normal queue).
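For example, a two-node job in the normal queue could request resources along the following lines (the memory value here is illustrative only):

# Request two full 48-core nodes:
#PBS -l ncpus=96
#PBS -l mem=380gb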

Some examples of input decks can be found in the directory $GMSPATH/tests. Documentation on input to GAMESS is in $GMSPATH/docs-input.txt.
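The bundled examples can be copied into your working directory and used as the input file in the script above. The exact layout and file names under $GMSPATH/tests may differ between releases, so check with ls first:

$ ls $GMSPATH/tests
$ cp $GMSPATH/tests/standard/exam01.inp .    # exam01.inp is an illustrative name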

Three versions of the main GAMESS binary are provided. All of them are compiled with the Intel Parallel Studio 2021.4.0 compilers and linked to the matching MKL libraries. For all of them, the DDI network interface is built in MPI 'mixed' mode using the OpenMPI library, version 4.1.3. The distinctions between the binaries are as follows:

  • gamess.00.x - the default GAMESS MPI executable, built with GMS_LIBXC=true, NBO=true (NBO 7.0 support) and VB2000=true, with all other configurable GAMESS settings set to false;
  • gamess.01.x - the GAMESS MPI executable for MSU coupled-cluster calculations (not parallelized in any way), built with GMS_MSUCC=true, with all other configurable GAMESS settings set to false;
  • gamess.02.x - the GAMESS executable for methods that support hybrid MPI/OpenMP parallelism, built with GMS_OPENMP=true, GMS_LIBXC=true, NBO=true (NBO 7.0 support) and VB2000=true, with all other configurable GAMESS settings set to false.

IMPORTANT: the order of the 'rungms' script arguments is set in accordance with that used in the GAMESS distribution. This means the following:

  • The first argument is the name of the input file.
  • The second argument is the two-digit version of the executable (the default is 00, which corresponds to gamess.00.x). To use an alternative executable, you must provide its version number as the second command-line argument to the 'rungms' script; e.g., to use gamess.02.x you can modify the last line of the script above to
rungms $INPUT 02 $PBS_NCPUS >& $OUTPUT
    (a hybrid MPI/OpenMP sketch for gamess.02.x is given after this list).
  • The third argument is the number of CPUs to use for computing. If not given, the default value of 1 will be used.
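For the hybrid MPI/OpenMP executable gamess.02.x, the OpenMP thread count is normally controlled through OMP_NUM_THREADS. The following is a sketch only, assuming the rungms script on Gadi honours OMP_NUM_THREADS and that the third argument then counts MPI ranks rather than total cores; verify against the site script before relying on it:

# Hypothetical hybrid layout: split the requested CPUs into MPI ranks x OpenMP threads
export OMP_NUM_THREADS=2
rungms $INPUT 02 $((PBS_NCPUS / OMP_NUM_THREADS)) >& $OUTPUT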

Note that GAMESS runs the conventional SCF algorithm by default. On modern CPU architectures, such as Gadi's nodes, the direct SCF algorithm is noticeably faster than the conventional one. To invoke the direct SCF algorithm, the user must specify

$SCF DIRSCF=.TRUE. $END

in the input file.
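For reference, a minimal complete input deck with direct SCF enabled might look like the following; the molecule, geometry, and basis set are illustrative assumptions only (note that GAMESS group names must begin in column 2):

 $CONTRL SCFTYP=RHF RUNTYP=ENERGY $END
 $SCF DIRSCF=.TRUE. $END
 $BASIS GBASIS=N31 NGAUSS=6 $END
 $DATA
Water RHF/6-31G single-point energy (illustrative geometry)
C1
O 8.0  0.000  0.000  0.000
H 1.0  0.757  0.586  0.000
H 1.0 -0.757  0.586  0.000
 $END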