Overview

ESMF stands for Earth System Modeling Framework. It is a high-performance, flexible software infrastructure for building and coupling weather, climate, and related Earth science applications. It defines an architecture for composing complex, coupled modeling systems and includes data structures and utilities for developing individual models.

The basic idea behind ESMF is that complicated applications should be broken up into coherent pieces, or components, with standard calling interfaces. A component may represent a physical domain, such as the atmosphere or ocean, or a function such as a coupler or an I/O system. ESMF also includes toolkits for building components and applications, covering regridding, calendar management, logging and error handling, and parallel communications.

More information: http://www.earthsystemmodeling.org/

How to use


You can check the versions installed in Gadi with a module query:

$ module avail esmf

We normally recommend using the latest available version, and we always recommend specifying the version number with the module command:

$ module load esmf/8.0.1

For more details on using modules see our software applications guide.
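If you want to see what a particular version provides before loading it, for example which compiler and MPI builds it depends on and which environment variables it sets, you can inspect it with the standard module commands (a minimal sketch; the exact output depends on how the module is defined on Gadi):

$ module show esmf/8.0.1
$ module list

The first command prints the paths and variables the module would set, and the second confirms which modules are currently loaded in your session.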

An example PBS job submission script named esmf_job.sh is provided below.

It requests 48 CPU cores, 128 GiB of memory, and 400 GiB of local disk (jobfs) on a Gadi compute node in the normal queue, for exclusive use for 30 minutes, charged against the project a00. It also asks PBS to start the job in the directory from which it was submitted (the -l wd directive). Save this script in the working directory from which the analysis will be run.

 To change the number of CPU cores, memory, or jobfs required, simply modify the appropriate PBS resource requests at the top of this file according to the information in our queue structure guide.

Note that if your application does not run in parallel, you should set the number of CPU cores to 1 and reduce the memory and jobfs requests accordingly, to avoid wasting compute resources.
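For example, a serial run might reduce the resource requests to something like the following (the values shown are placeholders only; choose them to suit your application):

#PBS -l ncpus=1
#PBS -l mem=4GB
#PBS -l jobfs=10GB

The full 48-core example script, esmf_job.sh, is shown below.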

#!/bin/bash
 
#PBS -P a00
#PBS -q normal
#PBS -l ncpus=48
#PBS -l mem=128GB
#PBS -l jobfs=400GB
#PBS -l walltime=00:30:00
#PBS -l wd
 
# Load required modules, always specifying the version number.
module load openmpi/4.0.2
module load esmf/8.0.1
 
# Must include `#PBS -l storage=scratch/ab12+gdata/yz98` if the job
# needs access to `/scratch/ab12/` and `/g/data/yz98/`
 
# Run ESMF application
mpirun -np $PBS_NCPUS <ESMF exe and options>
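As a concrete illustration, if your ESMF executable were called my_esmf_app and took a run-time configuration file called my_config.rc (both names are hypothetical placeholders for your own application), the last line of the script might look like:

mpirun -np $PBS_NCPUS ./my_esmf_app my_config.rc

Here $PBS_NCPUS matches the number of CPU cores requested with -l ncpus, as used in the example script above.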

For more information about ESMF options, see the ESMF documentation: http://earthsystemmodeling.org/doc/

To submit the job, use the PBS command:

$ qsub esmf_job.sh
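After submission, you can follow the job with the usual PBS commands; when it finishes, its standard output and error are written back to the submission directory, by default in files named after the script, e.g. esmf_job.sh.o<jobid> and esmf_job.sh.e<jobid>:

$ qstat -u $USER
$ cat esmf_job.sh.o<jobid>

The first command lists your queued and running jobs; the second shows the job's standard output once it has completed (replace <jobid> with the numeric id returned by qsub).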


Authors: Mohsin Ali