Suite Run Directory
At the module (site) level, the suite run subdirectories inside `~/cylc-run` are set up as symlinks to /scratch. For example:
```
$ ls -l ~/cylc-run/u-cz535/run4
total 20
drwxr-xr-x 2 abc111 xy99 4096 Oct 12 16:14 bin
-rw-r--r-- 1 abc111 xy99 1563 Oct 12 16:14 flow.cylc
lrwxrwxrwx 1 abc111 xy99   44 Oct 12 16:14 log -> /scratch/xy99/abc111/cylc-run/u-cz535/run4/log
drwxr-xr-x 2 abc111 xy99 4096 Oct 12 16:14 opt
-rw-r--r-- 1 abc111 xy99  208 Oct 12 16:14 rose-suite.conf
-rw-r--r-- 1 abc111 xy99   69 Oct 12 16:14 rose-suite.info
lrwxrwxrwx 1 abc111 xy99   47 Oct 12 16:14 share -> /scratch/xy99/abc111/cylc-run/u-cz535/run4/share
lrwxrwxrwx 1 abc111 xy99   46 Oct 12 16:14 work -> /scratch/xy99/abc111/cylc-run/u-cz535/run4/work
```
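The mechanics here are ordinary symbolic links. A minimal, self-contained sketch (using a temporary directory in place of /scratch, with made-up suite and directory names) shows how a subdirectory of the run directory can live on another filesystem while still appearing under `~/cylc-run`:

```shell
# Mimic the symlinked-run-directory layout with plain ln -s, entirely
# inside a temporary directory; the real setup points at /scratch instead.
# All paths and names below are illustrative.
tmp=$(mktemp -d)
mkdir -p "$tmp/scratch/cylc-run/u-example/run1/log"   # stands in for /scratch
mkdir -p "$tmp/home/cylc-run/u-example/run1"          # stands in for ~/cylc-run
ln -s "$tmp/scratch/cylc-run/u-example/run1/log" \
      "$tmp/home/cylc-run/u-example/run1/log"
# The "home" path now resolves to the "scratch" location
readlink -f "$tmp/home/cylc-run/u-example/run1/log"
```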
To override this default configuration, add the modification to the user-level configuration `~/.cylc/flow/global.cylc`. The example below symlinks the suite run subdirectories to /g/data, with the exception of the share directory, which is symlinked to /scratch.
```
#!Jinja2
[install]
    source dirs = ~/cylc-src, ~/roses
    [[symlink dirs]]
        [[[gadi]]]
            log = /g/data/{{environ['PROJECT']}}/{{environ['USER']}}
            share = /scratch/{{environ['PROJECT']}}/{{environ['USER']}}
            work = /g/data/{{environ['PROJECT']}}/{{environ['USER']}}
        [[[localhost]]]
            log = /g/data/{{environ['PROJECT']}}/{{environ['USER']}}
            share = /scratch/{{environ['PROJECT']}}/{{environ['USER']}}
            work = /g/data/{{environ['PROJECT']}}/{{environ['USER']}}
```
With this configuration in place, the symlinks in a newly created run directory resolve as follows:
```
$ readlink -f ~/cylc-run/u-cz535/run5/log
/g/data/xy99/abc111/cylc-run/u-cz535/run5/log
$ readlink -f ~/cylc-run/u-cz535/run5/share
/scratch/xy99/abc111/cylc-run/u-cz535/run5/share
$ readlink -f ~/cylc-run/u-cz535/run5/work
/g/data/xy99/abc111/cylc-run/u-cz535/run5/work
```
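Because the `global.cylc` above builds its targets from `{{environ['PROJECT']}}` and `{{environ['USER']}}`, you can preview the paths the template will produce with plain shell expansion. A sketch, using the example values from the listings above (on Gadi, PROJECT and USER are normally set by your login environment):

```shell
# Preview the symlink targets the Jinja2 template would generate.
# The values here are the illustrative ones used in this section.
PROJECT=xy99
USER=abc111
echo "log   -> /g/data/$PROJECT/$USER"
echo "share -> /scratch/$PROJECT/$USER"
echo "work  -> /g/data/$PROJECT/$USER"
```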
Platforms
There are two platforms available for running Cylc8 suites at NCI.
- localhost: executes background jobs in the persistent session.
- pbs: submits jobs to PBS.
An example Cylc8 suite, u-cz535, is available for testing; its flow.cylc file is shown below. It defines two sections, [[local]] and [[HPC]], under [runtime]. The [[local]] section uses the 'localhost' platform to execute background tasks, while the [[HPC]] section uses the 'pbs' platform to submit tasks as PBS jobs. In most cases, these two platforms are all you need to run your Cylc8 suite.
```
#!Jinja2
[scheduling]
    [[graph]]
        R1 = """
            get_stream => build_stream => run_stream
        """
[runtime]
    [[root]]
        [[[environment]]]
            STREAM_ROOT = $CYLC_SUITE_SHARE_DIR/stream
            PROJECTCODE = fp0
    [[local]]
        inherit = root
        platform = localhost
    [[HPC]]
        inherit = root
        platform = pbs
        [[[directives]]]
            # project code that is billed for CPU hours
            -P = fp0
            # job spec (must be all in one line)
            # make sure you include storage=gdata/hr22 (for rose/cylc apps)
            # and any other project codes you use
            -l = "walltime=00:10:00,ncpus=1,mem=1gb,storage=gdata/hr22+gdata/fp0+scratch/fp0"
            # queue to submit jobs to. Note any tasks requiring external network
            # access can override this to copyq
            -q = "normal"
    [[get_stream]]
        inherit = local
        script = '''
            if [ -e $STREAM_ROOT ]; then
                echo "WARNING: STREAM_ROOT already exists, removing it ($STREAM_ROOT)"
                rm -rf $STREAM_ROOT
            fi
            hostname
            echo "Creating: $STREAM_ROOT"
            mkdir -p $STREAM_ROOT
            cd $STREAM_ROOT
            get_stream
        '''
    [[build_stream]]
        inherit = HPC
        script = '''
            cd $STREAM_ROOT/STREAM
            build_stream
        '''
    [[run_stream]]
        inherit = HPC
        script = '''
            cd $STREAM_ROOT/STREAM
            run_stream
        '''
```
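The comment on the `-q` directive above notes that tasks needing external network access can override the queue to copyq. A sketch of such a per-task override (the task name `fetch_data` is hypothetical, not part of the example suite):

```
    [[fetch_data]]
        inherit = HPC
        [[[directives]]]
            # copyq nodes have external network access on Gadi
            -q = "copyq"
```

Because directives are inherited from [[HPC]], only the queue needs restating; the project code, resource request and storage flags carry over unchanged.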
Some other platforms are also available at this stage for running Cylc7 suites (from accessdev) in compatibility mode, such as "gadi_background", "gadi_localhost", "gadi" and "gadi.nci.org.au". These platforms will be removed in the future, so please do not use them directly in your Cylc8 suite.