Directive List


-P <project> 

#PBS -P <project>

The project to which the job's resource usage will be charged.

If missing in the submission, it is set to the default project in the shell environment from which the job is submitted.

-q <queue> 

#PBS -q <queue>

The queue to run the job in. If missing in the submission, it is set to normal. Different queues have different limits on the amount of resources that can be requested using the -l directives.

-l walltime=<HH:MM:SS>  

#PBS -l walltime=<HH:MM:SS>

The wall clock time limit for the job, expressed in the form [[hours:]minutes:]seconds.
System scheduling decisions depend heavily on the walltime request, so it is always best to make it as close as possible to, without exceeding, the walltime the job actually uses.

-l storage=<scratch/a00+gdata/xy11+massdata/a00> 

#PBS -l storage=<scratch/a00+gdata/xy11+massdata/a00>

Identifies the specific filesystems that the job will need access to, and is expressed as a plus-separated list of identifiers of the form <filesystem>/<project> . The valid filesystems are currently scratch (for Gadi's /scratch  filesystem), gdata (for NCI's global filesystems, mounted at /g/data  on Gadi), and massdata (for NCI's massdata storage facility, available through the mdss command from jobs in the copyq queue). All jobs implicitly have scratch/<project> included in this list, where <project> is the project that the job is running under.

Locations that are not specified via this directive will not be available inside the job, and will result in, for example, "file not found" errors if you attempt to access them from the job.
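As a sketch, the directives above might appear together at the top of a job script like this (the project codes a00 and xy11 and the resource values are illustrative placeholders):

```shell
#!/bin/bash
# Illustrative job script header: project, queue, walltime and
# filesystem access (a00 and xy11 are placeholder project codes).
#PBS -P a00
#PBS -q normal
#PBS -l walltime=10:00:00
#PBS -l storage=gdata/xy11+massdata/a00

# scratch/a00 is implicitly available because the job runs under a00,
# so only the gdata and massdata locations need to be listed above.
ls /g/data/xy11
```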

-l mem=<10GB> 

#PBS -l mem=<10GB>

The total memory limit for the job. If missing in the submission, the value is set to 500 MiB. In a multi-node job, the memory allocation is distributed equally across the nodes. 

-l ncpus=<4> 

#PBS -l ncpus=<4>

The number of CPU cores to allocate to the job. If missing in the submission, the value is set to 1. 

-l ngpus=<4> 

#PBS -l ngpus=<4>

The number of GPU devices to allocate to the job (for jobs in the gpuvolta queue).

-l jobfs=<10GB>  

#PBS -l jobfs=<10GB>

The maximum amount of local disk available to the job on the hosting compute nodes. If missing in the submission, the value is set to 100 MiB. In a multi-node job, the jobfs allocation is distributed equally across the nodes.
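As a sketch of how the per-node distribution works, the following illustrative request spans two 48-core nodes, so each node receives half of the mem and jobfs totals (the values are examples only, not recommendations):

```shell
# Illustrative multi-node resource request: 96 cores across 2 nodes.
# Each node receives 95GB of the memory and 200GB of the jobfs total.
#PBS -l ncpus=96
#PBS -l mem=190GB
#PBS -l jobfs=400GB
```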

-l image=raijin 

#PBS -l image=raijin

Run the job within a containerised Raijin-like environment. The job will use the Raijin operating system image and /apps even though it is running on Gadi. This can be used as an interim measure while porting applications and workflows from Raijin to Gadi. 
However, please note that it is provided on an "as-is" basis with limited support available, and will eventually be retired. Moreover, while a particular workflow may operate correctly now, we cannot guarantee that this will still be the case at any point in the future.

-l software=<matlab_institution>  

#PBS -l software=<matlab_institution>

The licences required by the job. To request access to multiple licences, join names with colons, such as -l software=abaqus:matlab_anu.

Please note, the name of the licence is not necessarily the same as the name of the corresponding software group. Confirm the correct licence name on the licence live status page before submission.

Although not recommended, users can request a specific number of seats for a given licence using strings like abaqus/20:matlab_anu. To request seats from a specific feature within a licence, use the format shown in this example: abaqus/abaqus=2/multiphysics=1:matlab_anu.
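As a minimal sketch, requesting two licences together in a job script (the licence names here are illustrative; confirm the correct names on the licence live status page first):

```shell
# Illustrative licence request: Abaqus plus an institutional MATLAB licence.
#PBS -l software=abaqus:matlab_anu
```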

-l wd  

#PBS -l wd

Start the job in the directory from which it was submitted.

-M <user@example.com> 

#PBS -M <user@example.com>

The list of addresses to which emails about the job will be sent.

-m <abe>  

#PBS -m <abe>

The set of conditions under which email about the job is sent. It may be any combination of "a" for when the job is aborted by the batch system, "b" for when the job begins execution, and "e" for when the job ends execution. Alternatively, "n" sends no email under any circumstances.
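For example, a sketch asking PBS to email a (hypothetical) address when the job is aborted, begins, or ends:

```shell
# Illustrative mail directives: a = aborted, b = begins, e = ends.
#PBS -M user@example.com
#PBS -m abe
```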

-N <jobName>  

#PBS -N <jobName>

The name of the job. By default it is set to the name of the job submission script.

If no submission script is used in the submission, the default job name is STDIN.

-o <path>  

#PBS -o <path>

The path to the job's output log, to which the job's standard output stream STDOUT is redirected. If missing in the submission, the output log is created in the directory from which the job was submitted, $PBS_O_WORKDIR, with the default name $PBS_JOBNAME.o$PBS_JOBID. If the path given is a directory, the default name becomes $PBS_JOBID.OU. A relative path is taken relative to $PBS_O_WORKDIR.

-e <path>  

#PBS -e <path>

The path to the job's error log, to which the job's standard error stream STDERR is redirected. If missing in the submission, the error log is created in the directory from which the job was submitted, $PBS_O_WORKDIR, with the default name $PBS_JOBNAME.e$PBS_JOBID. If the path given is a directory, the default name becomes $PBS_JOBID.ER. A relative path is taken relative to $PBS_O_WORKDIR.

-j oe  

#PBS -j oe

The job's standard error stream STDERR is merged into its standard output stream STDOUT, producing a single output log. Using -j eo instead merges STDOUT into STDERR.
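As a sketch combining the logging directives above (the path is illustrative, and the logs directory must already exist under the submission directory):

```shell
# Illustrative logging setup: merge STDERR into STDOUT and write the
# combined log to a single file, relative to the submission directory.
#PBS -o logs/myjob.log
#PBS -j oe
```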

-I 

#PBS -I

The submitted job is to be run interactively.

-X

#PBS -X

The submitted job forwards X output to the display set in DISPLAY in the login shell from which the job is submitted.

-v <var=10,"var2='A,B'"> 

#PBS -v <var=10,"var2='A,B'">

The environment variables and shell functions to be exported to the job. 

-W depend=beforeok:<jobid1:jobid2>  

#PBS -W depend=beforeok:<jobid1:jobid2>

The listed jobs may begin execution only once this job has terminated without errors. We recommend defining job dependencies using beforeok rather than afterok, as the latter can lead to cases such as the prerequisite finishing before the dependent job is submitted.
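As a sketch of a beforeok workflow on the command line (the script names are illustrative; in PBS, jobs named in beforeok must themselves be submitted with an on:<count> dependency so they wait for their prerequisites):

```shell
# Submit the dependent jobs first, each held by an "on" dependency
# that waits for one prerequisite to be satisfied.
jobid1=$(qsub -W depend=on:1 post_process.sh)
jobid2=$(qsub -W depend=on:1 archive.sh)

# Submit the prerequisite job, naming the dependents in beforeok:
# they become eligible only if this job terminates without errors.
qsub -W depend=beforeok:${jobid1}:${jobid2} main_job.sh
```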

-a <timestamp>  

#PBS -a <timestamp>

The time after which the job is eligible for execution, expressed in the form [[[[CC]YY]MM]DD]hhmm[.SS]. While waiting for this time, the job will be in state W.
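For example, holding a job so that it is not eligible to run before a given date and time (the timestamp is illustrative, in CCYYMMDDhhmm form):

```shell
# Illustrative deferred start: job eligible from 09:00 on 1 March 2025.
#PBS -a 202503010900
```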

Authors: Yue Sun, Mohsin Ali