All PBS directives must be placed at the beginning of the job submission script, with no blank lines between them and no non-PBS commands before the last PBS directive.
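For example, a job script header might look like the following sketch (the project code `a00`, resource values, and `my_script.py` are placeholders):

```shell
#!/bin/bash
# All PBS directives come first, with no blank lines or other commands among them.
#PBS -P a00
#PBS -q normal
#PBS -l walltime=01:00:00
#PBS -l ncpus=4
#PBS -l mem=8GB

# Non-PBS commands only after the last directive.
module load python3
python3 my_script.py
```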
The project to which the job's resource usage is charged. If missing in the submission, it is set to the default project in the shell environment from which the job is submitted.
The queue in which to run the job. If missing in the submission, it is set to `normal`. Different queues have different limits on the amount of resources that can be requested.
The wall clock time limit for the job. Time is expressed in the form hours:minutes:seconds.
System scheduling decisions depend heavily on the walltime request, so it is best to request a walltime as close as possible to the expected actual usage.
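As an illustration (the value itself is arbitrary), a request for 2 hours and 30 minutes of walltime would be written as:

```shell
#PBS -l walltime=02:30:00
```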
Identifies the specific filesystems that the job will need access to, expressed as a plus-separated list of identifiers of the form <filesystem>/<project>. The valid filesystems are currently scratch (for Gadi's scratch filesystem, mounted at /scratch), gdata (for NCI's global filesystems, mounted at /g/data on Gadi), and massdata (for NCI's massdata storage facility, available through the mdss command from jobs in the copyq queue). All jobs implicitly have scratch/<project> included in this list, where <project> is the project that the job is running under. Locations that are not specified via this directive will not be available inside the job, and attempting to access them from the job will result in, for example, "file not found" errors.
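As a sketch (the project code `a00` is a placeholder), a job that needs both a /g/data area and massdata could request:

```shell
#PBS -l storage=gdata/a00+massdata/a00
```

Note that scratch/a00 does not need to be listed; it is included implicitly for the job's own project.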
The total memory limit for the job's usage. If missing in the submission, the value is set to 500MB. In multi-node jobs, the memory allocation is distributed equally among the nodes.
The number of CPU cores to allocate to the job. If missing in the submission, the value is set to 1.
The number of GPU devices to allocate to the job (for jobs in the GPU queues).
The maximum amount of local disk available to the job on the hosting compute nodes. If missing in the submission, the value is set to 100MB. In multi-node jobs, the jobfs allocation is distributed equally among the nodes.
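For instance, assuming 48-core compute nodes (the values below are illustrative), a two-node request might look like this; note how the per-node share of mem and jobfs follows from the equal distribution described above:

```shell
#PBS -l ncpus=96     # two full 48-core nodes
#PBS -l mem=380GB    # 190GB per node
#PBS -l jobfs=200GB  # 100GB of local disk per node
```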
Run the job within a containerised Raijin-like environment. The job will use the Raijin operating system image and /apps even though it is running on Gadi. This can be used as an interim measure while porting applications and workflows from Raijin to Gadi. However, please note that it is provided on an "as-is" basis with limited support available, and will eventually be retired. Moreover, while a particular workflow may operate correctly now, we cannot guarantee that this will still be the case at any point in the future.
The licences required by the job. To request access to multiple licences, join names with colons, such as `-l software=abaqus:matlab_anu`. Please note that the name of the licence is not necessarily the same as the name of the corresponding software group. Confirm the correct licence name on the licence live status page before submission.
Although not recommended, users can request a specific number of seats for a given licence using strings such as `abaqus/20:matlab_anu`. To request a number of seats for a specific feature of a given licence, use the format shown in this example: `abaqus/abaqus=2/multiphysics=1:matlab_anu`.
Start the job in the directory from which it was submitted.
The list of addresses to which emails about the job will be sent.
The set of conditions under which email about the job is sent. It may be any combination of "a" (job aborted by the batch system), "b" (job begins execution), and "e" (job ends execution), or alternatively "n" for no email under any circumstances.
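Combining the two mail directives, a sketch (the address is a placeholder) looks like:

```shell
#PBS -M first.last@example.com
#PBS -m abe   # mail on abort, begin, and end
```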
The name of the job. By default it is set to the name of the job submission script. If no submission script is used in the submission, the default job name is `STDIN`.
The path to the job's output log, to which the job's standard output stream (STDOUT) is redirected. If missing in the submission, the output log is placed in the directory from which the job was submitted ($PBS_O_WORKDIR) with the default name $PBS_JOBNAME.o$PBS_JOBID. If the path given is a directory, the default name becomes $PBS_JOBID.OU. A relative path is taken relative to $PBS_O_WORKDIR.
The path to the job's error log, to which the job's standard error stream (STDERR) is redirected. If missing in the submission, the error log is placed in the directory from which the job was submitted ($PBS_O_WORKDIR) with the default name $PBS_JOBNAME.e$PBS_JOBID. If the path given is a directory, the default name becomes $PBS_JOBID.ER. A relative path is taken relative to $PBS_O_WORKDIR.
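For example, to direct both logs into a subdirectory of the submission directory (the `logs/` path is a placeholder, and the directory must already exist when the job finishes):

```shell
#PBS -o logs/job.out
#PBS -e logs/job.err
```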
The job's standard output stream (STDOUT) and standard error stream (STDERR) are merged into STDOUT. Using `eo` instead merges STDERR and STDOUT into STDERR.
The submitted job is to be run interactively.
The submitted job forwards X output to the display set in DISPLAY in the login shell from which the job is submitted.
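Interactive and X-forwarding flags are typically given on the qsub command line rather than in a script. A sketch (project code and resource values are placeholders):

```shell
qsub -I -X -P a00 -q normal -l walltime=00:30:00,ncpus=1,mem=4GB
```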
-v <var=10, "var2='A,B'">
The environment variables and shell functions to be exported to the job.
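Following the syntax shown above, a submission that passes those variables into the job environment (the script name `job.sh` is a placeholder) might look like:

```shell
qsub -v var=10,"var2='A,B'" job.sh
```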
The listed jobs may begin execution only once this job has terminated without errors. We recommend defining job dependencies using "beforeok" rather than "afterok", as the latter may lead to corner cases such as the prerequisite job finishing before the dependent job is submitted.
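A hedged sketch of the recommended "beforeok" pattern (script names are placeholders); in PBS Pro, a job referenced by a "before" dependency must itself be submitted expecting that dependency via `depend=on:<count>`:

```shell
# Submit the dependent job first; it waits for one "before" dependency.
dep=$(qsub -W depend=on:1 dependent.sh)
# Submit the prerequisite; it releases the dependent job if it ends without errors.
qsub -W depend=beforeok:"$dep" prerequisite.sh
```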
The time after which the job is eligible for execution, expressed in the form [[[[CC]YY]MM]DD]hhmm[.SS]. While waiting for this time, the job remains in the W (waiting) state.
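Reading the format above from the right, a sketch (the date is illustrative) that makes a job eligible to run from 08:00 on the 25th of the current month would be:

```shell
qsub -a 250800 job.sh   # DDhhmm: day 25, 08:00
```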