
All PBS directives must be placed at the beginning of the job submission script, with no blank lines between them and no non-PBS commands before the final directive.
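
For example, a minimal submission script might begin as follows (the project code, queue, resource amounts, module, and script name are placeholders; substitute your own):

```bash
#!/bin/bash
#PBS -P a00
#PBS -q normal
#PBS -l walltime=02:00:00
#PBS -l ncpus=4
#PBS -l mem=10GB
#PBS -l jobfs=10GB
#PBS -l wd

# Non-PBS commands appear only after the last directive
module load python3
python3 analysis.py
```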

-P <project>

The project to which the job's resource usage will be charged. The default project is specified by the PROJECT environment variable.

-q <queue>

The queue in which to run the job. Different queues may impose different limits on the amount of resources that can be requested via the `-l` directives.

-l walltime=<HH:MM:SS>

The wall clock time limit for the job. Time is expressed in the form [[hours:]minutes:]seconds.
Scheduling decisions depend heavily on the walltime request, so it is best to make it as close as possible to the job's actual walltime usage.

-l storage=<scratch/a00+gdata/xy11+massdata/a00>

Identifies the specific filesystems that the job will need access to, expressed as a plus-separated list of identifiers of the form <filesystem>/<project>. The valid filesystems are currently scratch (for Gadi's /scratch filesystem), gdata (for NCI's global filesystems, mounted at /g/data on Gadi), and massdata (for NCI's massdata storage facility, available through the mdss command from jobs in the copyq queue). All jobs implicitly have scratch/<project> included in this list, where <project> is the project that the job is running under. Locations that are not specified via this directive will not be available inside the job; attempting to access them from the job will result in, for example, "file not found" errors.
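
For example, a job running under project a00 that also needs /g/data/xy11 and the massdata store for a00 (project codes here are placeholders) would request:

```bash
#PBS -l storage=gdata/xy11+massdata/a00
```

scratch/a00 does not need to be listed, because the job's own project scratch area is included implicitly.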

-l mem=<10GB>

The total memory limit for the job's usage. In a multi-node job the memory allocation is distributed equally among the nodes.
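
As a rough sketch of how the split works (node counts and sizes here are purely illustrative):

```bash
# Illustrative: a request that spans 2 compute nodes
#PBS -l ncpus=96
#PBS -l mem=380GB
# The 380GB total is divided equally: 190GB is available on each of the 2 nodes
```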

-l ncpus=<4>

The number of CPU cores to allocate to the job.

-l ngpus=<4>

The number of GPU devices to allocate to the job (for jobs in the gpuvolta queue).

-l jobfs=<10GB>

The maximum amount of local disk available to the job on the hosting compute nodes. In a multi-node job the jobfs allocation is distributed equally among the nodes.

-l image=raijin

Run the job within a containerised Raijin-like environment. The job will use the Raijin operating system image and /apps even though it is running on Gadi. This can be used as an interim measure while porting applications and workflows from Raijin to Gadi. However, please note that it is provided on an "as-is" basis with limited support available, and will eventually be retired. Moreover, while a particular workflow may operate correctly now, we cannot guarantee that this will still be the case at any point in the future.

-l software=<matlab_institution>

The licences required by the job. To request access to multiple licences, join the names with colons, such as `-l software=abaqus:matlab_anu`. Please note that the name of the licence is not necessarily the same as the name of the corresponding software group. Confirm the correct licence name on the licence live status page before submission.

Although not recommended, users can request a specific number of seats for a given licence using strings such as `abaqus/20:matlab_anu`. To request a number of seats for a specific feature of a given licence, use the format shown in this example: `abaqus/abaqus=2/multiphysics=1:matlab_anu`.
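
Written as a directive in a submission script, the examples above would look like this (licence names are those used above; check the live status page for the names that apply to your software):

```bash
#PBS -l software=abaqus/abaqus=2/multiphysics=1:matlab_anu
```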

-l wd

Start the job in the directory from which it was submitted.

-M <user@example.com>

The list of addresses to which emails about the job will be sent.

-m <abe>

The set of conditions under which email about the job is sent. It may be any combination of "a" (when the job is aborted by the batch system), "b" (when the job begins execution), and "e" (when the job ends execution). Alternatively, "n" sends no email under any circumstances.
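
For instance, to be notified at a placeholder address when the job aborts or ends (values here are illustrative):

```bash
#PBS -M user@example.com
#PBS -m ae
```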

-W depend=beforeok:<jobid1:jobid2>

The list of jobs that may begin execution once this job has terminated without errors. We recommend defining job dependencies using "beforeok" rather than "afterok", as the latter can lead to corner cases such as the prerequisite finishing before the dependent job is submitted.
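
A sketch of the submission order this implies (script names and shell variables are illustrative; in PBS Professional the target jobs of a "before" dependency are normally submitted with a depend=on count so that they wait to be released):

```bash
# Submit the dependent jobs first; depend=on:1 makes each wait for one release
jobid1=$(qsub -W depend=on:1 postprocess1.sh)
jobid2=$(qsub -W depend=on:1 postprocess2.sh)

# Submit the prerequisite job, naming the jobs that may start once it ends OK
qsub -W depend=beforeok:${jobid1}:${jobid2} preprocess.sh
```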

-a <timestamp>

The time after which the job is eligible for execution, expressed in the form [[[[CC]YY]MM]DD]hhmm[.SS]. While waiting for this time, the job will be in state W.
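
For example, to make a job eligible to start no earlier than 23:30 on 25 December 2024 (the date is purely illustrative):

```bash
#PBS -a 202412252330
```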
