Overview

On Gadi, users should submit jobs to a specific queue to run them on the corresponding type of node. For example, jobs that need GPUs must be submitted to the gpuvolta queue to access nodes with GPUs, while jobs requiring large amounts of memory may use the hugemem queue. If your job can run on the nodes in one of the normal queues, use those: the normal queues have more nodes available for your jobs, and this leaves fair access to the more specialised queues for the users and jobs that genuinely need them.
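As a sketch, queue selection happens at submission time via qsub's -q option; the script name below is a placeholder:

```shell
# Submit the same (placeholder) job script to different queues
# depending on the resources it needs.
qsub -q normal job.sh      # standard compute nodes
qsub -q gpuvolta job.sh    # GPU nodes (additional GPU resource requests also apply)
qsub -q hugemem job.sh     # large-memory nodes
```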

The Gadi queue structure also has two main levels of priority, express and normal, which is reflected in the queue names. The express queues (express and expressbw) are designed to support work that needs rapid turnaround, at a higher service unit charge.

Intel Xeon Cascade Lake

express

normal

copyq

hugemem

megamem

gpuvolta

Intel Xeon Broadwell (ex-Raijin)

expressbw

normalbw

hugemembw

megamembw

Intel Xeon Skylake (ex-Raijin)

normalsl

NVIDIA DGX A100 (Specialised GPU)

dgxa100

Intel Xeon Sapphire Rapids

Hardware Specifications

NCI has installed an expansion to Gadi's compute capacity containing the latest generation Intel Sapphire Rapids processors. The expansion consists of 720 nodes, each containing two Intel Xeon Platinum 8470Q (Sapphire Rapids) processors with a base frequency of 2.1GHz and turbo up to 3.8GHz, 512GiB of RAM, and 400GiB of SSD available to jobs as jobfs.

The specifications of the queues for these nodes (a total of 74,880 additional cores) are:

normalsr

expresssr

Maximum turbo frequency is not always achievable. Please see http://www.intel.com/technology/turboboost/ for further information.

Building Applications

To generate a binary designed to run on these nodes, you may need to recompile your application specifically targeting them. We recommend using the latest Intel LLVM compiler for these nodes (currently 2023.0.0; check for newer versions installed on Gadi with module avail intel-compiler-llvm) with options that build your code for use on all nodes in Gadi.

A build with runtime dispatch for all architectures on Gadi can be produced with:

module load intel-compiler-llvm/<version>
icx -O3 -march=broadwell -axSKYLAKE-AVX512,CASCADELAKE,SAPPHIRERAPIDS myCode.c -o myBinary

We always recommend using the latest version of the Intel compilers, as older versions may not be able to optimise for newer architectures.

There may be other options that assist some codes. For example, test with -qopt-zmm-usage=high (the default is low, i.e. prefer 256-bit wide instructions over 512-bit ones). Code that is especially floating-point heavy may benefit from this flag, but you should test with and without it, as it will slow some codes down.
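For instance, the flag can be added to the dispatch build shown earlier (flag placement here is illustrative):

```shell
# Same runtime-dispatch build, with 512-bit ZMM usage preferred.
# Benchmark against the default before adopting this.
module load intel-compiler-llvm/<version>
icx -O3 -march=broadwell -axSKYLAKE-AVX512,CASCADELAKE,SAPPHIRERAPIDS \
    -qopt-zmm-usage=high myCode.c -o myBinary
```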

Running Jobs

To submit jobs to these nodes you should select the appropriate queue for your job.

Queue      Memory Available   Priority   Charge Rate
normalsr   512GiB             Regular    2.0 SU / (resource*hour)
expresssr  512GiB             High       6.0 SU / (resource*hour)

You should specify the queue you wish to use via the -q option to qsub, or with a #PBS -q directive in your PBS job script.
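As a minimal sketch of the directive form, a job script targeting the normalsr queue might begin as follows; the resource values and binary name are placeholders, not recommendations:

```shell
#!/bin/bash
# Illustrative PBS job script for the normalsr queue.
# Adjust ncpus, mem, and walltime to suit your job.
#PBS -q normalsr
#PBS -l ncpus=104
#PBS -l mem=500GB
#PBS -l walltime=02:00:00
#PBS -l wd

./myBinary
```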

As with the normal and express queues, any job larger than one node must request CPUs in multiples of full nodes. Because the Sapphire Rapids nodes have 104 cores each, you should reconsider the number of CPUs your job requests: where you would request 48 CPU cores in a normal queue job, request 104 CPU cores here.
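The whole-node rule can be sketched as a small shell calculation, rounding an arbitrary core count up to a multiple of the 104 cores per Sapphire Rapids node (the variable names are illustrative):

```shell
# Round a requested core count up to whole Sapphire Rapids nodes (104 cores each).
cores_requested=150
cores_per_node=104
nodes=$(( (cores_requested + cores_per_node - 1) / cores_per_node ))
ncpus=$(( nodes * cores_per_node ))
echo "$ncpus"   # 208: a 150-core request must be submitted as 2 full nodes
```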