...
NCI has expanded Gadi's compute capacity with the latest generation of Intel Sapphire Rapids processors. The expansion comprises 720 nodes, each containing two Intel Xeon Platinum 8470Q (Sapphire Rapids) processors with a base frequency of 2.1GHz and turbo up to 3.8GHz, 512GiB of RAM, and 400GiB of SSD available to jobs as jobfs.
The specifications of the queues for these nodes (a total of 74,880 additional cores) are:
normalsr
...
Maximum turbo frequency is not always achievable. Please see http://www.intel.com/technology/turboboost/ for further information.
To generate a binary designed to run on these nodes, you may need to recompile your application specifically targeting them. We recommend using the latest Intel LLVM compiler for these nodes (currently 2023.0.0; check for newer versions installed on Gadi with module avail intel-compiler-llvm) with options that build your code for use on all nodes in Gadi.
A binary with runtime dispatch for all CPU architectures on Gadi can be built with:
module load intel-compiler-llvm/<version>
icx -O3 -march=broadwell -axSKYLAKE-AVX512,CASCADELAKE,SAPPHIRERAPIDS myCode.c -o myBinary
We always recommend using the latest version of the Intel compilers, as older versions may not be able to optimise for newer architectures.
There may be other options that will assist some codes. For example, test with -qopt-zmm-usage=high (the default is low, i.e. prefer 256-bit wide instructions over 512-bit). Code that is especially floating-point heavy may benefit from this flag, but since it causes some code to slow down, you should test with and without it to see whether your code benefits.
To submit jobs to these nodes you should select the appropriate queue for your job.
Queue | Memory Available | Priority | Charge Rate
---|---|---|---
normalsr | 512GiB | Regular | 2.0 SU / (resource*hour)
expresssr | 512GiB | High | 6.0 SU / (resource*hour)
You should specify the queue you wish to use via the -q option to qsub, or with a #PBS -q directive in your PBS job script.
As with the normal and express queues, any job larger than one node must request CPUs in multiples of full nodes, and these nodes have 104 cores each rather than 48. Review the CPU count your jobs request: where you would request 48 CPU cores in a normal queue job, you should now consider requesting 104.
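Putting the pieces together, a minimal job script for these queues might look like the sketch below. The project code xy11, walltime, and memory request are placeholders that you must replace with values appropriate for your own project and job:

```shell
#!/bin/bash
#PBS -q normalsr            # Sapphire Rapids queue (or expresssr for high priority)
#PBS -P xy11                # placeholder project code - replace with your own
#PBS -l ncpus=208           # two full 104-core nodes
#PBS -l mem=1000GB          # placeholder; each node has 512GiB of RAM
#PBS -l walltime=01:00:00   # placeholder walltime
#PBS -l wd                  # start the job in the submission directory

./myBinary                  # binary built for all Gadi architectures as above
```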