
Hardware Specifications

NCI has installed an additional 814 compute nodes on Raijin through the support of the Australian Government. Each compute node consists of 2 x Intel Xeon E5-2690 v4 (14-core, 2.6GHz) Broadwell processors. The new system appears on Raijin as separate queues (normalbw, expressbw and hugemem), where "bw" stands for the Intel Broadwell architecture. The compute nodes use a Mellanox EDR interconnect (100Gb/s) in a 2:1 blocking fat-tree topology, and there is a total of 2016Gb/s (252GB/s) of bandwidth to Raijin's core network (e.g. for access to storage).

The specifications of these new nodes (a total of 22,792 additional cores) are:

# Nodes | Memory | JobFS (Local Disk) | Cores | Base Frequency | Max Turbo Frequency
536     | 128GB  | 440GB SSD          | 28    | 2.6 GHz        | 3.5 GHz
268     | 256GB  | 440GB SSD          | 28    | 2.6 GHz        | 3.5 GHz
10      | 1TB    | 440GB SSD          | 28    | 2.6 GHz        | 3.5 GHz

Note that the maximum turbo frequency is not always achievable: the frequency actually reached depends on the workload and the number of active cores.

Building Applications

While applications built for Raijin's original compute nodes will run on these nodes, to make the most of the newer architecture these nodes provide, we strongly encourage you to rebuild your applications using the newer instruction sets. When doing so, specify -xBROADWELL or -xCORE-AVX2 for the Intel compilers, or -march=broadwell for the GNU compilers.

For example, if you are compiling the code for Broadwell nodes on Raijin's login nodes (which are Xeon Sandy Bridge nodes), you would use a command similar to this:

module load intel-cc/<version>
icc -O3 -xBROADWELL myCode.c -o myBinary

We always recommend using the latest version of the Intel compilers, as older versions may not be able to optimise for newer architectures.
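The equivalent build with the GNU compilers is a sketch along the same lines (the source and binary names are the same placeholders as in the Intel example, and the module version is left for you to choose):

```shell
# Load a GNU compiler module (pick a recent version, as with the Intel compilers)
module load gcc/<version>
# -march=broadwell enables AVX2, FMA3 and the other Broadwell instruction set extensions
gcc -O3 -march=broadwell myCode.c -o myBinary
```

As with the Intel compilers, newer GCC releases generally produce better code for newer architectures.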

Running Jobs

To submit jobs to these nodes you should select the appropriate queue for your job.

Queue     | Memory Available | Priority | Charge Rate
normalbw  | 128GB / 256GB    | Regular  | 1.25 SU / core / hour
expressbw | 128GB / 256GB    | High     | 3.75 SU / core / hour
hugemem   | 1TB              | Regular  | 1.25 SU / core / hour

You should specify the queue you wish to use via the -q option to qsub, or with a #PBS -q directive in your PBS job script.
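For example, a minimal job script targeting the normalbw queue might look like the following sketch; the resource values (CPUs, memory, walltime) and the binary name are illustrative only and should be adjusted for your job:

```shell
#!/bin/bash
#PBS -q normalbw
#PBS -l ncpus=28
#PBS -l mem=120GB
#PBS -l walltime=01:00:00

# Run the application (placeholder binary name)
./myBinary
```

Alternatively, the queue can be given on the command line with, e.g., qsub -q normalbw.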

As with the normal and express queues, any job larger than one node must request CPUs in multiples of full nodes, which on these Broadwell nodes means multiples of 28. Review your CPU requests accordingly: where you may previously have requested 16 CPUs (one original Raijin node), you should now request 28 CPUs (one Broadwell node).
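As a worked example of the charge rates in the table above (the job size and duration here are assumptions for illustration), a two-node, 56-core job running for one hour in normalbw is charged 56 x 1 x 1.25 = 70 SU:

```shell
# 56 cores x 1 hour x 1.25 SU/core/hour, computed in hundredths of an SU
# to stay within shell integer arithmetic
echo "$(( 56 * 1 * 125 / 100 )) SU"   # prints "70 SU"
```

The same job in expressbw (3.75 SU / core / hour) would cost three times as much.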
