ARE differs from OOD in several major ways.  On ARE, your sessions run on NCI's Gadi HPC system, whereas on OOD they run on NCI's Nirin Cloud.  Below is a list of key differences that may affect the way you use ARE:

Compute resources

Gadi HPC has much larger and more varied resources available for running ARE sessions.  As a result, you will be able to start much larger sessions:

  • CPUs: up to 48 CPU cores (previously 16)
  • Memory: up to 3 TiB of RAM (previously 48 GiB)
  • GPUs: up to 4x V100 GPUs per node for GPU-based computation (previously not available).  You can also use multiple nodes for parallel workloads, such as Dask alongside a JupyterLab session
  • GPU-accelerated desktop: up to 1x V100 GPU for accelerating your VDI desktop graphics (previously not available)
  • Nodes: up to 432 nodes (depending on the queue and walltime; previously 2).  Note: you will need to run a secondary process to make use of the extra nodes, as the ARE apps (e.g. VDI/JupyterLab) only use the first node; see the sketch below this list.
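
The ARE apps only start processes on the first node, so a multi-node workload has to launch its own processes on the remaining nodes.  Below is a minimal, hypothetical sketch of doing this for Dask from a terminal inside a JupyterLab session.  It assumes dask.distributed is installed in your environment, that the scheduler file path is on shared storage, and that SSH between the job's nodes works; adapt paths and module loads to your own setup.

    # Hypothetical sketch: scheduler on the first node, one worker per extra node
    SCHED_FILE="/scratch/$PROJECT/$USER/dask-scheduler.json"   # assumed shared path

    # Start the scheduler on the node the ARE app is running on
    dask-scheduler --scheduler-file "$SCHED_FILE" &

    # Start a worker on each of the other nodes listed in the PBS nodefile
    for node in $(grep -v "$(hostname)" "$PBS_NODEFILE" | sort -u); do
        ssh "$node" "dask-worker --scheduler-file '$SCHED_FILE'" &
    done

From the notebook you would then connect with dask.distributed's Client(scheduler_file=...) pointing at the same file.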

Storage

All Gadi storage options are available to your ARE sessions, whereas previously only /g/data projects were shared.  This includes:

  • /home
  • /scratch
  • /g/data

NOTE: as with Gadi jobs, you will need to specify which projects' storage you want mounted via the session submission form.
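
The Storage field on the submission form uses the same syntax as the -l storage PBS directive on Gadi, with entries joined by "+".  For example (the project codes here are placeholders):

    gdata/ab12+scratch/cd34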

Wall time

All queue walltime limits are the same as they are on Gadi.  See the Gadi queue limits page for details.  NOTE: this means the maximum walltime you can request is 48 hours (shorter with some queue and resource configurations).

SU usage

To use ARE, you will need to be a member of a project with an SU allocation for the current quarter.
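
If you are unsure whether a project has compute time this quarter, you can check its allocation from a Gadi login node with the nci_account utility (ab12 below is a placeholder project code, and the exact flags may differ on your system):

    # Show the current quarter's compute allocation and usage for a project
    nci_account -P ab12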

Gadi connection

To submit Gadi HPC jobs from your ARE session, simply run qsub on the node your session is running on.  If your application needs an SSH host to connect to, specify localhost, as you cannot ssh to gadi.nci.org.au from a compute node.
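
For example, from a terminal inside your ARE session (my_job.sh is a placeholder script name):

    # Submit a batch job to Gadi from the node hosting the ARE session
    qsub my_job.sh

    # Tools that require an SSH hostname should be pointed at the local node
    ssh localhost qstat -u "$USER"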

Internet access

The compute nodes on most Gadi queues do not have external internet access.  The exceptions are the analysis and copyq queues.
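
If your workflow needs to download data, one option is to run the transfer as a separate copyq job rather than from within the ARE session.  A minimal, hypothetical job script might look like this (the project code, storage directive and URL are placeholders):

    #!/bin/bash
    #PBS -q copyq
    #PBS -P ab12
    #PBS -l ncpus=1
    #PBS -l mem=4GB
    #PBS -l walltime=00:30:00
    #PBS -l storage=gdata/ab12
    #PBS -l wd

    # Fetch a dataset onto Gadi storage from a queue that has internet access
    wget https://example.org/dataset.tar.gz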

Singularity containers

ARE VDI sessions run from within a Singularity container.  As a result, if you need to run your own Singularity container from inside a VDI session, you will need to ssh localhost first, before issuing your singularity run command.
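
For example, assuming /path/to/my_container.sif is your own image:

    # Hop out of the VDI session's container onto the node itself, then run
    # your own Singularity container from there
    ssh localhost "singularity run /path/to/my_container.sif"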


