There are several major changes in the way ARE works compared to OOD. On ARE, your sessions run on NCI's Gadi HPC, whereas on OOD they run on NCI's Nirin Cloud. Below is a list of key differences that may affect the way you use ARE:
Gadi HPC has much larger and more varied resources available for running ARE sessions. As a result, you will be able to start much larger sessions.
All Gadi storage options are available to your ARE sessions, whereas previously only /g/data projects were shared. This includes your Gadi home directory as well as the /scratch and /g/data directories of your projects.
NOTE: as with Gadi jobs, you will need to specify which projects' storage you want mounted via the session submission form.
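As a sketch, the storage field accepts the same syntax as the PBS `-l storage` directive used in Gadi job scripts (the project codes below are placeholders):

```
# Mount /g/data for project ab12 and /scratch for project cd34
# (same syntax as the "#PBS -l storage=..." directive on Gadi)
gdata/ab12+scratch/cd34
```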
All queue walltime limits are the same as they are on Gadi; see the Gadi queue limits page for details. NOTE: this means the maximum walltime you can request is 48 hours (or shorter with some configurations).
You will need to be a member of a project with an SU allocation for the current quarter to use ARE.
To submit Gadi HPC jobs from your ARE session, simply run qsub on the node your session is running on. If your application needs to specify a host to SSH to, use localhost, as you cannot SSH to gadi.nci.org.au from a compute node.
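For example, from a terminal inside your ARE session (the job script name here is a placeholder):

```shell
# Submit a Gadi job directly from the node your ARE session runs on
qsub my_job.sh

# Applications that must SSH to a host should target localhost,
# since gadi.nci.org.au is not reachable from a compute node
ssh localhost
```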
The compute nodes on most Gadi queues do not have external internet access. The exceptions are the analysis and copyq queues.
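As a sketch, a step that needs internet access (such as downloading data) can be run as a copyq job rather than from your session; the URL and resource values below are placeholders:

```shell
#!/bin/bash
#PBS -q copyq
#PBS -l ncpus=1
#PBS -l mem=2GB
#PBS -l walltime=00:30:00
#PBS -l wd

# copyq nodes have external internet access, unlike most compute queues
wget https://example.org/dataset.tar.gz
```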
ARE VDI sessions run from within a Singularity container. As a result, if you need to run a Singularity container of your own, you will need to add ssh localhost prior to your singularity run command.
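For example (the image name is a placeholder):

```shell
# Singularity containers cannot be nested, so SSH to localhost first
# to get a shell outside the session's container, then launch your own
ssh localhost singularity run my_image.sif
```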