There are several major changes in how ARE works compared to similar previous environments. The main difference between ARE and OOD / Strudel-VDI before it is that sessions run directly on Gadi, whereas previously they ran on the Tenjin/Nirin cloud. Below is a list of key differences that may affect the way you use ARE.
Gadi HPC has much larger and varied resources available for running ARE sessions. As a result, you will be able to start much larger sessions:
All Gadi storage options are available to your ARE sessions, whereas previously only /g/data projects were shared. This includes:
NOTE: as with Gadi jobs, you will need to specify which projects you want mounted from within the session submission form.
All queue walltimes are the same as they are on Gadi. See the Gadi queue limits page for details. NOTE: this means the maximum walltime you can request is 48 hours (or shorter with some configurations).
You will need membership of a project that has an SU allocation for the current quarter to use ARE. OOD and Strudel VDI did not have this requirement.
To submit sub-jobs, just run qsub on the node your session is running from. If your application needs to specify a host to SSH onto, specify localhost, as you cannot ssh to gadi.nci.org.au from a compute node.
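As a sketch of the workflow above (the project code, queue, resource requests, and script name here are hypothetical placeholders; substitute your own):

```shell
# Submit a sub-job directly from the node hosting your ARE session
# (values below are illustrative, not defaults).
qsub -P a00 -q normal -l walltime=01:00:00,ncpus=4,mem=16GB job_script.sh

# If an application asks for a host to SSH onto, point it at the
# current node rather than the Gadi login nodes:
ssh localhost hostname
```

These commands only make sense from inside a running ARE session on Gadi; they will not work from an external machine.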
The compute nodes on most Gadi queues do not have external internet access. The exceptions to this are the analysis and copyq queues.
ARE VDI sessions run from within a Singularity container. As a result, if you need to run a Singularity container within your session, you will need to add ssh localhost prior to your singularity run command.