
For information about NCI's current Gadi supercomputer, read the Gadi User Guide.

Raijin was decommissioned in December 2019 at the conclusion of its six-year operational lifetime. This Raijin User Guide is provided for reference purposes.


Getting Started


The NCI user portal enables all users, including Lead Chief Investigators and NCI Partner Scheme Administrators, to:

  • Register as a new user
  • Update your details
  • Propose new partner or startup projects
  • Connect to existing projects

The portal provides self-service capabilities based on your role: User, CI, Lead CI, or Scheme Administrator.

New users can register through the portal in a simple, one-step procedure.

All users can log in and self-manage their account details without sending an email to the NCI Help Desk.

Lead CIs can log in and manage user connections to their projects.

Scheme Administrators can similarly review and approve or reject proposals for resources under their scheme.

Lead CIs and Scheme Administrators will receive automatic email notifications of pending approvals; however, approvals must be actioned through the online system.

To access the portal go to

Please note that NCI must comply with conditions specified in the Defence Trade Controls Act (2012, Cth.). This legislation imposes conditions on eligibility and access to NCI resources. Users must register and use an official, institutional email address for all correspondence with NCI. Please see the NCI Terms and Conditions for Access for more information.

Logging in

To log in from your local desktop or another NCI computer, run ssh:


where abc123 is your own username. Your ssh connection will be made to one of six possible login nodes, raijin[1-6]. (If ssh to Raijin fails, try specifying one of these login nodes directly.) As usual, for security reasons we ask that you do not set up passwordless ssh to Raijin. Entering your password every time you log in is more secure; alternatively, use a specialised secure ssh agent.
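As a sketch, assuming the usual Raijin login address raijin.nci.org.au (check your NCI account details if this differs):

```
ssh abc123@raijin.nci.org.au

# or, if that fails, target one of the login nodes raijin[1-6] directly:
ssh abc123@raijin3.nci.org.au
```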

Connecting under Unix/Mac:

  • For ssh – ssh
  • For scp/sftp – scp, sftp
  • For X11 – ssh -Y, make sure you have installed XQuartz for OS X 10.8 or higher.

Connecting under Windows:

  • For ssh – putty, mobaxterm
  • For scp/sftp – putty, Filezilla, winscp, mobaxterm
  • For X11 – Cygwin, XMing, VNC, mobaxterm

If you are connecting for the first time, please change your initial password to one of your own choosing via the passwd command, which will prompt you as below:

Old password:
New password:
Re-enter new  password:

Interactive Use and Basic Unix

The operating system on all systems is Linux. You can read our Unix quick reference guide for basic usage.

When you log in you will come in under the Resource Accounting SHell (referred to as RASH), a local shell used to impose interactive limits and to account for the time used in each interactive session.

Your account will be set up with an initial environment via a default .login file, an equivalent .profile file, and a .rashrc file. The .rashrc file can be edited to change the default project (see Project Accounting) and the command shell started by RASH when you log in. Your initial command shell will be bash. You can change this to tcsh by changing the line in .rashrc from

setenv SHELL /bin/bash

to be

setenv SHELL /bin/tcsh

instead. Other shells, including ksh, are available but may not provide the same support for modules as tcsh and bash do; a local modification has been made for ksh. If you try to use a shell not registered with RASH on a particular machine, you will default to bash.

Each interactive process you run on the login nodes is subject to a time limit (30 minutes) and a memory limit (2 GB). If you want to run longer or more memory-intensive interactive work, please submit an interactive job (qsub -I); see Interactive PBS Jobs below for more details.

Login Environment

At login you will not be asked which project to use. A default project will be chosen by the login shell if one is not already set in ~/.rashrc. You can change your default project by editing .rashrc in your home directory. To switch to a different project for interactive use once you have already logged in you can use the following command:

switchproj project_name

Note that this is just for interactive sessions. For PBS jobs, use the -P option to specify a project.


Monitoring Resource Usage

  • nci_account displays the project's usage in the current quarter, together with recent project history where available. It also shows /short and massdata storage usage for the projects you are connected to. Add -v to display detailed per-user accounting information.
  • lquota displays your disk usage and quota in your home directory and the /short/project/ directories.
  • short_files_report reports /short file usage. Use -G project to see the location and usage of files in /short owned by the group, and -P project to see group and user information for files in the project's /short folder.
  • nf_limits -P project -n ncpus -q queue displays the walltime and memory limits that apply to you. More default resource limits can be found in the Queue Limits section below.
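Typical invocations look like the following sketch (a99 is a placeholder project code):

```
nci_account -v
lquota
short_files_report -G a99
nf_limits -P a99 -n 16 -q normal
```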

Job Submission 

Queue Structure

Raijin's systems have a simple queue structure with two main levels of priority, which the queue names reflect.

Intel Xeon Sandy Bridge 


express

  • high priority queue for testing, debugging or quick turnaround
  • each node has 2x 8 core Intel Xeon E5-2670 (Sandy Bridge) 2.6GHz
  • small limits particularly on time and number of CPUs
  • charging rate of 3 SUs per CPU-hour (walltime)


normal

  • default queue designed for all production use
  • each node has 2x 8 core Intel Xeon E5-2670 (Sandy Bridge) 2.6GHz
  • allows the largest resource requests
  • charging rate of 1 SU per CPU-hour (walltime)


copyq

  • specifically for IO work, in particular mdss commands for copying data to the mass-data system
  • runs on nodes with external network interface(s) and so can be used for remote data transfers (you may need to configure passwordless ssh)
  • for tarring, compressing and other manipulation of /short files
  • purely compute jobs will be deleted whenever detected
  • charging rate of 1 SU per CPU-hour (walltime)

Note: always use -l other=mdss when using mdss commands in copyq. This is so that jobs only run when the mdss system is available.
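A minimal copyq job script following this rule might look like the sketch below (a99 is a placeholder project code, and the mdss put line is illustrative – check mdss help for the exact subcommands):

```
#PBS -P a99
#PBS -q copyq
#PBS -l walltime=02:00:00
#PBS -l mem=2GB
#PBS -l ncpus=1
#PBS -l other=mdss
#PBS -l wd

tar -czf results.tar.gz results/
mdss put results.tar.gz
```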

Intel Xeon Broadwell 


expressbw

  • high priority queue for testing, debugging or quick turnaround
  • each node has 2x 14 core Intel Xeon E5-2690v4 (Broadwell) 2.6GHz
  • small limits particularly on time and number of CPUs
  • charging rate of 3.75 SUs per CPU-hour (walltime)


normalbw

  • default queue designed for all production use of Broadwell nodes
  • each node has 2x 14 core Intel Xeon E5-2690v4 (Broadwell) 2.6GHz
  • allows the largest resource requests
  • charging rate of 1.25 SU per CPU-hour (walltime)

For more detailed specs see Broadwell Compute Nodes

Intel Xeon Skylake


normalsl

  • default queue designed for all production use of Skylake nodes
  • 192 nodes, each with 2 x 16 core Intel Xeon Gold 6130 (Skylake) 2.1GHz processors
  • 192 GBytes RAM
  • 400 GBytes of SSD local disk
  • charging rate of 1.5 SU per CPU-hour (walltime)

For more detailed specs see Skylake Compute Nodes

Specialised Nodes

Suitable for multithreaded and highly vectorised codes:

knl (Knights Landing):

  • 1 x 64 cores with 4-way Hyperthreading (Intel Xeon Phi 7230, 1.30 GHz) in 32 compute nodes
  • 192 GBytes RAM
  • 16 GBytes MCDRAM on-package high bandwidth memory
  • 400 Gbytes of SSD local disk
  • Charge rate of 0.25SU per CPU-hour
  • #PBS -q knl
  • #PBS -l ncpus=64
  • #PBS -l other=hyperthread — to take advantage of the Xeon Phi architecture, use of hyperthreads is strongly recommended

More information on using the Knights Landing queue, including how to vectorise your code, is available at Intel Knights Landing Compute Nodes

Suitable for large memory jobs:


hugemem

  • 10 compute nodes, each with 2 x 14 cores (Intel Xeon Broadwell technology, 2.6 GHz)
  • 1 TByte of RAM per node
  • 400GB local disk (SSD)
  • charge rate of 1.25SU per CPU-hour
  • minimum number of ncpus request is 7; must be a multiple of 7.
  • #PBS -q hugemem


megamem

  • 3 compute nodes with 4 x 8 cores (Intel Xeon Broadwell technology, 2.1 GHz)
  • 3 TBytes of RAM per node
  • 800GB local disk (SSD)
  • charge rate of 1.25SU per CPU-hour
  • minimum number of ncpus request is 32; must be a multiple of 32
  • minimum memory request is 1.5TB per node
  • #PBS -q megamem

Suitable for data parallel jobs:


gpu

  • 2 x 12 cores (Intel Haswell E5-2670v3, 2.3 GHz) in 14 compute nodes
  • 2 x 14 cores (12 usable) (Intel Broadwell E5-2690v4, 2.6 GHz) in 16 compute nodes
  • 4 x Nvidia Tesla 24 GBytes K80 Accelerator (or 8 x GPUs) on each node
  • 256 GBytes of RAM on CPU
  • 700 GBytes of SSD local disk
  • charge rate of 3SU per CPU-hour
  • #PBS -q gpu
  • #PBS -l ngpus=2 — the minimum ngpus request is 2 and must be a multiple of 2; this is the number of GPUs requested
  • #PBS -l ncpus=6 — the minimum ncpus request is 6 and must be a multiple of 6, with ncpus = 3 x ngpus


gpupascal

  • 2 x 12 cores (Intel Broadwell E5-2650v4, 2.2 GHz) in 2 compute nodes
  • 4 x Nvidia Tesla Pascal P100 Accelerator on each node
  • 128 GBytes of RAM on CPU
  • 400 GBytes of SSD local disk
  • charge rate of 4SU per CPU-hour
  • #PBS -q gpupascal
  • #PBS -l ngpus=1 — the minimum ngpus request is 1
  • #PBS -l ncpus=6 — the minimum ncpus request is 6 and must be a multiple of 6, with ncpus = 6 x ngpus

More information on how to use GPUs at NCI is available at GPU User Guide

Queue Limits

The command nf_limits -P project -n ncpus -q queue will show your current limits. 
If you require exemptions to these limits, please submit a help ticket or contact the NCI Help Desk.

The current default walltime and CPU limits for the queues are as follows:

normalsl
  • default walltime limit: 48 hours for 1-224 cores; 24 hours for 256-480 cores; 10 hours for 512-992 cores; 5 hours for 1024-6144 cores

expressbw (route)
  • maximum jobs allowed queuing per project: 300
  • available memory per node: 128GB, 256GB
  • CPU limit: 3200
  • default walltime limit: 24 hours for 1-160 cores; 5 hours for 161-3200 cores

normalbw (route)
  • maximum jobs allowed queuing per project: 1000
  • available memory per node: 128GB, 256GB
  • CPU limit: 22512
  • default walltime limit: 48 hours for 1-255 cores; 24 hours for 256-511 cores; 10 hours for 512-1023 cores; 5 hours for 1024-22512 cores



express (route)
  • available memory per node: 32GB, 64GB, 128GB
  • default walltime limit: 24 hours for 1-160 cores; 5 hours for 176-3200 cores

normal (route)
  • available memory per node: 32GB, 64GB, 128GB
  • default walltime limit: 48 hours for 1-255 cores; 24 hours for 256-511 cores; 10 hours for 512-1023 cores; 5 hours for 1024-56960 cores

copyq
  • default walltime limit: 10 hours

megamem (route)
  • maximum jobs allowed queuing per project: 200
  • available memory per node: 3TB
  • CPU limit: multiple of 32 (min 32)
  • default walltime limit: 24 hours

hugemem (route)
  • CPU limit: multiple of 7 up to 28 (min 7)
  • default walltime limit: 96 hours for 7 cores; 48 hours for 14 cores; 24 hours for 28 cores

gpu
  • available memory per node: 128GB, 256GB
  • default walltime limit: 48 hours

gpupascal
  • maximum jobs allowed queuing per project: 200
  • available memory per node: 128GB
  • CPU limit: 48
  • default walltime limit: 48 hours

knl
  • default walltime limit: 48 hours

The maximum job counts are not to be taken as a target for the number of jobs you should be submitting, but rather as a number to help stop runaway scripts from submitting too many jobs. Projects are expected to have a maximum job count closer to the "-def" execution queue limits not the routing queue limits.

The number of jobs that you can have running at any given time depends on the availability of resources. For express-def, max jobs allowed running also depends on the number of CPUs requested.

The version of PBS used on NCI systems has been modified to include customisable per-user/per-project limits:

  • All limits can be (and are intended to be) varied on a per-user or per-project basis – reasonable variation requests will be granted where possible.
  • Resources on the system are strictly allocated with the intent that if a job does not exceed its resource (time, memory, disk) requests, it should not be unduly affected by other jobs on the system. The converse of this is that if a job does try to exceed its resource requests, it will be terminated.

    If a project is in bonus, jobs submitted under an express queue will be moved to the normal queue. The maximum walltime for jobs of projects running in bonus has been limited to 4 hours.

    When explicit memory and jobfs requests are not provided during a job submission, the default values are used. Current defaults are mem=500MB and jobfs=100MB per node.

Please note that OpenMP shared-memory jobs, previously restricted to 16 CPU cores, can now run on up to 32 CPU cores depending on which queue the job is submitted to. Each node has 2 sockets of 8 cores on normal/express queue nodes, 2 x 14 cores on normalbw/expressbw nodes, and 2 x 16 cores on normalsl nodes. As in the past, please check that your code can scale to these greater numbers of cores – many codes can’t.

In a PBS job script, the memory you specify with the -l mem= option is the total memory across all nodes. However, this value is internally converted to a per-node equivalent, and that is what is monitored. For example, since each normal queue compute node has 16 CPUs, if you request -q normal -l ncpus=32,mem=10GB, the actual limit will be 5GB on each of the two nodes. If the job exceeds this limit on either node, it will be terminated.

Submitting a job

A simple example job script looks like this:

Single Node Job
#PBS -P a99
#PBS -q normal
#PBS -l walltime=20:00:00
#PBS -l mem=300MB
#PBS -l jobfs=1GB
#PBS -l ncpus=16
## For licensed software, you have to specify it to get the job running. For unlicensed software, you should also specify it to help us analyse the software usage on our system.
#PBS -l software=my_program 
## The job will be executed from current working directory instead of home.
#PBS -l wd 

./my_program.exe > my_output.out
Multi Node MPI Job
#PBS -P a99
#PBS -q normal
#PBS -l walltime=06:00:00
#PBS -l mem=128GB
#PBS -l jobfs=1GB
#PBS -l ncpus=64
## For licensed software, you have to specify it to get the job running. For unlicensed software, you should also specify it to help us analyse the software usage on our system.
#PBS -l software=my_program 
## The job will be executed from current working directory instead of home.
#PBS -l wd 

module load openmpi/1.10.2
mpirun ./my_program.exe > my_output.out
## Please make sure your program is MPI-enabled.

You submit this script for execution by PBS using the command:

qsub jobscript

More detailed PBSPro usage information can be found in How to use PBS.

Please make sure you specify #PBS -lother=gdata1a when submitting jobs accessing files in /g/data1a. If /g/data1a filesystem is not available, your job will not start and you will have to submit it again later.

Interactive PBS Jobs

Interactive batch jobs are likely to be used for debugging large or parallel programs etc. Since you want interactive response, it may be necessary to use the express queue to run immediately and avoid your session being suspended. However please note the express queue attracts a higher charging rate, so avoid leaving the session idle.

The -I option for qsub will result in an interactive shell being started on the compute nodes once your job starts.

A submission script cannot be used in this mode – you must provide all qsub options on the command line.

To use X windows in an interactive batch job, eg. if you want to launch a Graphical User Interface (GUI) such as for MATLAB, include the -X option when submitting your job – this will automatically export the DISPLAY environment variable.
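For example, an interactive session with X11 forwarding could be requested as in the sketch below (a99 is a placeholder project code and the resource values are illustrative):

```
qsub -I -X -P a99 -q express -l walltime=2:00:00,ncpus=2,mem=4GB
```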

Your job is subject to all the same constraints and management as any other job in the same queue. In particular, it will be charged on the basis of walltime, the same as any other batch job, since you will have dedicated access to the CPUs reserved for your request.

Don’t forget to exit your interactive batch session when finished, to avoid both leaving CPUs idle on the machine and wasting your grant.

Job Debiting

Once a job is submitted, x = factor * walltime * ncpus [SU] is marked as pending in the accounting system, where factor is the charging rate of the queue to which the job was submitted. This pending x [SU] is reserved from the project grant for the whole execution period. Once the job finishes, the service units (SU) actually used are calculated and debited from the grant of the project that owns the job.
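As a worked sketch of this calculation (the job parameters are illustrative; 3 SU per CPU-hour is the express queue charging rate given above):

```shell
# 16 CPUs in the express queue (3 SU per CPU-hour), 5 hours walltime:
factor=3
ncpus=16
walltime=5
su=$((factor * walltime * ncpus))
echo "${su} SU pending"   # 240 SU reserved until the job finishes
```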

The used SU is NOT debited from the grant UNTIL the job finishes execution. Therefore, when submitting jobs that are expected to run over the boundary between allocation periods (midnight, 00:00:00, at the start of 1 January, 1 April, 1 July or 1 October), you will need to check the grant available in the next quarter.

Bonus Time

Most projects can continue to submit jobs when their time allocation is exhausted – such jobs are called “bonus jobs”.

bonus jobs:

  • queue at a lower priority than other jobs and will generally only run if there are no non-bonus jobs
  • make use of otherwise idle cycles while minimally hindering other jobs

    If a project is in bonus, jobs submitted to an express queue will be moved to a normal queue.

We recommend bonus jobs be as small and quick as possible to maximise the chance that available resources will be found to run them. 

Restrictions on the types of jobs that may run in bonus time are subject to change over time. If your project runs out of allocation we encourage you to contact the scheme manager for that project to obtain additional time if available.

Why isn't my job running?

There are several reasons why your job may not have started.

The first thing to do is run qstat -s jobid (your Job ID will look like 1234.r-man2 on Raijin). This command will print the comments from the job scheduler about your job.

  • If you see “--” after the job, it means the scheduler has not yet considered your job. Please be patient.
  • If you see “Storage resources unavailable”, it means that you have exceeded one of your storage quotas. Run nci_account to get more information.
  • If you see “Waiting for software licenses”, it indicates that all the licenses for a software package you have requested are currently in use.
  • If you see “Not Running: Insufficient amount of resource ncpus” or “Not Running: Insufficient amount of resource job_tags”, it indicates that all CPUs are busy. Please be patient; PBS Pro scheduling is based on the resources available and requested – see our scheduling algorithm for more details. At the beginning and close to the end of each quarter the number of jobs tends to increase significantly compared to the rest of the quarter, so longer waiting times may be expected. You can view live Raijin usage stats at

To ensure your job starts as soon as possible, make your ncpus, walltime, memory and jobfs requests as close to the job's actual requirements as possible; if you request more resources than required, the scheduler may have to wait longer than necessary for those resources to become free.

When running 'qstat' for multiple jobs, add the '-E' option, which improves 'qstat' performance and reduces the load on the PBS server. With this option the jobs in the output are displayed in ascending ID order, so do not use it if you need the output in a different order.

Be cautious about running 'watch qstat' to monitor the status of jobs. Run it only when necessary, and make the polling interval at least 60 seconds, preferably 600, by passing the '-n 600' option to 'watch'.
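For example (the job IDs are illustrative):

```
qstat -E 1234.r-man2 1235.r-man2
watch -n 600 qstat -E 1234.r-man2
```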









/home
  • irreproducible data, eg. source code
  • Raijin only
  • quota: 2GB (user)

/short
  • large data IO, data maintained beyond one job
  • Raijin only
  • quota: 72GB (project)

/g/data
  • processing of large data files

massdata
  • archiving large data files
  • external – access using the mdss command
  • 2 copies kept in two different locations

/jobfs
  • IO intensive data, job lifetime
  • local to each individual Raijin node
  • data kept for the duration of the job


  • Each user belongs to at least two Unix groups:
    • unigrp – determined by their host institution, and
    • projectid(s) – one for each project they are attached to.
  • Increases to quotas in /short, /g/data and massdata will be considered on a case-by-case basis. The /home allocation is fixed and will not be changed.
  • The time limit defines how long after its most recent access (as recorded by the file access timestamp) a file is erased from the filesystem.
  • Please make sure you specify #PBS -lother=gdata1 when submitting jobs accessing files in /g/data1. If /g/data1 filesystem is not available, your job will not start. The following command can be used to monitor the status of /g/data1 on Raijin and can be incorporated inside your jobscript for checking the status of /g/data1:

/opt/rash/bin/modstatus -n gdata1_status

  • Please make sure you specify #PBS -lother=mdss when submitting jobs accessing files in mdss. If mdss filesystem is not available, your job will not start. The following command can be used to monitor the status of mdss on Raijin and can be incorporated inside your jobscript for checking the status of mdss:

/opt/rash/bin/modstatus -n mdss_status

  • Users request allocation of /jobfs as part of their job submission – the actual disk quota for a particular job is given by the jobfs request. Requests larger than 420GB for Sandybridge (copyq, normal, express), 700 GB for the GPU queue, 400GB for everything else (knl, normalbw, expressbw, normalsp, hugemem, gpupascal) will be automatically redirected to /short (but will still be deleted at the end of the job).

See Files and Filesystems for more detail.


Software Environment

At login users will have modules loaded for pbs, openmpi and the Intel Fortran and C compilers.

The module command syntax is the same no matter which command shell you are using.

module avail will show you a list of the software environments which can be loaded via a module load package command.

module help package should give you a little information about what the module load package will achieve for you. Alternatively module show package will detail the commands in the module file.
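A typical module session might look like the sketch below (the package name intel-fc is illustrative – use module avail to see what is actually installed):

```
module avail
module load intel-fc
module list
module show intel-fc
module unload intel-fc
```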

See the Environment Module manual for more details.

Application Software

Access to a licensed third-party software package is granted by joining the appropriate software Unix group. Before that, a user must fulfil all license requirements as stated in the ‘License’ on the third-party software package page in the ‘Software Available‘ Section.

Useful Links
