
Cost Breakdown
On Gadi, all compute jobs are charged based on the resources reserved for the job and the amount of walltime the job uses. The reserved resources are currently calculated as the maximum of either the number of CPUs requested or the proportion of node memory requested.

Every job run on Gadi is charged using the formula

Job Cost (SU) = Queue Charge Rate  ✕  Max(NCPUs, Memory Proportion)  ✕  Walltime Used (Hours)

Jobs that run on Gadi’s normal queue are charged 2 SU to run 1 CPU with 4 GiB of memory for one hour.
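As a rough sketch of how the formula can be applied to the normal queue, the short Python snippet below (the function name is illustrative, not an NCI tool) uses the 2 SU charge rate and 4 GiB of memory per core described above:

```python
# Hypothetical helper illustrating the charging formula for the normal queue:
# cost = charge rate x max(NCPUs, memory proportion) x walltime (hours).
NORMAL_CHARGE_RATE_SU = 2      # SU per core-hour in the normal queue
NORMAL_MEM_PER_CORE_GIB = 4    # memory is charged in 4 GiB blocks per core

def normal_job_cost_su(ncpus, mem_request_gib, walltime_hours):
    memory_proportion = mem_request_gib / NORMAL_MEM_PER_CORE_GIB
    return NORMAL_CHARGE_RATE_SU * max(ncpus, memory_proportion) * walltime_hours

print(normal_job_cost_su(1, 4, 1))   # 2.0 SU: 1 CPU with 4 GiB for one hour
```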

For example, if a job has requested 4 CPUs, with 16 GiB of memory, for 5 hours of walltime, the calculation would be

4 CPUs x 5 hours x 2 SU per hour = 40 SU
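Written out as a quick Python check (illustrative only), the memory proportion here is 16 ÷ 4 = 4, so the CPU request determines the cost:

```python
# 4 CPUs, 16 GiB, 5 hours in the normal queue (charge rate 2 SU)
memory_proportion = 16 / 4                 # 4 blocks of 4 GiB
print(2 * max(4, memory_proportion) * 5)   # 40.0 SU
```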

However, some jobs will request fewer CPUs and more memory. When this happens, you are taking memory away from the other CPUs in the node and will be charged accordingly, as other users can’t access those CPUs while your job is using that memory allocation.

Memory is charged in 4 GiB blocks, so 16 GiB = 4 blocks, 40 GiB = 10 blocks, and so on.

For example, let’s say a job has requested 8 CPUs, 192 GiB of memory, and 5 hours of walltime in the normal queue.

They would normally be allocated 32 GiB of memory (8 CPUs x 4 GiB), but they have requested 192 GiB, which is an entire compute node’s worth of memory. Instead of being charged per CPU, they will be charged based on their memory request. Remembering that memory is charged in 4 GiB blocks, 192 ÷ 4 = 48, so the calculation would be

48 memory blocks x 5 hours x 2 SU per hour = 480 SU
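The same quick check in Python (again illustrative only) shows the memory proportion of 48 outweighing the 8 CPUs requested:

```python
# 8 CPUs, 192 GiB, 5 hours in the normal queue (charge rate 2 SU)
memory_proportion = 192 / 4                 # 48 blocks of 4 GiB, a full node's memory
print(2 * max(8, memory_proportion) * 5)    # 480.0 SU
```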

This charging model ensures that jobs requesting a large share of a node's memory with only a small number of CPUs (effectively preventing other jobs from running on the remaining CPUs, as there is no memory left for them) are charged according to the resources they are effectively using.

For the most cost effective job, you should request only the resources you require.

Queue Charge Rate = the charge rate for the queue, as listed at Queue Limits. Note that express queues increase job priority, while also increasing the job’s cost.

NCPUs = the number of CPUs requested for the job with the PBS `-l ncpus` request.

Memory Proportion = Memory requested ÷ Memory per core, where Memory per core = Memory per node ÷ NCPUs per node for the queue.
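Putting these definitions together, a more general sketch might take the queue's charge rate and per-node resources as parameters (the function and parameter names below are hypothetical). The gpuvolta figures used in the example call, 48 CPUs and 382 GB of memory per node with a charge rate of 3, are taken from the table that follows:

```python
def job_cost_su(charge_rate, ncpus, mem_request_gb, walltime_hours,
                mem_per_node_gb, ncpus_per_node):
    """Job Cost (SU) = Queue Charge Rate x Max(NCPUs, Memory Proportion) x Walltime (hours)."""
    mem_per_core_gb = mem_per_node_gb / ncpus_per_node
    memory_proportion = mem_request_gb / mem_per_core_gb
    return charge_rate * max(ncpus, memory_proportion) * walltime_hours

# gpuvolta example: 12 CPUs (1 GPU) and 90 GB for 5 hours; the CPU request dominates.
print(job_cost_su(3, 12, 90, 5, mem_per_node_gb=382, ncpus_per_node=48))   # 180.0 SU
```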

Job cost examples


| Queue | CPUs requested | GPUs requested | Memory requested (GB) | Walltime used | Cost | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| normal | 4 | n.a. | 16 | 5 hours | 4 x 5 x 2 = 40 SU | Charged on CPU request |
| normal | 8 | n.a. | 16 | 5 hours | 8 x 5 x 2 = 80 SU | Charged on CPU request |
| normal | 8 | n.a. | 128 | 5 hours | 32 x 5 x 2 = 320 SU | Charged on memory request |
| normal | 8 | n.a. | 192 | 5 hours | 48 x 5 x 2 = 480 SU | Charged on memory request |
| express | 8 | n.a. | 16 | 5 hours | 8 x 5 x 6 = 240 SU | Express charge rate is 6.0 |
| gpuvolta | 12 | 1* | 90 | 5 hours | 12 x 5 x 3 = 180 SU | Charged on CPU request. *Because of the `ncpus = 12 x ngpus` constraint in GPU job requests, ngpus determines the cost through ncpus in the equation. |
| gpuvolta | 12 | 1* | 380 | 5 hours | 12 x max[1, (48/382) x (380/12)] x 5 x 3 = 716.23 SU (rounded) | Charged on memory request |
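To make the last row easier to follow, here are the same numbers worked through in Python (the 48 CPUs and 382 GB per gpuvolta node come from the expression in the table):

```python
# gpuvolta: 12 CPUs, 380 GB, 5 hours at charge rate 3; the node has 48 CPUs and 382 GB
memory_proportion = 380 / (382 / 48)                   # ~47.75, well above the 12 CPUs requested
print(round(3 * max(12, memory_proportion) * 5, 2))    # 716.23 SU
```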

An in-depth breakdown of Gadi queue limits can be found on our queue limits page.

If you are still having trouble, please contact the NCI Helpdesk or email us at help@nci.org.au

Authors: Yue Sun, Andrew Wellington, Adam Huttner-Koros, Mohsin Ali, Javed Shaikh, Andrew Johnston