
The current default walltime and resource limits for Gadi jobs are summarised below. If a higher limit on core count (PBS_NCPUS) or walltime is needed, please lodge a ticket with the NCI help desk, including a short description of why the exception is requested: for example, that your simulation uses a solver in a particular application that does not support checkpointing, or that a scalability study shows linear speedup at core counts beyond the current PBS_NCPUS limit. We will see how we can help on a case-by-case basis.

| Queue | Max queueing jobs per project | Charge rate per resource*hour ‡ | PBS_NCPUS | Max PBS_MEM/node † | Max PBS_JOBFS/node † | Default walltime limit |
|---|---|---|---|---|---|---|
| normal(route)§ | 1000 | 2 SU | 1-48, or a multiple of 48 | 190 GB | 400 GB | 48 hours for 1-672 cores; 24 hours for 720-1440 cores; 10 hours for 1488-2976 cores; 5 hours for 3024-20736 cores |
| normal-exec§ | 300 | | | | | |
| express(route) | 1000 | 6 SU | 1-48, or a multiple of 48 | 190 GB | 400 GB | 24 hours for 1-480 cores; 5 hours for 528-3168 cores |
| express-exec | 50 | | | | | |
| hugemem(route) | 1000 | 3 SU | 1-48, or a multiple of 48 | 1470 GB | 1400 GB | 48 hours for 1-48 cores; 24 hours for 96 cores; 5 hours for 144 or 192 cores |
| hugemem-exec | 50 | | | | | |
| megamem(route) | 300 | 5 SU | 1-48, or a multiple of 48 | 2990 GB | 1400 GB | 48 hours for 1-48 cores; 24 hours for 96 cores |
| megamem-exec | 50 | | | | | |
| gpuvolta(route) | 1000 | 3 SU | multiple of 12 | 382 GB | 400 GB | 48 hours for 1-96 CPU cores; 24 hours for 144-192 CPU cores; 5 hours for 240-960 CPU cores |
| gpuvolta-exec | 50 | | | | | |
| normalbw(route) | 1000 | 1.25 SU | 1-28, or a multiple of 28 | 128 GB or 256 GB | 400 GB | 48 hours for 1-336 cores; 24 hours for 364-840 cores; 10 hours for 868-1736 cores; 5 hours for 1764-10080 cores |
| normalbw-exec | 300 | | | | | |
| expressbw(route) | 1000 | 3.75 SU | 1-28, or a multiple of 28 | 128 GB or 256 GB | 400 GB | 24 hours for 1-280 cores; 5 hours for 308-1848 cores |
| expressbw-exec | 50 | | | | | |
| normalsl(route) | 1000 | 1.5 SU | 1-32, or a multiple of 32 | 192 GB | 400 GB | 48 hours for 1-288 cores; 24 hours for 320-608 cores; 10 hours for 640-1984 cores; 5 hours for 2016-3200 cores |
| normalsl-exec | 300 | | | | | |
| hugemembw(route) | 500 | 1.25 SU | 7, 14, 21, or 28, or a multiple of 28 | 1020 GB | 390 GB | 48 hours for 1-28 cores; 12 hours for 56-140 cores |
| hugemembw-exec | 100 | | | | | |
| megamembw(route) | 300 | 1.25 SU | 32 or 64 | 3000 GB | 800 GB | 48 hours for 32 cores; 12 hours for 64 cores |
| megamembw-exec | 50 | | | | | |
| copyq(route) | 1000 | 2 SU | 1 | 190 GB | 400 GB | 10 hours |
| copyq-exec | 50 | | | | | |

(Blank cells in the `-exec` rows share the values of the route-queue row above them.)

† To make sure your jobs can be handled properly when they are terminated for exceeding their memory and/or local disk limits, please request no more than the amounts listed in the corresponding columns.

‡ The number of `resource`s charged is calculated as ncpus_request * max[1, (ncpus_per_node/mem_per_node) * (mem_request/ncpus_request)]; that is, jobs requesting more memory per core than the node average are charged as if they had requested the correspondingly larger core count.
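For example, on the normal queue (48 cores and 190 GB of memory per node), a job requesting ncpus=1 and mem=8GB is charged for 1 * max[1, (48/190) * (8/1)] ≈ 2.02 resources, i.e. about 4.04 SU per hour of walltime at the 2 SU charge rate.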

§ The route queue is where jobs wait before they are moved to the corresponding execution queue. Only jobs in the execution queues, whose names end with `-exec`, are considered by PBS for running on the compute and data-mover nodes.
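For illustration, a minimal submission script that stays within the normal-queue limits in the table above might look like the following sketch. The project code xy00 and the executable name are placeholders, and your job may need further options (for example, storage directives):

```
#!/bin/bash
#PBS -P xy00                # placeholder project code
#PBS -q normal              # routing queue; PBS moves the job to normal-exec
#PBS -l ncpus=48            # one full node; above 48, must be a multiple of 48
#PBS -l mem=190GB           # at most the Max PBS_MEM/node limit
#PBS -l jobfs=400GB         # at most the Max PBS_JOBFS/node limit
#PBS -l walltime=48:00:00   # within the default limit for 1-672 cores
#PBS -l wd                  # start the job in the submission directory

./my_program                # placeholder executable
```

Submitting this script with qsub places the job in normal(route); PBS then moves it to normal-exec to run, as described in the § note above.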



