There are several reasons that a job could get stuck in the queue.
To look up the reason for yours, please run
$ qstat -u $USER -Esw
The comment printed under the line that starts with the job ID gives a strong hint about the reason.
Please check this page and see if your reason is listed.
For example, the job 1234567 below is on hold because the project xy11 doesn't have sufficient allocation. It needs at least 14.4 kSU available in xy11's compute grant to be considered to run.
$ qstat -sw 1234567

gadi-pbs:
                                                                                                  Req'd  Req'd   Elap
Job ID                         Username        Queue           Jobname         SessID   NDS  TSK  Memory Time  S Time
------------------------------ --------------- --------------- --------------- -------- ---- ----- ------ ----- - -----
1234567.gadi-pbs               abc111          normal-exec     test_2x2          --       30  1440  400gb 05:00 H   --
   Project xy11 does not have sufficient allocation to run job (14.40 KSU required)
To confirm the project's grant position, please run `nci_account -P <project_code>`. For example, for <project_code>=xy11, run
$ nci_account -P xy11
Usage Report: Project=xy11 Period=2020.q3
=============================================================
   Grant:    75.00 KSU
    Used:    64.22 KSU
Reserved:     0.00 SU
   Avail:    10.78 KSU
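As a rough cross-check, you can estimate the job's cost yourself. A minimal sketch, assuming a charge rate of 2 SU per core-hour (the rate varies between queues, so check the documented rate for the queue you use):

$ # 1440 cores x 5 hours walltime x 2 SU per core-hour
$ echo $((1440 * 5 * 2)) SU
14400 SU

That is the 14.40 kSU quoted in the job comment, while only 10.78 kSU remains under Avail, so the job stays on hold.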
There are also other common reasons for jobs not running. Please see below for more information and possible solutions.
Your job is held because the project's storage usage on scratch/gdata is over its disk quota.
Take action
Please run nci_account to find out which limit has been exceeded: Allocation (disk space) or iAllocation (number of files/directories, i.e. inodes).
Project team members will have to clean up that space and bring the storage usage sufficiently under the Allocation/iAllocation limit.
Only after this can the held jobs be released. The sketch below shows some commands that help locate what to clean up.
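A minimal sketch, assuming the `lquota` command available on Gadi and using xy11 as a placeholder project code:

$ lquota                                           # per-project usage vs quota on scratch/gdata
$ find /scratch/xy11 -user $USER -size +1G         # locate your large files
$ find /scratch/xy11 -user $USER -type f | wc -l   # count your files, i.e. inode usage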
Currently, there are not enough CPU cores available to start this job.
Currently, there is not enough memory available to start this job.
This comment suggests there are not enough nodes of the right specification available to start this job.
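To see exactly what the scheduler has to find free, inspect the job's resource request. A sketch using the example job ID from above; the `Resource_List` lines are standard PBS `qstat -f` output, abbreviated here:

$ qstat -f 1234567 | grep Resource_List
    Resource_List.mem = 400gb
    Resource_List.ncpus = 1440
    Resource_List.walltime = 05:00:00

If the request is larger than the job needs, trimming it helps the scheduler find a slot sooner.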
The job is waiting for another job before it will have resources available to run.
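One common way a job ends up waiting on another is an explicit dependency set at submission time. A sketch with a placeholder job ID and script name:

$ qsub -W depend=afterok:1234567 next_step.sh

A job submitted like this stays held until job 1234567 finishes successfully.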
This comment can appear temporarily while the job scheduler reconsiders running the job. When it is not transient, as in the example shown above, it suggests that the project does not have enough service units left under `Avail`.
Jobs whose requested walltime would extend into a scheduled downtime will not be started until the scheduled maintenance finishes. If you know the job won't use all the requested walltime, please request a walltime as close to the actual usage as possible, as in the sketch below.
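For example, if the job actually finishes in about 90 minutes, request two hours rather than the queue maximum. Shortening an already-queued job with `qalter` is standard PBS; the job ID is a placeholder:

In the job script:
#PBS -l walltime=02:00:00

Or for a job already in the queue:
$ qalter -l walltime=02:00:00 1234567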
Jobs that request licensed software through the PBS directive `-lsoftware=<software_string>` may run into this when the LSD record shows there are not enough licence seats available for the job to run.
Most of the time, it is just a matter of waiting a bit longer: once licence seats are released by other jobs, they will serve the next waiting job. To look up how many jobs are waiting ahead of yours, please search for the `<software_string>` on the licence status page.
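For reference, the directive goes in the job script; a sketch with the hypothetical software string abaqus:

#PBS -lsoftware=abaqus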
This comment should be transient. It suggests the job was just scheduled to run, but the compute node assigned to it had issues and was unable to run the job at that time.
This comment appears when a job has had too many failed attempts to start, similar to the error message above, `Execution server rejected request`. PBS tried to start the job several times but failed every time. This indicates that either something is seriously wrong in the job submission script, or every attempted start landed the job on the same failed, but not yet detected, node(s).
The job is put on hold to allow our HPC team to investigate. You can't release the job yourself, but you can certainly submit it again after making sure that the script is OK, as sketched below. If the job is put on hold again, please lodge a ticket with the job IDs, and we will look into the issue for you.
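A sketch of that workflow, with a placeholder job ID and script name:

$ qdel 1234567     # remove the held job
$ # re-check the script: shebang, module loads, paths, resource requests
$ qsub job.sh      # resubmit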
Can't find your error here? Try looking in our PBS FAQ and see if your error is listed there.
If not, please contact the NCI Helpdesk or email help@nci.org.au, and we will endeavour to assist you.