Updated:  – DRAFT

A new file management policy for the Gadi scratch file system will be progressively introduced in May-June 2022, with full implementation from 1 July 2022. This policy will ensure fairer use of temporary scratch storage across all NCI projects. It will also support essential tuning and reconfiguration of the /scratch file system for full production, peak-performance operation.

The new automated process for scratch file management will remove files based on access time (atime). Implementation will be staged according to the following schedule.

  1.  : Files with an atime greater than 365 days will be quarantined.
  2.  : Files with an atime greater than 100 days will be retired.
  3. From  , files with an atime greater than 100 days will be retired on a continuous basis.
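Files at risk under this schedule can be identified ahead of time by checking access times. As an illustrative sketch (not an official NCI tool; the directory path is a placeholder), the following Python lists files under a directory whose atime is older than a given number of days:

```python
import os
import time

def files_older_than(root, days):
    """Return paths under `root` whose last access time (atime)
    is more than `days` days in the past."""
    cutoff = time.time() - days * 86400  # 86400 seconds per day
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    stale.append(path)
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return stale

# Example (path is illustrative; substitute your own scratch directory):
# for path in files_older_than("/scratch/ab12", 100):
#     print(path)
```

Note that tools such as `tar` or backup scans can update atime, so a file's atime may be more recent than its last genuine use.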

Users can identify and restore quarantined files using the nci-file-expiry command. See the attached document for more information about the nci-file-expiry utility.

Important points to note about the new /scratch file management process:

  • The /scratch file system is intended for temporary, working storage. For persistent storage, use the /g/data or massdata systems.
  • All projects with active NCMAS allocations now have /g/data directories. Default allocation is 2.5 GB/KSU.
  • Stakeholder projects will receive /g/data allocations according to their entitlements and demand. To request a /g/data allocation, please contact your scheme allocation manager. NCI (help@nci.org.au) can put you in touch with the appropriate scheme manager if needed.
  • Project default scratch quotas will be raised at the time of July quarterly maintenance. (Note that default quotas are still necessary for file system safety.)
  • Projects expecting to use large amounts of scratch capacity will still need to request appropriate quotas. Consultation with NCI HPC and Storage groups may be needed for projects using peak-scale scratch capacity.
  • Large scratch requests (e.g. >= 10 TB) from projects with compute allocations of less than 1 MSU/year, or without demonstrated track records, will be accommodated in phases with advice from NCI Storage and HPC groups.
  • To ensure fair access to /scratch capacity, exceptions to the scratch file expiry policy will not be permitted. If you need advice or assistance to prepare for the full implementation in  (TBC), contact NCI user support: help@nci.org.au.
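The default /g/data allocation noted above scales linearly with a project's compute grant at 2.5 GB per KSU. A quick sketch of that arithmetic (the 2.5 GB/KSU rate is from this policy; the example grant size is illustrative):

```python
GDATA_GB_PER_KSU = 2.5  # default /g/data allocation rate (GB per KSU)

def default_gdata_gb(ksu):
    """Default /g/data allocation in GB for a compute grant given in KSU."""
    return ksu * GDATA_GB_PER_KSU

# An illustrative 400 KSU NCMAS grant would receive 1000 GB (~1 TB) by default.
print(default_gdata_gb(400))  # → 1000.0
```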
