The following table shows HPC filesystems, ordered by confirmed performance, with some adjustment made for "likely perceived performance".

Rank  Filesystem  Size (GiB)  Size (per core)  Retention
1     /tmp            300        7 GiB         30 days
2     /fast         7,150      178 GiB         30 days
3     /gpfs01     528,384      825 GiB         Determined by QCIF/ARDC
4     /scratch     81,917      128 GiB         365 days
5     /home       524,285      819 GiB         Duration of your association with JCU research

The "Size (per core)" column is used to highlight the fact that the HPC cluster is a shared resource.  The values are based on capacity (/home = 512TiB), not free space (/home = 92TiB on 22-Feb-2021).  Should a filesystem fill up, all jobs using that filesystem will be impacted (killing jobs is probably the best option).  As a result, if you are running an I/O intensive single core job that requires 9GiB of storage,  /fast is the best/safest option.

Most researchers will probably run their computational workflows under /home, simply for convenience. Generally speaking, the performance of a /home filesystem is not part of any purchasing consideration. However, the current /home filesystem is delivered from a storage platform with 14 SSDs and 168 7200 RPM disks.

The /gpfs01 filesystem is only accessible to researchers associated with a QRISCloud/ARDC storage allocation (data housed in/near Brisbane). Its performance ranking is based on files held in the local cache.

Researchers with I/O-intensive workflows would benefit from using /tmp, /fast, or /scratch for execution. A hypothetical I/O-intensive workflow PBS script can be found (Example 5) in HPC PBSPro script files; a rough sketch is also given below.
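The sketch below is not the Example 5 script itself, just an illustration of the staging pattern: copy input onto a fast filesystem, work there, then copy results back to /home. The directory names and the program name (my_analysis) are placeholders to be replaced with your own.

    #!/bin/bash
    #PBS -N io_intensive_demo
    #PBS -l select=1:ncpus=1:mem=8gb
    #PBS -l walltime=12:00:00

    # Hypothetical paths - adjust to your own data layout
    SRC=$HOME/project/input
    WORK=/fast/$USER/$PBS_JOBID

    mkdir -p "$WORK"
    cp -r "$SRC" "$WORK/"              # stage input onto the fast filesystem
    cd "$WORK"

    my_analysis input/ > results.out   # placeholder for the real command

    cp results.out "$HOME/project/"    # copy results back to /home
    rm -rf "$WORK"                     # clean up so /fast does not fill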

IMPORTANT:  Automatic deletion of 'old' files has been scheduled on /tmp and /fast. A schedule for /scratch is under consideration - a 90-day cycle is the most likely future configuration, although 180-day and 365-day cycles are also being considered.
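If you want to anticipate such a cleanup, a quick way to list your older files is shown below. This assumes per-user directories under /scratch, uses the mooted 90-day value, and measures age by modification time; all three are assumptions, not a description of the actual deletion policy.

    # List your /scratch files not modified in the last 90 days
    find /scratch/$USER -type f -mtime +90 -ls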
