
The following table shows HPC filesystems, ordered by confirmed performance.  Some adjustment has been made for "likely perceived performance".

Rank | Filesystem | Size (GiB) | Size per core (GiB) | Retention
-----|------------|------------|---------------------|----------
1    | /tmp       | 300        | 7                   | 30 days
2    | /fast      | 7,150      | 178                 | 30 days
3    | /gpfs01    | 528,384    | 825                 | Determined by QCIF/ARDC
4    | /scratch   | 81,917     | 128                 | 365 days
5    | /home      | 524,285    | 819                 | Duration of your association with JCU research

The "Size (per core)" column is used to highlight the fact that the HPC cluster is a shared resource.  The values are based on capacity (/home = 512TiB), not free space (/home = 92TiB on 22-Feb-2021).  Should a filesystem fill up, all jobs using that filesystem will be impacted (killing jobs is probably the best option).  As a result, if you are running an I/O intensive single core job that requires 9GiB of storage,  /fast is the best/safest option.

Most researchers will probably run their computational workflows under /home, for simplicity.  Generally speaking, the performance of a /home filesystem is not part of any purchasing consideration.  However, the current /home filesystem is delivered from a storage platform with 14 SSDs and 168 7200 RPM disks.

The /gpfs01 filesystem is only accessible to researchers associated with a QRISCloud/ARDC storage allocation (data housed in/near Brisbane).  Its performance ranking is based on files held in the local cache.

Researchers with I/O intensive workflows would benefit from using /tmp, /fast, or /scratch for execution of their workflows.  A hypothetical I/O intensive workflow PBS script can be found (Example 5) in HPRC PBSPro script files.
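
For readers who have not yet looked at that page, the sketch below illustrates the general staging pattern such a script tends to follow: copy inputs onto a fast filesystem, run the work there, then copy results back to /home.  It is not the contents of Example 5; the resource requests, paths, and the program name my_analysis are placeholders.

    #!/bin/bash
    #PBS -N io_intensive_job
    #PBS -l select=1:ncpus=1:mem=8gb
    #PBS -l walltime=02:00:00

    # Stage input data onto fast storage, run the job there, then copy
    # results back to /home.  Paths and program names are placeholders.
    WORKDIR=/fast/$USER/$PBS_JOBID
    mkdir -p "$WORKDIR"
    cp ~/project/input.dat "$WORKDIR"/

    cd "$WORKDIR"
    my_analysis input.dat > output.dat    # replace with your actual program

    # Copy results back to permanent storage and clean up the fast filesystem.
    cp output.dat ~/project/results/
    rm -rf "$WORKDIR"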

IMPORTANT:   Automatic deletion of 'old' files has been scheduled on /tmp and /fast.  A deletion schedule for /scratch is being considered; 365 days is one of the options under consideration.
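
If you want an idea of which of your files would be affected by such a schedule, a find command along the following lines could be used.  The /fast/$USER location is an assumption about where your files live; adjust it to your actual directory.

    # List files under your /fast area not modified for more than 25 days,
    # i.e. approaching an assumed 30-day deletion threshold.
    find /fast/$USER -type f -mtime +25 -ls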
