The following table shows HPC filesystems, ordered by confirmed performance. Some adjustment has been made for "likely perceived performance".
| Rank | Filesystem | Size (GiB) | Size (per core) | Retention |
|------|------------|------------|------------------|-----------|
| 1 |  | 300 | 7 GiB | 30 days |
| 2 |  | 7,150 | 178 GiB | 30 days |
| 3 | /gpfs01 | 528,384 | 825 GiB | Determined by QCIF/ARDC |
| 4 |  | 81,917 | 128 GiB | 365 days |
| 5 | /home | 524,285 | 819 GiB | Duration of your association with JCU research |
The "Size (per core)" column is used to highlight the fact that the HPC cluster is a shared resource. Should a filesystem fill up, all jobs using that filesystem will be impacted (killing jobs is probably the best option). As a result, if you are running an I/O intensive single core job that requires 9GiB of storage,
/fast is the best/safest option.
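Before starting an I/O intensive job, it can be worth checking how full the candidate filesystems already are. The commands below are a minimal sketch using standard Linux tools; the per-user directory layout (e.g. `/fast/$USER`) is an assumption for illustration only and may not match the layout on the cluster.

```bash
# Check current capacity/usage of the shared filesystems listed in the table above.
df -h /fast /scratch /home

# Check how much space your own files are consuming on a given filesystem
# (replace /fast/$USER with your actual directory on that filesystem).
du -sh /fast/$USER
```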
Most researchers will probably simply run their computational workflows under /home, for simplicity. Generally speaking, the performance of a /home filesystem is not part of any purchasing consideration. However, the current /home filesystem is delivered from a storage platform with 14 SSDs and 168 7,200 RPM disks.
The /gpfs01 filesystem is only accessible to researchers associated with a QRISCloud/ARDC storage allocation (data is housed in/near Brisbane). Its performance ranking is based on files held in the local cache.
Researchers with workflows that are I/O intensive would benefit from using /scratch for execution of their workflows. A hypothetical I/O intensive workflow PBS script can be found (Example 5) in HPRC PBSPro script files.
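The script below is not the Example 5 script referenced above; it is a minimal sketch of how an I/O intensive PBSPro job might stage data onto /scratch, run there, and copy results back. The per-user work area (`/scratch/$USER`), the input/output file names, and the `myprog` executable are assumptions for illustration only.

```bash
#!/bin/bash
#PBS -N io_intensive_demo
#PBS -l select=1:ncpus=1:mem=8gb
#PBS -l walltime=02:00:00

# Hypothetical per-user work area on /scratch; adjust to the layout used on the cluster.
WORKDIR=/scratch/$USER/$PBS_JOBID
mkdir -p "$WORKDIR"

# Stage input data from the submission directory (typically on /home) to /scratch.
cp "$PBS_O_WORKDIR"/input.dat "$WORKDIR"/

# Run the I/O intensive step against the staged copy ("myprog" is a placeholder).
cd "$WORKDIR"
myprog --input input.dat --output results.dat

# Copy results back to the submission directory and clean up the work area.
cp results.dat "$PBS_O_WORKDIR"/
rm -rf "$WORKDIR"
```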