The following table shows the HPC filesystems, ordered by confirmed performance. Some adjustment has been made for "likely perceived performance".
Rank | Filesystem | Size (GiB) | Size (per core) | Retention
---|---|---|---|---
1 | /tmp | 300 | 7 GiB | 30 days |
2 | /fast | 7,150 | 178 GiB | 30 days |
3 | /gpfs01 | 528,384 | 825 GiB | Determined by QCIF/ARDC |
4 | /scratch | 81,917 | 128 GiB | 365 days |
5 | /home | 524,285 | 819 GiB | Duration of your association with JCU research |
The "Size (per core)" column is used to highlight the fact that the HPC cluster is a shared resource. The values are based on capacity (/home = 512TiB), not free space (/home = 92TiB on 22-Feb-2021). Should a filesystem fill up, all jobs using that filesystem will be impacted (killing jobs is probably the best option). As a result, if you are running an I/O intensive single core job that requires 9GiB of storage, /fast
is the best/safest option.
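Since the table reflects capacity rather than current free space, it is worth checking how much room a filesystem actually has before launching an I/O-intensive job. A minimal check from a login node, assuming standard GNU coreutils (the /fast/$USER path is illustrative; substitute your own working directory):

```bash
# Show capacity, used, and available space for the candidate filesystems
df -h /tmp /fast /scratch /home

# Show how much space your own files are consuming under /fast
du -sh /fast/$USER
```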
Most researchers will simply run their computational workflows under /home, for simplicity. Generally speaking, the performance of a /home filesystem will not be part of any purchasing consideration. However, the current /home filesystem is delivered from a storage platform with 14 SSDs and 168 7200RPM disks.
The /gpfs01 filesystem is only accessible to researchers associated with a QRISCloud/ARDC storage allocation (data housed in/near Brisbane). Its performance ranking is based on files held on the local cache.
Researchers with I/O-intensive workflows would benefit from using /tmp, /fast, or /scratch for execution of their workflows; a sketch of this pattern follows below. A hypothetical I/O-intensive workflow PBS script can be found (Example 5) in HPC PBSPro script files.
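As an illustration of the stage-in/compute/stage-out pattern these filesystems suit, here is a minimal sketch of a single-core PBS Pro job that copies its input to /fast, runs there, and copies results back to /home. The program name (myprog), file names, directory layout, and resource requests are all placeholders; refer to Example 5 in HPC PBSPro script files for a supported example.

```bash
#!/bin/bash
#PBS -N io_intensive_demo
#PBS -l select=1:ncpus=1:mem=4gb
#PBS -l walltime=02:00:00

# Stage input data onto the fast filesystem (paths are illustrative)
WORKDIR=/fast/$USER/$PBS_JOBID
mkdir -p "$WORKDIR"
cp "$HOME/project/input.dat" "$WORKDIR/"

# Run the I/O-intensive step on /fast rather than /home
cd "$WORKDIR"
"$HOME/project/myprog" input.dat > output.dat

# Stage results back to /home and clean up the fast working area
cp output.dat "$HOME/project/"
rm -rf "$WORKDIR"
```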
IMPORTANT: Automatic deletion of 'old' files has been scheduled on /tmp and /fast. A schedule for /scratch is still being decided; a 90-day cycle is the most likely future configuration, although 180-day and 365-day cycles are also under consideration.
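If you want to see which of your files would be affected by such a policy, a standard find command can list anything not modified within the cycle. The path and the 90-day threshold below are illustrative only:

```bash
# List your files under /scratch that have not been modified for more than 90 days
find /scratch/$USER -type f -mtime +90 -ls
```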