|Rank||Filesystem||Size (GiB)||Size (per core)||Retention|
|1|| ||300||7 GiB||30 days|
|2|| ||7,150||178 GiB||30 days|
|3|| ||528,384||825 GiB||Determined by QCIF/ARDC|
|4|| ||81,917||128 GiB||365 days|
|5|| ||524,285||819 GiB||Duration of your association with JCU research|
Usage of /fast/tmp
Please make sure you first create, and place all files in, a folder that matches your jc number, e.g. jcXXXXXXXX
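A minimal sketch of this step. On the cluster the base directory would be /fast/tmp and the folder name your own jc number; here the base is a throwaway directory so the commands can be run anywhere:

```shell
# On the HPC cluster you would run:  mkdir -p /fast/tmp/jcXXXXXXXX
# (substitute your own jc number, e.g. "$USER" if your login is your jc number).
# For illustration, the same pattern against a throwaway base directory:
BASE="$(mktemp -d)"      # stands in for /fast/tmp
mkdir -p "$BASE/$USER"   # your personal folder, named after your login
ls -d "$BASE/$USER"      # confirm the folder exists before writing files to it
```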
The "Size (per core)" column highlights the fact that the HPC cluster is a shared resource. The values are based on capacity (/home = 512 TiB), not free space (/home = 92 TiB on 22-Feb-2021). Should a filesystem fill up, all jobs using that filesystem will be impacted (killing affected jobs is probably the best option). As a result, if you are running an I/O-intensive single-core job that requires 9 GiB of storage, /fast/tmp is the best and safest option.
Researchers with I/O-intensive workflows would benefit from using /scratch for their execution. A hypothetical I/O-intensive workflow PBS script can be found (Example 5) in HPRC HPC PBSPro script files
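A minimal sketch of such a job script, illustrating the usual stage-in/compute/stage-out pattern on /scratch. The resource requests, file names, and program name are illustrative assumptions only; see Example 5 in the HPRC HPC PBSPro script files for a complete workflow:

```shell
#!/bin/bash
#PBS -N io_demo
#PBS -l select=1:ncpus=1:mem=2gb
#PBS -l walltime=1:00:00
# Hypothetical I/O-intensive workflow: stage input to /scratch, do all
# heavy reads/writes there, then copy results back to the submit directory.
SCRATCHDIR="/scratch/$USER/$PBS_JOBID"
mkdir -p "$SCRATCHDIR"
cp "$PBS_O_WORKDIR/input.dat" "$SCRATCHDIR/"   # hypothetical input file
cd "$SCRATCHDIR"
./my_program input.dat > output.dat            # hypothetical executable
cp output.dat "$PBS_O_WORKDIR/"                # stage results out
rm -rf "$SCRATCHDIR"                           # clean up scratch when done
```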
IMPORTANT: Scheduled deletion of 'old' files is now in effect on /fast/tmp. A deletion schedule for /scratch is being considered - a 90-day cycle is the most likely future configuration, however 180-day and 365-day cycles are also being considered.
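To see which of your files a deletion cycle of, say, 90 days would target, `find` with a modification-time filter works. The directory is parameterised here so the snippet can run anywhere; on the cluster you would point it at your folder under /fast/tmp or /scratch:

```shell
# List files not modified in the last 90 days -- the kind of 'old' files
# a scheduled deletion cycle targets. DIR defaults to the current directory;
# on the cluster, set it to your personal folder, e.g. /fast/tmp/jcXXXXXXXX.
DIR="${DIR:-.}"
find "$DIR" -type f -mtime +90 -print
```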