| Rank | Filesystem | Size (GiB) | Size (per core) | Retention |
|------|------------|------------|-----------------|-----------|
| 1 | /tmp | 300 | 7 GiB | 30 days |
| 2 | /fast/tmp | 7,150 | 178 GiB | 30 days |
| 3 | /gpfs01 | 528,384 | 825 GiB | Determined by QCIF/ARDC |
| 4 | /scratch | 81,917 | 128 GiB | 365 days |
| 5 | /home | 524,285 | 819 GiB | Duration of your association with JCU research |


Usage of /fast/tmp

Please make sure you first create, and place all files in, a folder that matches your jc number, e.g. jcXXXXXXXX
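Creating that folder is a one-liner. A minimal sketch follows; the jc number is a placeholder for your own ID, and BASE defaults to a temporary directory so the sketch runs anywhere (on the cluster you would set BASE=/fast/tmp):

```shell
#!/bin/sh
set -e
# On the cluster: BASE=/fast/tmp.  The mktemp default lets this run locally.
BASE="${BASE:-$(mktemp -d)}"
JCID="jc123456"                        # placeholder for your jcXXXXXXXX ID
mkdir -p "${BASE}/${JCID}"             # personal folder named after your jc number
touch "${BASE}/${JCID}/results.dat"    # stage all job files inside this folder
ls "${BASE}/${JCID}"
```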

The "Size (per core)" column highlights the fact that the HPC cluster is a shared resource. The values are based on capacity (/home = 512 TiB), not free space (/home = 92 TiB on 22-Feb-2021). Should a filesystem fill up, all jobs using that filesystem will be impacted (killing the affected jobs is probably the best remedy). As a result, if you are running an I/O-intensive single-core job that requires 9 GiB of storage, /fast/tmp is the best/safest option.
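Since free space, not capacity, is what determines whether your job is safe, it is worth checking before you submit:

```shell
# Check current free space before submitting an I/O-heavy job.
# On the cluster, list the other mounts from the table as well, e.g.:
#   df -h /tmp /fast/tmp /scratch /home
df -h /tmp
```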


Researchers with I/O-intensive workflows would benefit from using /tmp, /fast/tmp, or /scratch for execution of their workflows.  A hypothetical I/O-intensive workflow PBS script can be found (Example 5) in HPRC HPC PBSPro script files
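The usual pattern looks like the sketch below (this is an illustration, not the actual Example 5): do the heavy I/O in a private working directory on a fast filesystem, copy the results back, then tidy up. On the cluster SCRATCH_BASE would be /fast/tmp/<your jc number>; the mktemp defaults let the sketch run anywhere.

```shell
#!/bin/bash
#PBS -N io_demo
#PBS -l select=1:ncpus=1:mem=2gb
#PBS -l walltime=00:10:00
set -e
# On the cluster: SCRATCH_BASE=/fast/tmp/jcXXXXXXXX (your jc-number folder).
SCRATCH_BASE="${SCRATCH_BASE:-$(mktemp -d)}"
WORKDIR="${SCRATCH_BASE}/${PBS_JOBID:-manual}"
DEST="${PBS_O_WORKDIR:-$(mktemp -d)}"       # where results are copied back to
mkdir -p "$WORKDIR"
cd "$WORKDIR"
echo "demo output" > results.dat            # stand-in for the real workload
cp results.dat "$DEST/"
cd /
rm -rf "$WORKDIR"                           # tidy up ahead of the purge cycle
echo "results copied to $DEST"
```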

IMPORTANT:  Automatic deletion of 'old' files has been scheduled on /tmp and /fast/tmp.  A schedule for /scratch is being considered: a 90-day cycle is the most likely future configuration, although 180-day and 365-day cycles are also under consideration.
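Purges of this kind are typically driven by file modification time, so you can preview what a cycle would sweep up. The command below is an illustration only, not the actual cleanup job; the path and 30-day threshold match the /fast/tmp retention in the table above:

```shell
# List files not modified in the last 30 days -- the kind of criterion
# an 'old file' purge would typically use (illustration only).
find /fast/tmp -type f -mtime +30 -print
```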