The following table shows HPC filesystems, ordered by confirmed performance, with some adjustment made for "likely perceived performance".
| Rank | Filesystem | Size (GiB) | Retention |
|---|---|---|---|
| 1 | /tmp | 300 | 30 days |
| 2 | /fast | 7,150 | 30 days |
| 3 | /gpfs01 | 528,384 | Determined by QCIF/ARDC |
| 4 | /scratch | 81,917 | 365 days |
| 5 | /home | 524,285 | Duration of your association with JCU research |
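If you want to confirm the capacity and current usage of these filesystems for yourself, a standard `df` query from a login node should work (the exact mount points and output format may differ on your system):

```
# Report size, used, and available space for the filesystems listed above
df -h /tmp /fast /scratch /home
```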
Most researchers will probably simply run their computational workflows under `/home` for simplicity. Generally speaking, the performance of a `/home` filesystem is not a factor in purchasing considerations.
The `/gpfs01` filesystem is only accessible to researchers associated with a QRISCloud/ARDC storage allocation (data housed in/near Brisbane). Its performance ranking is based on files held on the local cache.
Researchers with I/O-intensive workflows would benefit from using `/tmp`, `/fast`, or `/scratch` to execute their workflows. A typical I/O-intensive workflow (commands in a PBS script file) should follow these steps (see the sketch after this list):
- Create a subdirectory in which to run your workflow (e.g., `mkdir -p /fast/jc012345/workflow_name`).
- Copy input files into the created subdirectory.
- Execute the job(s) associated with your workflow.
- Copy files that have long-term value back to somewhere within your home directory.
- Remove the subdirectory that you created (e.g., `rm -rf /fast/jc012345/workflow_name`).
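A minimal PBS script sketch of the steps above is shown below. The job name, resource requests, input/output paths, and `run_analysis.sh` are placeholders rather than site-specific values; adjust them to your own JC number and workflow, and check your HPC documentation for the exact resource syntax in use.

```
#!/bin/bash
#PBS -N workflow_name
#PBS -l select=1:ncpus=4:mem=16gb
#PBS -l walltime=02:00:00

# Hypothetical locations -- adjust to your own username and workflow.
WORKDIR=/fast/jc012345/workflow_name
RESULTS="$HOME/results/workflow_name"

# 1. Create a working subdirectory on the fast filesystem.
mkdir -p "$WORKDIR"

# 2. Copy input files into the working subdirectory.
cp "$HOME"/inputs/workflow_name/* "$WORKDIR"/

# 3. Execute the workflow from the working subdirectory.
cd "$WORKDIR"
./run_analysis.sh    # placeholder for your actual commands

# 4. Copy files with long-term value back under your home directory.
mkdir -p "$RESULTS"
cp "$WORKDIR"/output* "$RESULTS"/

# 5. Remove the working subdirectory.
cd "$HOME"
rm -rf "$WORKDIR"
```

Submitted with `qsub`, a script like this keeps the heavy I/O on the faster filesystem, with only the results of long-term value landing under `/home`.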