Important: Most software will only use 1 CPU core. Requesting 8 CPU cores for a single-core PAUP job, for example, leaves 7 cores idle while blocking other users from them. Example 1 below is the template most users should base their job scripts on. If in doubt, contact HPRC staff.
For more information about PBSPro, see the PBSPro user guide. HPRC staff can assist researchers who need help with PBS scripts.
Example 1:
The following PBS script requests 1 CPU core, 2GB of memory, and 24 hours of walltime to run `paup -n input.nex`.
```bash
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName1
#PBS -l walltime=24:00:00
#PBS -l select=1:ncpus=1:mem=2gb
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh

echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load paup
paup -n input.nex
```
If the file containing the above content is named `JobName1.pbs`, you simply execute `qsub JobName1.pbs` to place it into the queueing system.
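Once submitted, the job can be watched from a login node. A minimal sketch:

```bash
# Submit the script; qsub prints the job identifier.
qsub JobName1.pbs
# List your own jobs and their states (Q = queued, R = running).
qstat -u $USER
```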
Example 2:
The following PBS script requests 8 CPU cores, 32GB of memory, and 3 hours of walltime to run 8 MATLAB jobs in parallel.

```bash
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName2
#PBS -l walltime=3:00:00
#PBS -l select=1:ncpus=8:mem=32gb
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh

echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load matlab
matlab -r myjob1 &
matlab -r myjob2 &
matlab -r myjob3 &
matlab -r myjob4 &
matlab -r myjob5 &
matlab -r myjob6 &
matlab -r myjob7 &
matlab -r myjob8 &
wait  # Wait for background jobs to finish.
```

If the file containing the above content is named `JobName2.pbs`, you simply execute `qsub JobName2.pbs` to place it into the queueing system.
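Since the scripts follow a common naming pattern, `myjob1.m` through `myjob8.m`, the eight launches can be collapsed into a loop. A sketch, assuming that naming:

```bash
# Start myjob1 .. myjob8 in the background, then wait for all of them.
for i in $(seq 1 8); do
    matlab -r "myjob${i}" &
done
wait
```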
Example 3:
The following PBS script requests 20 CPU cores, 60GB of memory, and 10 days (240 hours) of walltime to run an MPI job.

```bash
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName3
#PBS -l walltime=240:00:00
#PBS -l select=1:ncpus=20:mem=60gb
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh

echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load migrate
module load mpi/openmpi
mpirun -np 20 -machinefile $PBS_NODEFILE migrate-n-mpi ...
```

If the file containing the above content is named `JobName3.pbs`, you simply execute `qsub JobName3.pbs` to place it into the queueing system.
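Rather than hard-coding `-np 20`, the process count can be derived from the allocation itself so the `mpirun` line stays in sync with the `-l select` request. A sketch, assuming the scheduler writes one nodefile line per MPI process (Torque's behaviour with `ppn`; under PBSPro this depends on `mpiprocs`):

```bash
# Count the entries in the nodefile and use that as the process count.
NP=$(wc -l < "$PBS_NODEFILE")
mpirun -np "$NP" -machinefile "$PBS_NODEFILE" migrate-n-mpi ...
```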
Example 4:
The following PBS script uses job arrays. If you aren't proficient with bash scripting, job arrays can be painful to work with. In the example below, each sub-job requests 1 CPU core, 1 GB of memory, and 20 minutes of walltime.
```bash
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N ArrayJob
#PBS -l walltime=20:00
#PBS -l select=1:ncpus=1:mem=1gb
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh

module load matlab
matlab -r myjob$PBS_ARRAYID
```
If the file containing the above content is named `ArrayJob.pbs` and you will be running 32 sub-jobs, you simply use `qsub -t 1-32 ArrayJob.pbs` to place it into the queueing system.
Note: I haven't done extensive testing of job arrays.
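A common pattern is to use the array index to select an input from a list, which avoids needing a numbered MATLAB script per sub-job. A sketch, where `inputs.txt` is a hypothetical file listing one input filename per line:

```bash
# Sub-job N reads line N of inputs.txt.
INPUT=$(sed -n "${PBS_ARRAYID}p" inputs.txt)
echo "Sub-job $PBS_ARRAYID will process $INPUT"
```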
Example 5:
The following script is a rework of Example 2 to use the `/fast` filesystem for a hypothetical workflow that is I/O intensive. This example assumes 1 output file per job.
```bash
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName2
#PBS -l walltime=3:00:00
#PBS -l select=1:ncpus=8:mem=32gb
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh

echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

mkdir -p /fast/jc012345/myjobs
cp -a myjob1.m myjob2.m myjob3.m myjob4.m myjob5.m myjob6.m myjob7.m myjob8.m /fast/jc012345/myjobs/
pushd /fast/jc012345/myjobs

module load matlab
matlab -r myjob1 &
matlab -r myjob2 &
matlab -r myjob3 &
matlab -r myjob4 &
matlab -r myjob5 &
matlab -r myjob6 &
matlab -r myjob7 &
matlab -r myjob8 &
wait  # Wait for background jobs to finish.

cp -a out1.mat out2.mat out3.mat out4.mat out5.mat out6.mat out7.mat out8.mat $PBS_O_WORKDIR/
popd
```
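Note that if several copies of this job run at once, they will all write into the same `/fast/jc012345/myjobs` directory. One option (a sketch; `jc012345` is the example username from above) is to key the scratch directory on the job ID, which is unique per job:

```bash
# Per-job scratch directory avoids collisions between concurrent jobs.
SCRATCH=/fast/jc012345/$PBS_JOBID
mkdir -p "$SCRATCH"
cp -a myjob*.m "$SCRATCH/"
pushd "$SCRATCH"
```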
Brief Details on some extra PBS/Torque directives
| Directive(s) | Description of purpose |
|---|---|
| `-d <PATH>` | Sets the working directory for your job to `<PATH>`. |
| `-o <PATH>` | Explicit specification of the file that will hold the standard output stream from your job. |
| `-v <LIST>`, `-V` | Export environment variables to the batch job. |
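In a job script, these directives sit in the header alongside the others; for example (the output path shown is illustrative):

```bash
#PBS -o /home/jc012345/logs/JobName1.out
#PBS -V
```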
For full details on directives that can be used, run `man qsub` on an HPC login node or look at the online documentation for Torque.
PBS/Torque Variables
| Variable | Description |
|---|---|
| `PBS_JOBNAME` | Job name specified by the user |
| `PBS_O_WORKDIR` | Working directory from which the job was submitted |
| `PBS_O_HOME` | Home directory of the user submitting the job |
| `PBS_O_LOGNAME` | Name of the user submitting the job |
| `PBS_O_SHELL` | Script shell |
| `PBS_JOBID` | Unique PBS job ID |
| `PBS_O_HOST` | Host from which the job was submitted |
| `PBS_QUEUE` | Name of the job queue |
| `PBS_NODEFILE` | File containing a line-delimited list of nodes allocated to the job |
| `PBS_O_PATH` | Path variable used to locate executables within the job script |
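A quick way to see several of these variables from inside a running job script:

```bash
echo "Job $PBS_JOBID ($PBS_JOBNAME) in queue $PBS_QUEUE"
echo "Submitted from $PBS_O_WORKDIR by $PBS_O_LOGNAME"
cat "$PBS_NODEFILE"   # One entry per allocated node/core.
```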
Example CPU+Memory Resource Requests
| CPU cores | Total Memory | PBS/Torque options |
|---|---|---|
| 1 | 1200 MB | `-l select=1:ncpus=1:pmem=1200mb` |
| 1 | 20 GB | `-l select=1:ncpus=1:pmem=20gb` |
| 2 | 6 GB | `-l select=1:ncpus=2:pmem=3gb` |
| 8 | 4 GB | `-l select=1:ncpus=8:pmem=512mb` |
| 24 | 120 GB | `-l select=1:ncpus=24:pmem=5gb` |
The most important thing to note is that the `pmem` parameter is the physical memory per core that your job requires.
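To work out `pmem`, divide the total memory by the number of cores. For example, a 4-core job that needs 16 GB in total would request 16 GB / 4 = 4 GB per core:

```bash
#PBS -l select=1:ncpus=4:pmem=4gb
```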