Important:  Most software will only consume 1 CPU core - e.g., requesting 8 CPU cores for a PAUP job blocks other people from using the unused 7 CPU cores.  Example 1 below is likely the one most users should base their job scripts on.  If in doubt, contact HPRC staff.


For more information about PBSPro, please see the PBSPro user guide.  For a brief description of the PBS directives used in the examples below, see the "Brief Explanation of PBS directives used in the examples above" section immediately following the final example PBS script.

HPC staff should be able to assist researchers needing help with PBS scripts.


Singularity Example

The following PBS script requests 1 CPU core, 2GB of memory, and 24 hours of walltime.


This example runs R (provided via a Singularity container module) from the current working directory. A file (here we'll name it pbsjob) is created with the contents:

No Format

#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName1
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=24:00:00
#PBS -l select=1:ncpus=1:mem=2gb

cd $PBS_O_WORKDIR

# Add this to all of your scripts
shopt -s expand_aliases
source /etc/profile.d/modules.sh
 
# Output some useful information about
# the job we are running
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"
 
# Load the container we want to use
# It is a good idea to always specify the version number 
module load R/4.1.2u1

# Run your code
R ...    # Replace ... with your arguments & options.
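
To submit this job for execution (assuming the script was saved as pbsjob, as above), use the same qsub command shown for the module examples below:

No Format
qsub pbsjob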

Module Examples


Example 1:

The following PBS script requests 1 CPU core, 2GB of memory, and 24 hours of walltime for running "paup -n input.nex".

No Format
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName1
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=24:00:00
#PBS -l select=1:ncpus=1:mem=2gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load paup/4b10
paup -n input.nex

To submit the job for execution on an HPRC compute node, simply enter the command:

No Format
qsub pbsjob

If you know this job will require more than 4GB but less than 8GB of RAM, you could use the command:

No Format
qsub -l nodes=1:ppn=2 pbsjob

If you know this job will require more than 8GB but less than 16GB of RAM, you could use the command:

No Format
qsub -l nodes=1:ppn=8 pbsjob

The reason for the special cases (latter two) is to guarantee memory resources for your job. If memory on a node is overallocated, swap will be used. Job(s) that are actively using swap (disk) to simulate memory could take more than 1000 times longer to finish than a job running on dedicated memory. In most cases, this will mean your job will never finish.
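
Note that the nodes/ppn syntax above is the older Torque form.  A rough equivalent using the PBSPro select syntax shown elsewhere on this page is to request the larger memory amount explicitly on the command line (values below are illustrative):

No Format
qsub -l select=1:ncpus=1:mem=8gb pbsjob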


If the file containing the above content has a name of JobName1.pbs, you simply execute qsub JobName1.pbs to place it into the queueing system.

Example 3:

The following PBS script requests 20 CPU cores, 60GB of memory, and 10 days of walltime for running an MPI job.

No Format
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName3
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=240:00:00
#PBS -l select=1:ncpus=20:mem=60gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load migrate
module load mpi/openmpi
mpirun -np 20 -machinefile $PBS_NODEFILE migrate-n-mpi ...

If the file containing the above content has a name of JobName3.pbs, you simply execute qsub JobName3.pbs to place it into the queueing system.
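
As an optional refinement (a sketch only, not required): under Torque, $PBS_NODEFILE contains one line per allocated core, so the MPI rank count can be derived from it rather than hard-coded.

No Format
# Assumes a Torque-style node file (one line per allocated core); adjust if your node file differs.
NCPUS=$(wc -l < $PBS_NODEFILE)
mpirun -np $NCPUS -machinefile $PBS_NODEFILE migrate-n-mpi ...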


Using Job Arrays

Users with a knowledge of shell scripting (e.g., bash) may choose to take advantage of job arrays. This feature significantly reduces the load on our Torque/Maui server compared to submitting lots of individual jobs. The example below (Example 4; assume the file name is pbsjob) will only be useful as a guide.


Example 2:

The following PBS script requests 8 CPU cores, 32GB of memory, and 3 hours of walltime for running 8 MATLAB jobs in parallel.

No Format
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName2
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=3:00:00
#PBS -l select=1:ncpus=8:mem=32gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load matlab
matlab -r myjob1 &
matlab -r myjob2 &
matlab -r myjob3 &
matlab -r myjob4 &
matlab -r myjob5 &
matlab -r myjob6 &
matlab -r myjob7 &
matlab -r myjob8 &
wait    # Wait for background jobs to finish.

If the file containing the above content has a name of JobName2.pbs, you simply execute qsub JobName2.pbs to place it into the queueing system.

Example 4:

The following PBS script uses job arrays. If you aren't proficient with bash scripting, using job arrays could be painful. The example below has each sub-job requesting 1 CPU core, 1GB of memory, and 80 minutes of walltime.

No Format
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N ArrayJob
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=1:20:00
#PBS -l select=1:ncpus=1:mem=1gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh

module load matlab
matlab -r myjob$PBS_ARRAYID

Issuing the command qsub -S /bin/bash -t 1-8 pbsjob will see 8 sub-jobs run under one major identifier; to view the status of individual sub-jobs in the array, use qstat -t. The above example performs the same work as the multi-job script described in the "Do It Yourself" section below. If the file containing the above content has a name of ArrayJob.pbs and you will be running 32 sub-jobs, you simply use qsub -t 1-32 ArrayJob.pbs to place it into the queueing system.

Note: I haven't done extensive testing of job arrays.
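
As a brief sketch of the submission and monitoring commands (using the Torque-style -t syntax described above):

No Format
qsub -t 1-8 pbsjob    # submit sub-jobs 1 through 8 under a single array identifier
qstat -t              # expand job arrays so each sub-job is listed individually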

Chances are you may need more advanced features of the scripting language than what is shown above. HPRC staff will endeavour to provide assistance with job arrays, if requested.

Do It Yourself

There are several legitimate reasons for wanting to run multiple single-processor jobs in parallel within a single PBS script. For example, you may want to run 8 MATLAB jobs which require a toolbox that only has 4 licensed users. Only 1 MATLAB license is checked out if all 8 jobs are run on the same system. An example PBS script for this task is shown in Example 2 above.



Example 5:

The following script is a rework of Example 2 to use the /fast/tmp filesystem for a hypothetical workflow that is I/O intensive.  This example assumes 1 output file per job.

Note

Usage of /fast/tmp

Please make sure you first create, and place all files in, a folder that matches your JC number, e.g., jcXXXXXXXX.
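
For example (jc012345 is used as a placeholder for your own login id throughout this example):

No Format
mkdir -p /fast/tmp/jc012345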


No Format

#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName2
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=3:00:00
#PBS -l select=1:ncpus=8:mem=32gb

echo "------------------------------------------------------"
echo " This job is allocated 8 cpus on "
cat $PBS_NODEFILE
echo "------------------------------------------------------"
echo "PBS: Submitted to $PBS_QUEUE@$PBS_O_HOST"
echo "PBS: cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

echo "PBS: Job identifier is $PBS_JOBID"
echo "PBS: Job name is $PBS_JOBNAME"
echo "------------------------------------------------------"

cd $PBS_O_WORKDIR
source /etc/profile.d/modules.shmkdir -p /fast/tmp/jc012345/myjobs
cp -a myjob1.m myjob2.m myjob3.m myjob4.m myjob5.m myjob6.m myjob7.m myjob8.m /fast/tmp/jc012345/myjobs/
pushd /fast/tmp/jc012345/myjobs

module load matlab
matlab -r myjob1 &
matlab -r myjob2 &
matlab -r myjob3 &
matlab -r myjob4 &
matlab -r myjob5 &
matlab -r myjob6 &
matlab -r myjob7 &
matlab -r myjob8 &
wait    # Wait for background jobs to finish.

cp -a out1.mat out2.mat out3.mat out4.mat out5.mat out6.mat out7.mat out8.mat $PBS_O_WORKDIR/

popd
rm -rf /fast/tmp/jc012345/myjobs

To submit the job for execution on an HPRC compute node, simply enter the command:

No Format
qsub pbsjob

Note: The echo commands in the PBS script example above are informational only.

Consider the possibility that you may be running more than one workflow at any given time.  Using subdirectories is a good way of segregating workflows (at a storage layer).
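
For instance, a sketch of what that segregation might look like on /fast/tmp (directory names are illustrative):

No Format
# One subdirectory per workflow
mkdir -p /fast/tmp/jc012345/workflowA
mkdir -p /fast/tmp/jc012345/workflowB

# Alternatively, use the PBS job identifier to obtain a unique directory per job
mkdir -p /fast/tmp/jc012345/$PBS_JOBID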

Brief Explanation of PBS directives used in the examples above

Directive

Description of impact

#PBS -j oe

Merge STDOUT & STDERR streams into a single output file.

#PBS -m ae

Send an email upon job abort/exit.

#PBS -N ...

Assign a meaningful name to the job (replace ... with 1 "word" - e.g., test_job).

#PBS -M ...

Email address that PBSPro will use to provide job information (if desired).

#PBS -l walltime=HH:MM:SS

Amount of clock time that your job is likely to require.

#PBS -l select=1:ncpus=X:mem=Ygb

Request 1 chunk of "X CPU cores" & "Y GB of RAM".  The "select=1:" is not really required as it is the default.  Due to JCU cluster size, requests for more than 1 chunk should/will be rejected.
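
For instance, a job needing 4 CPU cores, 16GB of RAM, and 2 hours of walltime (values are illustrative) would include:

No Format
#PBS -l walltime=2:00:00
#PBS -l select=1:ncpus=4:mem=16gb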

Brief Details on some extra PBS/Torque directives

Directive(s)

Description of purpose

#PBS -d <PATH_TO_DIRECTORY>

Sets the working directory for your job to <PATH_TO_DIRECTORY>.

#PBS -o <OUTPUT_FILE_PATH>

Explicit specification of the file that will hold the standard output stream from your job.

#PBS -V

Export environment variables to the batch job

For full details on directives that can be used, run "man qsub" on an HPC login node or look at the online documentation for Torque.

PBS/Torque Variables

The following variables can be useful within your PBS job script.  Some are present in the examples above.

Variable

Description

PBS_JOBNAME

Job name specified by the user

PBS_O_WORKDIR

Working directory from which the job was submitted

PBS_O_HOME

Home directory of user submitting the job

PBS_O_LOGNAME

Name of user submitting the job

PBS_O_SHELL

Script shell

PBS_JOBID

Unique PBS job id

PBS_O_HOST

Host from which the job was submitted

PBS_QUEUE

Name of the job queue

PBS_NODEFILE

File containing a line-delimited list of the nodes allocated to the job (may be required for MPI jobs).

PBS_O_PATH

Path variable used to locate executables within the job script
