
Important:  Most software will only consume 1 CPU core - e.g., requesting 8 CPU cores for a PAUP job prevents other people from using the 7 unused CPU cores.  Example 1 below is likely the one most users should base their job scripts on.  If in doubt, contact HPRC staff.


For more information about PBSPro, please see the PBSPro user guide.  For a brief description of the PBS directives used in the examples below, see the "Brief Explanation of PBS directives used in examples above" section immediately following the final example PBS script.

HPRC staff should be able to assist researchers needing help with PBS scripts.


Singularity Example

The following PBS script requests 1 CPU core, 2GB of memory, and 24 hours of walltime.

{noformat}
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName1
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=24:00:00
#PBS -l select=1:ncpus=1:mem=2gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load R/4.1.2
R ...    # Replace ... with your arguments & options.
{noformat}

While defaults exist for many options, HPRC staff ask researchers to specify CPU core, memory, and walltime requirements as accurately as possible.

Users interested in protecting their job runs with checkpointing should realize that this feature comes at a cost (extra I/O operations).  Checkpoint restart of a job (using BLCR) will not work for all job types.  HPRC staff advise users to test this feature on a _typical_ job first before using it on other similar jobs.  Generally speaking, checkpointing will only be of real benefit to jobs that run for over a week.

A {{-W}} option can be used for more complicated tasks such as job dependencies, stage-in, and stage-out.  Researchers may wish to consult HPRC staff regarding use of the {{-W}} options.  A {{man qsub}} will provide more information and more options than are covered here.
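For example, a job dependency can be expressed on the command line.  The sketch below (the job script names are hypothetical) holds the second job until the first completes successfully:
{noformat}
JOB1=$(qsub prepare.pbs)                    # qsub prints the identifier of the new job
qsub -W depend=afterok:$JOB1 analysis.pbs   # run only if prepare.pbs finishes successfully
{noformat}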

Module Examples

{section}
{column:width=50%}

Example 1:

The following PBS script requests 1 CPU core, 2GB of memory, and 24 hours of walltime for running {{paup -n input.nex}}.

{noformat}
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName1
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=24:00:00
#PBS -l select=1:ncpus=1:mem=2gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load paup
paup -n input.nex
{noformat}

If the file containing the above content is named JobName1.pbs, simply execute {{qsub JobName1.pbs}} to place it into the queueing system.

Example 3:

The following PBS script requests 20 CPU cores, 60GB of memory, and 10 days of walltime for running an MPI job.

{noformat}
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName3
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=240:00:00
#PBS -l select=1:ncpus=20:mem=60gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load migrate
module load mpi/openmpi
mpirun -np 20 -machinefile $PBS_NODEFILE migrate-n-mpi ...
{noformat}

If the file containing the above content is named JobName3.pbs, simply execute {{qsub JobName3.pbs}} to place it into the queueing system.


{column}
{column:width=50%}

Example 2:

The following PBS script requests 8 CPU cores, 32GB of memory, and 3 hours of walltime for running 8 MATLAB jobs in parallel.

{noformat}
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName2
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=3:00:00
#PBS -l select=1:ncpus=8:mem=32gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load matlab
matlab -r myjob1 &
matlab -r myjob2 &
matlab -r myjob3 &
matlab -r myjob4 &
matlab -r myjob5 &
matlab -r myjob6 &
matlab -r myjob7 &
matlab -r myjob8 &
wait    # Wait for background jobs to finish.
{noformat}

If the file containing the above content is named JobName2.pbs, simply execute {{qsub JobName2.pbs}} to place it into the queueing system.  Running all 8 MATLAB jobs on the same system means only 1 MATLAB license is checked out.

Example 4:

The following PBS script uses job arrays.  If you aren't proficient with bash scripting, using job arrays could be painful.  In the example below, each sub-job requests 1 CPU core, 1GB of memory, and 80 minutes of walltime.

{noformat}
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N ArrayJob
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=1:20:00
#PBS -l select=1:ncpus=1:mem=1gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh

module load matlab
matlab -r myjob$PBS_ARRAYID
{noformat}

If the file containing the above content is named ArrayJob.pbs and you will be running 32 sub-jobs, simply use {{qsub -t 1-32 ArrayJob.pbs}} to place it into the queueing system.
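A brief sketch of submitting and then monitoring a job array (assuming the Torque-style {{-t}} option shown above):
{noformat}
qsub -t 1-32 ArrayJob.pbs   # 32 sub-jobs run under one major job identifier
qstat -t                    # list the status of each sub-job individually
{noformat}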

*Note*:  Job arrays have not been extensively tested on this system.
{column}
{section}


Example 5:

The following script is a rework of Example 2 that uses the /fast/tmp filesystem for a hypothetical I/O-intensive workflow.  This example assumes 1 output file per job.

{note:title=Usage of /fast/tmp}
Please make sure you first create, and place all files in, a folder that matches your jc number, e.g., jcXXXXXXXX.
{note}


{noformat}
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName5
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=3:00:00
#PBS -l select=1:ncpus=8:mem=32gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

mkdir -p /fast/tmp/jc012345/myjobs
cp -a myjob1.m myjob2.m myjob3.m myjob4.m myjob5.m myjob6.m myjob7.m myjob8.m /fast/tmp/jc012345/myjobs/
pushd /fast/tmp/jc012345/myjobs

module load matlab
matlab -r myjob1 &
matlab -r myjob2 &
matlab -r myjob3 &
matlab -r myjob4 &
matlab -r myjob5 &
matlab -r myjob6 &
matlab -r myjob7 &
matlab -r myjob8 &
wait    # Wait for background jobs to finish.

cp -a out1.mat out2.mat out3.mat out4.mat out5.mat out6.mat out7.mat out8.mat $PBS_O_WORKDIR/
popd
rm -rf /fast/tmp/jc012345/myjobs
{noformat}
If the file containing the above content is named JobName5.pbs, simply execute {{qsub JobName5.pbs}} to place it into the queueing system.

*Note*: The {{echo}} commands in the PBS script example above are informational only.

Consider the possibility that you may be running more than one workflow at any given time.  Using subdirectories is a good way of segregating workflows (at a storage layer).
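As a sketch of that idea (the jc number and workflow name are hypothetical), give each workflow its own subdirectory and clean it up when the job finishes:
{noformat}
SCRATCH=/fast/tmp/jc012345/workflowA
mkdir -p $SCRATCH           # one subdirectory per workflow
cp -a inputs/* $SCRATCH/
pushd $SCRATCH
# ... run the workflow here ...
popd
rm -rf $SCRATCH             # clean up when finished
{noformat}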

Brief Explanation of PBS directives used in examples above

||Directive||Description of impact||
|{{#PBS -j oe}}|Merge STDOUT & STDERR streams into a single file|
|{{#PBS -m ae}}|Send an Email upon job abort/exit|
|{{#PBS -N ...}}|Assign a meaningful name to the job (replace ... with 1 "word" - e.g., test\_job)|
|{{#PBS -M ...}}|Email address that PBSPro will use to provide job information (if desired)|
|{{#PBS -l walltime=HH:MM:SS}}|Amount of clock time that your job is likely to require|
|{{#PBS -l select=1:ncpus=X:mem=Ygb}}|Request 1 chunk of "X CPU cores" & "Y GB of RAM".  The "select=1:" is not strictly required, as it is the default.  Due to JCU cluster size, requests for more than 1 chunk should/will be rejected.|
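Resource directives can also be supplied on the {{qsub}} command line, where they override the corresponding values in the script - e.g. (illustrative values only):
{noformat}
qsub -l walltime=12:00:00 -l select=1:ncpus=4:mem=16gb JobName1.pbs
{noformat}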

Brief Details on some extra PBS/Torque directives

||Directive(s)||Description of purpose||
|{{#PBS -d <PATH_TO_DIRECTORY>}}|Sets the working directory for your job to <PATH_TO_DIRECTORY>|
|{{#PBS -o <OUTPUT_FILE_PATH>}}|Explicit specification of the file that will hold the standard output stream from your job|
|{{#PBS -V}}|Export environment variables to the batch job|

For full details on the directives that can be used, run {{man qsub}} on a HPC login node or consult the online documentation for Torque.
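For example, a preamble combining these extra directives might look like the following sketch (the paths are hypothetical):
{noformat}
#PBS -d /scratch/jc012345/run1              # working directory for the job
#PBS -o /scratch/jc012345/run1/output.log   # file to receive standard output
#PBS -V                                     # pass the submission environment to the job
{noformat}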

PBS/Torque Variables

The following variables can be useful within your PBS job script.  Some are present in the examples above.

||Variable||Description||
|{{PBS_JOBNAME}}|Job name specified by the user|
|{{PBS_O_WORKDIR}}|Working directory from which the job was submitted|
|{{PBS_O_HOME}}|Home directory of the user submitting the job|
|{{PBS_O_LOGNAME}}|Name of the user submitting the job|
|{{PBS_O_SHELL}}|Script shell|
|{{PBS_JOBID}}|Unique PBS job id|
|{{PBS_O_HOST}}|Host from which the job was submitted|
|{{PBS_QUEUE}}|Name of the job queue|
|{{PBS_NODEFILE}}|File containing a line-delimited list of nodes allocated to the job (may be required for MPI jobs)|
|{{PBS_O_PATH}}|Path variable used to locate executables within the job script|
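As a short sketch of these variables in use (on multi-CPU systems, {{PBS_NODEFILE}} contains one line per allocated CPU core):
{noformat}
echo "Job $PBS_JOBNAME ($PBS_JOBID) was submitted from $PBS_O_HOST"
cd $PBS_O_WORKDIR                  # directory from which qsub was issued
NCPU=$(wc -l < $PBS_NODEFILE)      # number of CPU cores allocated to this job
echo "Allocated $NCPU CPU core(s) on: $(sort -u $PBS_NODEFILE | tr '\n' ' ')"
{noformat}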