...

For more information about PBSPro, please see the PBSPro guide. For a brief description of the PBS directives used in the examples below, see the "Brief Explanation of PBS directives used in the examples above" section immediately following the final example PBS script.

HPC staff should be able to assist researchers needing help with PBS scripts.


Singularity Example

The following PBS script requests 1 CPU core, 2GB of memory, and 24 hours of walltime for running R from within a Singularity container.

No Format
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName1
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=24:00:00
#PBS -l select=1:ncpus=1:mem=2gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load singularity
singularity run $SING/R-4.1.1.sif R
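
The container example can be submitted to the queue in the same way as the module examples below. A minimal sketch, assuming the script above was saved under the illustrative name singularity_R.pbs:

No Format
qsub singularity_R.pbs    # submit the script (the filename is only an illustration)
qstat -u $USER            # list your queued and running jobs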

Module Examples

Example 1:

The following PBS script requests 1 CPU core, 2GB of memory, and 24 hours of walltime for running "paup -n input.nex".

No Format
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName1
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=24:00:00
#PBS -l select=1:ncpus=1:mem=2gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load paup
paup -n input.nex

If the file containing the above content is named JobName1.pbs, simply execute qsub JobName1.pbs to place it into the queueing system.
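
Because "#PBS -j oe" merges the two output streams, PBS writes a single output file named after the job once it completes. A minimal sketch of checking a submitted job and viewing that file (the numeric job ID is only an illustration):

No Format
qsub JobName1.pbs         # prints the job identifier, e.g. 123456.<server>
qstat 123456              # query the job while it is queued or running
cat JobName1.o123456      # merged STDOUT/STDERR, written when the job completes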

Example 3:

The following PBS script requests 20 CPU cores, 60GB of memory, and 10 days of walltime for running an MPI job.

No Format
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName3
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=240:00:00
#PBS -l select=1:ncpus=20:mem=60gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load migrate
module load mpi/openmpi
mpirun -np 20 -machinefile $PBS_NODEFILE migrate-n-mpi ...

If the file containing the above content is named JobName3.pbs, simply execute qsub JobName3.pbs to place it into the queueing system.
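
Rather than hard-coding the 20 in "mpirun -np 20", the rank count can usually be derived from the node file PBS generates for the job. This is only a sketch, and it assumes the chunk is requested with an explicit mpiprocs setting (e.g., select=1:ncpus=20:mpiprocs=20:mem=60gb) so that $PBS_NODEFILE contains one line per MPI rank:

No Format
module load migrate
module load mpi/openmpi
NP=$(wc -l < $PBS_NODEFILE)    # one line per MPI rank when mpiprocs is set
mpirun -np $NP -machinefile $PBS_NODEFILE migrate-n-mpi ...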



Example 2:

The following PBS script requests 8 CPU cores, 32GB of memory, and 3 hours of walltime for running 8 MATLAB jobs in parallel.

No Format
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName2
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=3:00:00
#PBS -l select=1:ncpus=8:mem=32gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"

module load matlab
matlab -r myjob1 &
matlab -r myjob2 &
matlab -r myjob3 &
matlab -r myjob4 &
matlab -r myjob5 &
matlab -r myjob6 &
matlab -r myjob7 &
matlab -r myjob8 &
wait    # Wait for background jobs to finish.

If the file containing the above content is named JobName2.pbs, simply execute qsub JobName2.pbs to place it into the queueing system.
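
The eight backgrounded MATLAB commands in Example 2 could equally be launched with a loop; this sketch shows the same background-and-wait pattern, assuming the MATLAB scripts are named myjob1 through myjob8 as in the example:

No Format
module load matlab
for i in {1..8}; do
    matlab -r myjob$i &    # start each MATLAB job in the background
done
wait                       # wait for all eight background jobs to finish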

Example 4:

The following PBS script uses a job array. If you aren't proficient with bash scripting, using job arrays can be painful. In the example below, each sub-job requests 1 CPU core, 1GB of memory, and 80 minutes of walltime.

No Format
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N ArrayJob
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=1:20:00
#PBS -l select=1:ncpus=1:mem=1gb

cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh

module load matlab
matlab -r myjob$PBS_ARRAY_INDEX

If the file containing the above content is named ArrayJob.pbs and you will be running 32 sub-jobs, simply use qsub -J 1-32 ArrayJob.pbs to place it into the queueing system (PBSPro uses the -J option and the PBS_ARRAY_INDEX variable for job arrays).

Note: I haven't done extensive testing of job arrays.
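
A common use of a job array is to have each sub-job work on a different input file. The following sketch could replace the last lines of a script like ArrayJob.pbs; the file inputs.txt is hypothetical and is assumed to list one input filename per line:

No Format
cd $PBS_O_WORKDIR

# Select the line of inputs.txt that matches this sub-job's array index.
INPUT=$(sed -n "${PBS_ARRAY_INDEX}p" inputs.txt)

module load paup
paup -n $INPUT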


...

Consider the possibility that you may be running more than one workflow at any given time. Using subdirectories is a good way of segregating workflows (at the storage layer), as in the sketch below.
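
A minimal sketch of that idea, using variables already set by PBS; the subdirectory naming is only an illustration:

No Format
cd $PBS_O_WORKDIR

# Keep each workflow's files in its own subdirectory, one per workflow and job.
WORKDIR=workflow1/$PBS_JOBID
mkdir -p $WORKDIR
cd $WORKDIR

# ... run this workflow's commands here ...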

Brief Explanation of PBS directives used in the examples above

Directive                           Description of impact
#PBS -j oe                          Merge STDOUT & STDERR streams into a single file.
#PBS -m ae                          Send an email upon job abort/exit.
#PBS -N ...                         Assign a meaningful name to the job (replace ... with one "word", e.g. test_job).
#PBS -M ...                         Email address that PBSPro will use to provide job information (if desired).
#PBS -l walltime=HH:MM:SS           Amount of wall clock time that your job is likely to require.
#PBS -l select=1:ncpus=X:mem=Ygb    Request 1 chunk of "X CPU cores" & "Y GB of RAM". The "select=1:" is not really required as it is the default. Due to JCU cluster size, requests for more than 1 chunk should/will be rejected.

Brief Details on some extra PBS/Torque directives

...