...
HPC staff should be able to assist researchers needing help with PBS scripts.
Singularity Example
The following PBS script requests 1 CPU core, 2GB of memory, and 24 hours of walltime for running R from within a Singularity container.
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName1
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=24:00:00
#PBS -l select=1:ncpus=1:mem=2gb
cd $PBS_O_WORKDIR                  # run from the directory the job was submitted from
shopt -s expand_aliases            # allow alias expansion in this non-interactive shell
source /etc/profile.d/modules.sh   # make the module command available
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"
module load singularity
singularity run $SING/R-4.1.1.sif R
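Assuming the above content were saved in a file named, say, Singularity.pbs, you would simply execute qsub Singularity.pbs to place it into the queueing system, following the same pattern as the examples below.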
Example 1: The following PBS script requests 1 CPU core, 2GB of memory, and 24 hours of walltime for running "paup -n input.nex".
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName1
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=24:00:00
#PBS -l select=1:ncpus=1:mem=2gb
cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"
module load paup
paup -n input.nex
If the file containing the above content has a name of JobName1.pbs, you simply execute qsub JobName1.pbs to place it into the queueing system.

Example 2: The following PBS script requests 8 CPU cores, 32GB of memory, and 3 hours of walltime for running 8 MATLAB jobs in parallel.

#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName2
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=3:00:00
#PBS -l select=1:ncpus=8:mem=32gb
cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"
module load matlab
matlab -r myjob1 &
matlab -r myjob2 &
matlab -r myjob3 &
matlab -r myjob4 &
matlab -r myjob5 &
matlab -r myjob6 &
matlab -r myjob7 &
matlab -r myjob8 &
wait # Wait for background jobs to finish.

If the file containing the above content has a name of JobName2.pbs, you simply execute qsub JobName2.pbs to place it into the queueing system.
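If the MATLAB runs follow a numeric naming pattern like the myjob1 to myjob8 scripts above, the eight matlab lines could equivalently be written as a short loop. This is a sketch that assumes that naming pattern:

for i in $(seq 1 8); do
    matlab -r myjob$i &   # start each MATLAB run in the background
done
wait                      # wait for all background runs to finish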
Example 3: The following PBS script requests 20 CPU cores, 60GB of memory, and 10 days of walltime for running an MPI job.

#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N JobName3
#PBS -M FIRSTNAME.LASTNAME@my.jcu.edu.au
#PBS -l walltime=240:00:00
#PBS -l select=1:ncpus=20:mem=60gb
cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
echo "Job identifier is $PBS_JOBID"
echo "Working directory is $PBS_O_WORKDIR"
module load migrate
module load mpi/openmpi
mpirun -np 20 -machinefile $PBS_NODEFILE migrate-n-mpi ...

If the file containing the above content has a name of JobName3.pbs, you simply execute qsub JobName3.pbs to place it into the queueing system.

Example 4: The following PBS script uses job arrays. If you aren't proficient with bash scripting, using job arrays could be painful. The example below has each sub-job requesting 1 CPU core, 1GB of memory, and 80 minutes of walltime.
#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N ArrayJob
#PBS -M FIRSTNAME.LASTNAME@jcu.edu.au
#PBS -l walltime=1:20:00
#PBS -l select=1:ncpus=1:mem=1gb
cd $PBS_O_WORKDIR
shopt -s expand_aliases
source /etc/profile.d/modules.sh
module load matlab
matlab -r myjob$PBS_ARRAYID
If the file containing the above content has a name of ArrayJob.pbs and you will be running 32 sub-jobs, you simply use qsub -t 1-32 ArrayJob.pbs to place it into the queueing system. Note: I haven't done extensive testing of job arrays.
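The $PBS_ARRAYID variable is not limited to MATLAB. As a minimal sketch, assuming hypothetical input files named input1.nex through input32.nex, each sub-job of a 32 sub-job array could process its own PAUP input file:

#!/bin/bash
#PBS -j oe
#PBS -m ae
#PBS -N ArrayJob
#PBS -l walltime=1:20:00
#PBS -l select=1:ncpus=1:mem=1gb
cd $PBS_O_WORKDIR
source /etc/profile.d/modules.sh
module load paup
# Sub-job N works on inputN.nex (hypothetical file names)
paup -n input${PBS_ARRAYID}.nex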
...