{composition-setup}
{deck:id=PBS}
{card:label=PBS Directives}
||Directive(s)||Description of purpose||
|{{-c n}}|No checkpointing to be performed.|
|{{-d <path>}}|Defines the working directory path to be used for the job.|
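As a quick illustration, the sketch below annotates the directives used by the example scripts on the other cards of this page. It is a sketch only, not a complete reference; see the {{qsub}} man page for the full list of options.
{noformat}
#!/bin/bash
# Checkpoint only when the execution server is shut down
#PBS -c s
# Join standard error into the standard output file
#PBS -j oe
# Send mail when the job aborts (a) or ends (e)
#PBS -m ae
# Set the job name
#PBS -N jobname
# Address to which job mail is sent
#PBS -M your.name@jcu.edu.au
# Request per-process memory and a wall clock time limit
#PBS -l pmem=5gb
#PBS -l walltime=500:00:00
# Request 1 node with 8 processor cores
#PBS -l nodes=1:ppn=8
{noformat}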
{card}
{card:label=PBS Variables}
The variables listed in the table below are _commonly_ used within a PBS script file.
||Variable||Description||
|{{PBS_JOBNAME}}|Job name specified by the user|
|{{PBS_O_WORKDIR}}|Working directory from which the job was submitted|
|{{PBS_O_HOME}}|Home directory of the user submitting the job|
|{{PBS_O_LOGNAME}}|Name of the user submitting the job|
|{{PBS_O_SHELL}}|Script shell|
|{{PBS_JOBID}}|Unique PBS job id|
|{{PBS_O_HOST}}|Host from which the job was submitted|
|{{PBS_QUEUE}}|Name of the job queue|
|{{PBS_NODEFILE}}|File containing a line-delimited list of nodes allocated to the job|
|{{PBS_O_PATH}}|Path variable used to locate executables within the job script|
*Note*: On multi-core systems, a node (line in {{PBS_NODEFILE}}) will identify the hostname and a CPU core.
{card}
{card:label=Single 1-CPU Job}
This example runs PAUP on the input file {{input.nex}} that resides in the current working directory. A file (here we'll name it {{pbsjob}}) is created with the contents:
{noformat}
#!/bin/bash
#PBS -c s
#PBS -j oe
#PBS -m ae
#PBS -N jobname
#PBS -l pmem=5gb
#PBS -l walltime=500:00:00
#PBS -M your.name@jcu.edu.au

ncpu=`wc -l $PBS_NODEFILE | awk '{print $1}'`
echo "------------------------------------------------------"
echo " This job is allocated "$ncpu" CPU cores on "
cat $PBS_NODEFILE | uniq
echo "------------------------------------------------------"
echo "PBS: Submitted to $PBS_QUEUE@$PBS_O_HOST"
echo "PBS: Working directory is $PBS_O_WORKDIR"
echo "PBS: Job identifier is $PBS_JOBID"
echo "PBS: Job name is $PBS_JOBNAME"
echo "------------------------------------------------------"

cd $PBS_O_WORKDIR
source /etc/profile.d/modules.sh
module load paup
paup -n input.nex
{noformat}

To submit the job for execution on an HPRC compute node, simply enter the command:
{noformat}
qsub pbsjob
{noformat}

If you know this job will require more than 4GB but less than 8GB of RAM, you could use the command:
{noformat}
qsub -l nodes=1:ppn=2 pbsjob
{noformat}

If you know this job will require more than 8GB but less than 16GB of RAM, you could use the command:
{noformat}
qsub -l nodes=1:ppn=8 pbsjob
{noformat}

The reason for the latter two special cases is to guarantee memory resources for your job. If memory on a node is over-allocated, swap will be used. Jobs that are actively using swap (disk) to simulate memory could take more than 1000 times longer to finish than jobs running in dedicated memory. In most cases, this means your job will never finish.
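If you are unsure how much memory a job actually uses, one way to check a running job is to inspect the {{resources_used}} fields reported by {{qstat}} (a sketch; {{<jobid>}} is a placeholder for the identifier reported by {{qsub}}):
{noformat}
qstat -f <jobid> | grep -i resources_used
{noformat}
Comparing {{resources_used.mem}} against the requested {{pmem}} value helps to decide which of the submission commands above is appropriate.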
{card}
{card:label=Multiple 1-CPU Jobs}

h3. Using Job Arrays
Users with a knowledge of shell scripting (e.g., {{bash}}) may choose to take advantage of job arrays. This feature significantly reduces the load on our Torque/Maui server compared to many individual job submissions. The example below (assume the file name is {{pbsjob}}) is intended only as a guide:
{noformat}
#!/bin/bash
#PBS -c s
#PBS -j oe
#PBS -m ae
#PBS -N jobarray
#PBS -M your.name@jcu.edu.au
#PBS -l pmem=2gb
#PBS -l walltime=9:00:00

cd $PBS_O_WORKDIR
source /etc/profile.d/modules.sh
module load matlab
matlab -r myjob$PBS_ARRAYID
{noformat}
Issuing the command
{noformat}
qsub -S /bin/bash -t 1-8 pbsjob
{noformat}

will see 8 jobs run under one major identifier; the status of individual jobs in the array can be viewed with {{qstat -t}}. The above example is _identical_ (in terms of what jobs would be executed) to the one in the "Do It Yourself" section below.
Chances are you may need more advanced features of the scripting language than what is shown above. HPRC staff will endeavour to provide assistance with job arrays, if requested.
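For example, if each array element should process a different input file, a sketch along the following lines could be used. The file {{inputs.txt}}, its one-filename-per-line layout, and the {{myjob}} function taking a filename argument are assumptions for illustration, not part of the example above.
{noformat}
#!/bin/bash
#PBS -c s
#PBS -j oe
#PBS -N jobarray
#PBS -l pmem=2gb
#PBS -l walltime=9:00:00

cd $PBS_O_WORKDIR

# inputs.txt holds one input filename per line; array element N reads line N.
INPUT=$(sed -n "${PBS_ARRAYID}p" inputs.txt)

source /etc/profile.d/modules.sh
module load matlab
matlab -r "myjob('$INPUT')"
{noformat}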
h3. Do It Yourself
There are several legitimate reasons for wanting to run multiple single-processor jobs in parallel within a single PBS script. For example, you may want to run 8 MATLAB jobs which require a toolbox that only has 4 licensed users. Only 1 MATLAB license is checked out if all 8 jobs are run on the same system. An example PBS script to do this task would look like:
{noformat}
#!/bin/bash
#PBS -c s
#PBS -j oe
#PBS -m ae
#PBS -N jobname
#PBS -M your.name@jcu.edu.au
#PBS -l walltime=1000:00:00
#PBS -l nodes=1:ppn=8
#PBS -l pmem=32gb

ncpu=`wc -l $PBS_NODEFILE | awk '{print $1}'`
echo "------------------------------------------------------"
echo " This job is allocated "$ncpu" CPU cores on "
cat $PBS_NODEFILE | uniq
echo "------------------------------------------------------"
echo "PBS: Submitted to $PBS_QUEUE@$PBS_O_HOST"
echo "PBS: Working directory is $PBS_O_WORKDIR"
echo "PBS: Job identifier is $PBS_JOBID"
echo "PBS: Job name is $PBS_JOBNAME"
echo "------------------------------------------------------"

cd $PBS_O_WORKDIR
source /etc/profile.d/modules.sh
module load matlab
matlab -r myjob1 &
matlab -r myjob2 &
matlab -r myjob3 &
matlab -r myjob4 &
matlab -r myjob5 &
matlab -r myjob6 &
matlab -r myjob7 &
matlab -r myjob8 &
wait    # Wait for background jobs to finish.
{noformat}
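If the MATLAB runs are numbered consecutively as above, the eight explicit {{matlab}} lines can be replaced by a loop (a sketch only, assuming the same {{myjobN}} naming):
{noformat}
for i in $(seq 1 8); do
    matlab -r myjob$i &    # launch each run in the background
done
wait    # Wait for all background MATLAB runs to finish.
{noformat}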
To submit the job for execution on an HPRC compute node, simply enter the command:
{noformat}
qsub pbsjob
{noformat}

*Note*: The {{echo}} commands in the PBS script example above are informational only.
{card}
{card:label=MPI/PVM/OpenMP Jobs}

This example runs the MPI-enabled version of migrate across 24 CPU cores on a single node:
{noformat}
#!/bin/bash
#PBS -V
#PBS -m abe
#PBS -N migrate
#PBS -l pmem=62GB
#PBS -l nodes=1:ppn=24
#PBS -l walltime=240:00:00
#PBS -M your.email@my.jcu.edu.au

cd $PBS_O_WORKDIR

module load openmpi
module load migrate
mpirun -np 24 -machinefile $PBS_NODEFILE migrate-n-mpi ...
{noformat}
{card}
{deck}