
    This example runs PAUP on the input file input.nex in the current working directory. Create a file (here we'll name it pbsjob) with the following contents:

    #!/bin/bash
    #PBS -c s                    # Checkpoint only if the server shuts down
    #PBS -j oe                   # Merge stderr into the stdout stream
    #PBS -m ae                   # E-mail when the job aborts or ends
    #PBS -N jobname              # Job name shown by qstat
    #PBS -M jc123456@jcu.edu.au  # Address for e-mail notifications
    #PBS -l walltime=2000:00:00  # Maximum wall-clock time (hhhh:mm:ss)
    
    echo "------------------------------------------------------"
    echo " This job is allocated 1 cpu on "
    cat $PBS_NODEFILE
    echo "------------------------------------------------------"
    echo "PBS: Submitted to $PBS_QUEUE@$PBS_O_HOST"
    echo "PBS: Working directory is $PBS_O_WORKDIR"
    echo "PBS: Job identifier is $PBS_JOBID"
    echo "PBS: Job name is $PBS_JOBNAME"
    echo "------------------------------------------------------"
     
    cd $PBS_O_WORKDIR
    source /etc/profile.d/modules.sh
    module load paup
    paup -n input.nex
    

    To submit the job for execution on an HPRC compute node, simply enter the command:

    qsub pbsjob

    If you know this job will require more than 4GB but less than 8GB of RAM, you could use the command:

    qsub -l nodes=1:ppn=2 pbsjob

    If you know this job will require more than 8GB but less than 16GB of RAM, you could use the command:

    qsub -l nodes=1:ppn=8 pbsjob

    The latter two commands exist to guarantee memory resources for your job: requesting extra cores reserves a proportionally larger share of a node's memory. If the memory on a node is overallocated, the node starts swapping, and a job that actively uses swap (disk) in place of memory can take more than 1000 times longer to finish than one running in dedicated memory. In practice, this usually means your job will never finish.
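    On Torque/PBS installations that enforce per-job memory limits, you may also be able to request memory explicitly with the standard `-l mem=` resource rather than relying on core counts alone. Whether the HPRC scheduler enforces this is an assumption on our part, so check the local documentation before depending on it. A sketch of the relevant directives:

    ```
    #PBS -l nodes=1:ppn=2        # reserve 2 cores on one node
    #PBS -l mem=8gb              # request 8 GB of memory (site support assumed)
    ```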

    There are several legitimate reasons for running multiple single-processor jobs in parallel within a single PBS script. For example, you may want to run 8 MATLAB jobs that use a toolbox with only 4 licensed seats; if all 8 jobs run on the same system, only 1 MATLAB license is checked out. An example PBS script for this task would look like:

    #!/bin/bash
    #PBS -c s
    #PBS -j oe
    #PBS -m ae
    #PBS -N jobname
    #PBS -M jc123456@jcu.edu.au
    #PBS -l walltime=999:00:00
    #PBS -l nodes=1:ppn=8        # Reserve 8 cores on one node (one per MATLAB job)
    
    echo "------------------------------------------------------"
    echo " This job is allocated 8 cpus on "
    cat $PBS_NODEFILE
    echo "------------------------------------------------------"
    echo "PBS: Submitted to $PBS_QUEUE@$PBS_O_HOST"
    echo "PBS: Working directory is $PBS_O_WORKDIR"
    echo "PBS: Job identifier is $PBS_JOBID"
    echo "PBS: Job name is $PBS_JOBNAME"
    echo "------------------------------------------------------"
    
    cd $PBS_O_WORKDIR
    source /etc/profile.d/modules.sh
    module load matlab
    matlab -nodisplay -r myjob1 > myjob1.log 2>&1 &   # -r takes the script name without the .m extension
    matlab -nodisplay -r myjob2 > myjob2.log 2>&1 &   # each job logs to its own file so output doesn't interleave
    matlab -nodisplay -r myjob3 > myjob3.log 2>&1 &
    matlab -nodisplay -r myjob4 > myjob4.log 2>&1 &
    matlab -nodisplay -r myjob5 > myjob5.log 2>&1 &
    matlab -nodisplay -r myjob6 > myjob6.log 2>&1 &
    matlab -nodisplay -r myjob7 > myjob7.log 2>&1 &
    matlab -nodisplay -r myjob8 > myjob8.log 2>&1 &
    wait    # Wait for background jobs to finish.
    

    To submit the job for execution on an HPRC compute node, simply enter the command:

    qsub pbsjob

    Note: The echo commands in the PBS script examples above are informational only.
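    The launch-in-background-then-wait pattern in the script above is plain bash, so you can try it on any machine before committing cluster time. The sketch below uses sleep as a stand-in for the MATLAB runs (the job names and output filenames are ours, not part of the HPRC setup), with each job writing to its own file so results don't interleave:

    ```shell
    #!/bin/bash
    # Launch three stand-in "jobs" in the background; each writes its own file.
    for i in 1 2 3; do
        ( sleep 1; echo "job $i finished" > "job$i.out" ) &
    done
    wait    # Blocks until every background job has exited.
    cat job1.out job2.out job3.out
    ```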
