
This page is intended as a quick introduction for new users submitting their first job to the HPRC Cluster. A few things that new users should be aware of:

  • Typically, jobs are not run in an interactive manner, except when:
    • running small one-off jobs
    • evaluating the resources required for bigger jobs
    • using graphical applications like MATLAB

  • HPRC Cluster software is not run in a window on your desktop, nor is it launched by clicking on it in a network drive (see HPRC Fileshares).
  • Users need to log in to the cluster and submit their job to the job scheduler, which will run it when resources become available.

 

Logging In

The first step in using the HPRC Cluster is to log in to the login node - zodiac.hpc.jcu.edu.au.

HPRC Desktop Software - Logging into zodiac.hpc.jcu.edu.au
The HPC interactive nodes are accessible via the server named zodiac.hpc.jcu.edu.au. See the relevant tab below for instructions on how to log in to zodiac. Zodiac is a Linux-based system. To learn more about the Linux shell, see the Software Carpentry Unix Shell tutorials.

 

  1. Download and install PuTTY.

  2. Starting PuTTY will show this window:

     



  3. Enter the hostname (zodiac.hpc.jcu.edu.au) and the port 8822

     



    The default SSH port is 22, which you can use when accessing the cluster from on campus; if you are accessing it from off campus, you must use port 8822.

  4. You will then be prompted for your username and password; use your standard JCU credentials.

  1. Open the Terminal application:


  2. In the Terminal window, run the ssh command: ssh <username>@zodiac.hpc.jcu.edu.au
    (add -p 8822 if you are connecting from outside the JCU network; an example is shown after these steps). You will then be asked for your password.



  3. You are now logged in to the HPC interactive node. 
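For example, a user connecting from outside the JCU network would run something like the following from their own machine (jc123456 is a placeholder username used here for illustration):

ssh -p 8822 jc123456@zodiac.hpc.jcu.edu.au

On campus the -p 8822 option can be dropped, since the default SSH port 22 is used.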

 

 

Software Packages

The HPRC Cluster uses environment modules to manage the available software packages. This allows multiple versions of the same software to be installed without interfering with each other. To enable the environment module system, run the following command on the command line:

-bash-4.1$ source /etc/profile.d/modules.sh

 

The software that is available on the HPRC cluster is listed here: HPRC User Software. Alternatively, you can query the software available on the cluster with the following commands:

Command                   Result
module avail              A list of available software is displayed
module help <software>    Version number and a brief synopsis are displayed for <software>

-bash-4.1$ module avail
--------------------------------------------------------------------------------------- /usr/share/Modules/modulefiles ----------------------------------------------------
MPInside/3.5.1     compiler/gcc-4.4.5 module-cvs         modules            mpich2-x86_64      null               perfcatcher
chkfeature         dot                module-info        mpi/intel-4.0      mpt/2.05           perfboost          use.own
---------------------------------------------------------------------------------------------- /etc/modulefiles -----------------------------------------------------------
compat-openmpi-x86_64 openmpi-x86_64
------------------------------------------------------------------------------------------------- /sw/modules -------------------------------------------------------------
4ti2                      blast/2.2.23              crimap_Monsanto           hdf5                      migrate/3.6(default)      picard-tools              tmap/1.1
BEDTools                  blast/2.2.29(default)     dx                        hmmer                     mira                      proj                      tmhmm
EMBOSS                    bowtie                    elph                      ima2                      modeltest                 pvm                       topali
GMT                       bwa/0.7.4(default)        enmtools                  jags                      molphy                    r8s                       towhee
Macaulay2                 caftools                  fasta                     java                      mpich2                    rainbowcrack              towhee-openmpi
Python/2.7                cap3                      fastme                    jcusmart                  mrbayes                   rpfits                    trans-abyss
R/2.15.1(default)         carthagene/1.2.2(default) ffmpeg                    jmodeltest                mrmodeltest               ruby/1.9.3                tree-puzzle
R/3.0.0                   carthagene/1.3.beta       fftw2                     lagan                     msbayes                   ruby/2.0.0                trinityrnaseq
abyss                     casacore                  fftw3                     lamarc                    ncar                      samtools                  udunits
ariadne                   cernlib                   garli                     lapack                    netcdf                    scalapack                 udunits2
arlequin                  cfitsio                   gdal                      libyaml/0.1.4             netphos                   scipy                     velvet
asap                      chlorop                   glimmer                   matlab/2008b              numpy                     seadas/6.2                wcslib
atlas                     clipper                   glpk                      matlab/2012a              oases                     seg                       wise2
bayesass                  clustalw                  gmp                       matlab/2012b              octave                    signalp                   wwatch3
beagle                    cluster                   gnu/4.1.2                 matlab/2013a(default)     openbugs                  sprng                     yasm
beast                     cns                       gnu/4.4.0                 maxent                    openjdk                   ssaha2                    zonation
beast-1.5.4               coils                     gnuplot                   maxima                    openmpi                   stacks
bfast                     colony2                   grass                     merlin                    pari                      structure
blacs                     consel                    gromacs                   migrate/3.2.15            paup                      targetp
blas                      crimap                    hdf                       migrate/3.5.1             phyml                     tclreadline/2.1.0
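Once a package has been located, it can be inspected and loaded with the standard environment-modules commands. A minimal sketch of typical usage is shown below; the module names are taken from the listing above, but the versions available to you may differ:

-bash-4.1$ module help R            # show the version number and synopsis for the default R module
-bash-4.1$ module load R/3.0.0      # load a specific version of R into your environment
-bash-4.1$ module list              # confirm which modules are currently loaded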

 

Running Jobs

To run a job on the cluster, create a shell script containing the PBS directives (the information the scheduler needs to schedule the job) followed by the commands the job should run.

Example: paup with the ML_analysis.nex sample file

For the full walkthrough, see Running a job on the HPRC Cluster.
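A minimal sketch of what such a submission script could look like is shown below. The script name, resource values, and the exact paup invocation are illustrative assumptions rather than the cluster's recommended settings; see HPRC PBS script files for the directives used on this system.

#!/bin/bash
#PBS -N ML_analysis              # name the job so it is easy to spot in the queue
#PBS -l walltime=2:00:00         # assumed walltime; adjust to suit your analysis
#PBS -l nodes=1:ppn=1            # one CPU on one node (resource syntax depends on the PBS flavour)
#PBS -m ae                       # email when the job ends or aborts

cd $PBS_O_WORKDIR                # run from the directory the job was submitted from
source /etc/profile.d/modules.sh # enable the environment module system
module load paup                 # paup appears in the module listing above
paup -n ML_analysis.nex          # run the sample file non-interactively

The script is then handed to the scheduler with qsub (the script file name here is arbitrary):

-bash-4.1$ qsub ML_analysis.sh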

Other Examples

Other worked examples are collected on the Quick Start Examples page.

Job Resources

It is important to match the resources requested by the PBS directives in your script to the actual resource usage of your job (example directives are sketched after this list). There can be consequences for incorrectly specifying these resource requirements:

  • Walltime: your job can be killed if it exceeds the specified walltime.
  • Memory: overusing memory can push the compute node's memory into swap space, slowing down all jobs on that node. In the past this has also crashed compute nodes, destroying every job that was running on them.
  • CPUs: using more CPUs than requested can slow down all jobs running on that node.
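The corresponding PBS directives would look something like the sketch below; the values are illustrative assumptions, and the exact resource-list syntax depends on the PBS version in use, so consult HPRC PBS script files before copying them:

#PBS -l walltime=12:00:00        # the job is killed if it runs longer than 12 hours
#PBS -l mem=4gb                  # request 4 GB of memory and stay within it
#PBS -l nodes=1:ppn=2            # request 2 CPUs on one node and use no more than that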

 

Further Reading

  1. HPRC Cluster Explained
  2. HPRC Cluster Job Management Explained
  3. HPRC PBS script files