THIS PAGE CONTAINS OLD INFORMATION - HPC staff will work on updating it soon.
This page is intended as a quick introduction for new users submitting their first job to the HPRC Cluster.
...
...
The first step in using the HPRC Cluster is to log in to the login node. The typical workflow is that the user:
- Logs into zodiac.hpc.jcu.edu.au
...
Logging into the Cluster
...
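A minimal sketch of the login step, assuming access is via ssh — "yourUsername" is a placeholder, not an actual account format from this page:

```shell
# Connect to the login node from a terminal on your own computer
# (on Windows, an ssh client such as PuTTY can be used instead).
# Replace "yourUsername" with your own JCU login.
ssh yourUsername@zodiac.hpc.jcu.edu.au
```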
Software Packages
The HPRC Cluster uses environment modules to manage the available software packages. This allows multiple versions of the same software to be installed without interfering with each other. To enable the environment module system, the following command needs to be executed on the command line:
-bash-4.1$ source /etc/profile.d/modules.sh
The software that is available on the HPRC cluster is listed here: HPRC User Software. Alternatively, you can query the software available on the cluster with the following commands:
Command | Result
---|---
module avail | A list of available software is displayed
module help <software> | Version number and a brief synopsis are displayed for <software>
Example "module avail" run on Thu Mar 6 11:19:21 EST 2014:
-bash-4.1$ module avail
--------------------------------------------------------------------------------------- /usr/share/Modules/modulefiles ----------------------------------------------------------------------------------------
MPInside/3.5.1 compiler/gcc-4.4.5 module-cvs modules mpich2-x86_64 null perfcatcher
chkfeature dot module-info mpi/intel-4.0 mpt/2.05 perfboost use.own
---------------------------------------------------------------------------------------------- /etc/modulefiles -----------------------------------------------------------------------------------------------
compat-openmpi-x86_64 openmpi-x86_64
------------------------------------------------------------------------------------------------- /sw/modules -------------------------------------------------------------------------------------------------
4ti2 blast/2.2.23 crimap_Monsanto hdf5 migrate/3.6(default) picard-tools tmap/1.1
BEDTools blast/2.2.29(default) dx hmmer mira proj tmhmm
EMBOSS bowtie elph ima2 modeltest pvm topali
GMT bwa/0.7.4(default) enmtools jags molphy r8s towhee
Macaulay2 caftools fasta java mpich2 rainbowcrack towhee-openmpi
Python/2.7 cap3 fastme jcusmart mrbayes rpfits trans-abyss
R/2.15.1(default) carthagene/1.2.2(default) ffmpeg jmodeltest mrmodeltest ruby/1.9.3 tree-puzzle
R/3.0.0 carthagene/1.3.beta fftw2 lagan msbayes ruby/2.0.0 trinityrnaseq
abyss casacore fftw3 lamarc ncar samtools udunits
ariadne cernlib garli lapack netcdf scalapack udunits2
arlequin cfitsio gdal libyaml/0.1.4 netphos scipy velvet
asap chlorop glimmer matlab/2008b numpy seadas/6.2 wcslib
atlas clipper glpk matlab/2012a oases seg wise2
bayesass clustalw gmp matlab/2012b octave signalp wwatch3
beagle cluster gnu/4.1.2 matlab/2013a(default) openbugs sprng yasm
beast cns gnu/4.4.0 maxent openjdk ssaha2 zonation
beast-1.5.4 coils gnuplot maxima openmpi stacks
bfast colony2 grass merlin pari structure
blacs consel gromacs migrate/3.2.15 paup targetp
blas crimap hdf migrate/3.5.1 phyml tclreadline/2.1.0
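Once a package has been found in the listing, it can be made available in the current shell with "module load". A minimal sketch — R/3.0.0 is taken from the listing above, and any listed name/version can be substituted (the "|| true" guard only matters when trying this off the cluster):

```shell
# Enable the module system (a no-op if already sourced in this shell).
source /etc/profile.d/modules.sh 2>/dev/null || true
# Make R 3.0.0 available in the current shell session.
module load R/3.0.0
# Confirm which modules are currently loaded.
module list
```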
After logging in, the typical workflow continues; the user:
- Prepares a submission script for their jobs
- Submits their jobs to the Job Scheduler
- Monitors their jobs
- Collects the output of their jobs.
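The "prepare and submit" steps above can be sketched as follows. This is a hypothetical example: the "#PBS" directives assume a PBS-style job scheduler, and the job name, resource requests, and R script name are illustrative assumptions, not taken from this page — check the scheduler documentation on the cluster for the actual syntax:

```shell
# Write a minimal submission script to a file.
cat > myjob.pbs <<'EOF'
#!/bin/bash
#PBS -N myfirstjob           # job name (assumption)
#PBS -l walltime=01:00:00    # requested run time (assumption)
#PBS -l nodes=1:ppn=1        # one core on one node (assumption)
cd "$PBS_O_WORKDIR"          # run from the directory the job was submitted from
source /etc/profile.d/modules.sh
module load R/3.0.0          # R/3.0.0 appears in the "module avail" listing
R --vanilla < myscript.R > myscript.out
EOF
```

The script would then be submitted with "qsub myjob.pbs" and monitored with "qstat" (again assuming a PBS-style scheduler).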
A few things that new users should be aware of:
- Typically, jobs are not run in an interactive manner, except when:
- users are running small one-off jobs
- evaluating the resources required for bigger jobs
- using graphical applications like MATLAB
- Examples of interactive jobs can be found under pages labelled "interactive-cluster-job".
- HPRC Cluster software is not run in a window on the user's desktop, nor is it launched by clicking on it in a network drive.
- Users need to log in to the cluster and inform the job scheduler about their job; the scheduler will run it when resources become available.
Quick Start