The following points are related to the installation of software for use on the HPC cluster:

  1. Generally, software is installed/upgraded upon request only.
  2. HPC staff will install the latest stable version of any software you request to have installed or upgraded.
  3. HPC staff will attempt to install all software under environment modules control.  Exceptions will only occur when new HPC images are deployed (rarely).
  4. Software will not be installed or upgraded if there is a risk of system/service failure, as assessed by HPC and/or eResearch staff. 

An up-to-date list of the scientific software installed on HPC cluster nodes can be obtained by logging onto zodiac.hpc.jcu.edu.au (using an SSH client) and running the following command:

module avail

The list of installed software will be quite long.  The environment for a given software package can usually be set up using one of the commands:

module load <software>
module load <software>/<version>

where <software> is replaced by the module name and, if required, <version> is replaced by the specific version desired.
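
For example, to use the python/3.5.1 interpreter listed in the catalogue below (substitute whichever package and version you need):

module avail python          # narrow the listing to python modules
module load python/3.5.1     # set up the environment for that version
python3 --version            # confirm the interpreter is now on your PATH
module list                  # show which modules are currently loaded
module unload python/3.5.1   # remove the module when you are finished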

Operating Systems

JCU HPC cluster nodes are built on the RedHat Enterprise Linux (RHEL 6.x) operating system.  There are two main reasons for this choice:

  1. Hardware maintenance agreements that JCU pays for require use of a commercially supported operating system.
  2. JCU ICT has signed up to a RedHat CAUDIT agreement for licensing of RHEL systems and seeks to maximize its return on investment.

JCU HPC also runs a small VMware ESXi cluster (2 servers) that can be used to deliver small Windows systems to satisfy eResearch requirements that cannot be solved on Linux (e.g., web services, databases, or Windows compute).  Researchers wanting to run other flavours of Linux (e.g., Ubuntu) should consider taking advantage of NeCTAR resources.

HPC Cluster Software Catalogue

Yellow shaded cells indicate that the software version is only available through use of environment modules.

Shells

Software    Version
bash        4.1.2
dash        0.5.5.1
ksh         20120801
tcsh        6.17

Compilers

Software    Versions
GNU         4.4.7, 4.9.3, 5.3.0, 6.1.0

Compilers / Interpreters

Software    Versions
java        sun-1.6.0.25, oracle-jdk-7u67, oracle-jdk-8u20
matlab      2008b, 2013a, 2014a, 2014b, 2015a, 2016a
perl        5.10.1
python      2.6.6, 2.7, 2.7.3, 3.5.1
R           2.15.1, 3.0.0, 3.2.2
ruby        1.8.7.374
tcl         8.5.7
tk          8.5.7

Further detail on MATLAB toolboxes and R addons/plugins can be found toward the bottom of this page.

Libraries (only)

Software    Versions
atlas       3.8.4
blas        3.2.1, 3.6.0
boost       1.41.0, 1.61.0
gmp*        4.3.1, 6.1.0
lapack      3.6.0
mpc*        1.0.3
mpfr*       2.4.1, 3.1.4

* gmp, mpc, & mpfr libraries may be built into GNU compilers (not necessarily the above versions though).

 

Scientific Software

Software           Version
4ti2               1.3.2
abyss              1.3.2
allpathslg         44837
ariadne            1.3
arlequin           3.5
asap               4.0.0
bayesass           1.3
beagle             1.1.0
beast              1.6.1
BEDTools           2.15.0
bamtools           31
blast              2.2.29, 2.3.0
targetp
tmap
tmhmm
topali
towhee
trans-abyss
transrate          1.0.2
trinityrnaseq
trinotate          3.0.1
udunits            1.12.11
udunits2           2.1.19
velvet             1.2.10
w2rap-contigger
wcslib             4.13.4
wise2              2.2.0
wwatch3
xfig               3.2.5
yasm               1.2.0
zlib               1.2.8
zonation           3.1.9, 4.0.0

 

Software Catalogue

Please note that there is an almost endless list of scientific software that could be installed on HPC systems.  Unless a request is received, HPC staff do not try to guess which software (including version) you need or want to use.  While you may be able to install software yourself, you should generally avoid doing so - it is an unsustainable practice in terms of power, cost, and time (from a whole-of-JCU view).  Additionally, software installed by HPC staff resides on a different filesystem to users' home directories, which improves performance at times of high IO load on the filesystem(s) containing home directories.  Extra information about software highlighted by a light green background colour is supplied at the end of this page.

Access Command                   Version

B
module load bfast                0.6.5a
module load blacs                1.1
module load blas                 3.2.1
module load blast                2.2.29
module load blcr                 0.8.5
module load bowtie               1.0.0
module load bowtie2              2.2.4
module load bwa                  0.7.4

C
module load caftools             2.0.2
module load cap3
module load carthagene           1.2.2
module load casacore             1.4.0
module load cd-hit               4.6.1
module load cernlib              2006
module load cfitsio              3.030
module load chlorop              1.1
module load clipper              2.1
module load clustalw             2.0.12
module load cluster              1.49
module load cns                  1.3
module load coils                2.2
module load colony2
module load consel               0.1k
module load crimap               2.504a
module load crimap_Monsanto
module load cufflinks            2.2.1

D
module load dx                   4.4.4

E
module load elph                 1.0.1
module load EMBOSS               5.0.0
module load enmtools             1.3
module load express

F
module load fasta
module load fastme
module load fastStructure
module load ffmpeg
module load fftw
module load fftw2
module load fftw3

G
module load garli
module load gdal
module load glimmer
module load glpk
module load GMT
module load gnuplot
module load gpp4
module load grass
module load gromacs
module load gsl

H
module load hdf                  4.2.5
module load hdf5                 1.8.5
module load hmmer

I
module load ima2

J
module load jmodeltest

L
module load lagan
module load lamarc
module load lapack
module load lis

M
module load Macaulay2
module load matlab
module load maxent
module load maxima
module load migrate
module load mira
module load molphy
module load mpich2
module load mrbayes
module load mrmodeltest
module load msbayes

N
module load ncl
module load netcdf
module load netpbm
module load netphos
module load numpy

O
module load oases
module load octave
module load ogdi
module load openmpi

P
module load pari
module load paup
module load proj
module load pvm

R
module load R
module load r8s
module load rsem
module load rpfits

S
module load scalapack
module load scipy
module load seadas
module load seg
module load signalp
module load sprng
module load ssaha2
module load structure
module load suitesparse

 

 

BLCR: https://upc-bugs.lbl.gov/blcr/doc/html/BLCR_Users_Guide.html

RSEM: https://github.com/bli25wisc/rsem

 

     

    MATLAB Components/Toolboxes

    Component                  License allows         Toolbox                  License allows
    MATLAB                     50 user connections    Control System           50 user connections
    MATLAB Coder               50 user connections    Optimization             50 user connections
    Simulink                   50 user connections    Signal Processing        50 user connections
    Simulink Coder             50 user connections    Symbolic Math            50 user connections
    Simulink Control Design    50 user connections    System Identification    50 user connections
                                                      Mapping                  5 user connections
                                                      Neural Network           5 user connections
                                                      Statistics               5 user connections
                                                      Distributed Computing    4 user connections
                                                      Fuzzy Logic              4 user connections
                                                      Global Optimization      4 user connections
                                                      Image Processing         4 user connections
                                                      MATLAB Compiler          4 user connections
                                                      Wavelet                  4 user connections
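
    To see which toolboxes a given MATLAB module can use, a quick non-interactive check along the following lines should work (matlab/2016a is one of the versions listed in the catalogue above; substitute the version you use):

    module load matlab/2016a
    matlab -nodisplay -nosplash -r "ver; exit"    # "ver" prints the MATLAB and toolbox versions visible to this installation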

     

    R Packages

    A: abind, acepack, actuar, ade4, ade4TkGUI, adehabitat, AER, akima, alr3, anchors, ape
    B: base, bdsmatrix, biglm, BIOMOD, Biobase, bitops, boot, BufferedMatrix
    C: car, caTools, chron, CircStats, class, clim.pact, cluster, coda, codetools, coin, colorspace, compiler, CompQuadForm, coxme, cubature
    D: DAAG, datasets, DBI, degreenet, deldir, Design, digest, diptest, DynDoc, dynlm
    E: e1071, Ecdat, effects, ellipse, ergm, evaluate, expm
    F: fBasics, fCalendar, fEcofin, fields, flexmix, foreach, foreign, Formula, fSeries, fts, fUtilities
    G: gam, gbm, gclus, gdata, gee, geoR, geoRglm, ggplot2, gpclib, gplots, graphics, grDevices, grid, gtools
    H: hdf5, hergm, hexbin, Hmisc, HSAUR
    I: igraph, ineq, inline, ipred, iquantitator, ISwR, iterators, itertools, its
    K: kernlab, KernSmooth, kinship
    L: latentnet, lattice, leaps, limma, lme4, lmtest, locfit, logspline
    M: mapproj, maps, maptools, mAr, marray, MASS, Matrix, matrixcalc, MatrixModels, maxLik, mboost, mclust, MCMCpack, mda, MEMSS, methods, mgcv, mice, misc3d, miscTools, mitools, mix, mlbench, mlmRev, mlogit, modeltools, moments, MPV, msm, multcomp, multicore, mutatr, mvtnorm
    N: ncdf, network, networksis, nlme, nnet, nor1mix, np, numDeriv, nws
    O: oz
    P: parallel, party, PBSmapping, permute, pixmap, plm, plyr, png, prabclus, proto, pscl
    Q: qtl, quadprog, quantreg
    R: RandomFields, randomForest, RANN, RArcInfo, raster, rbenchmark, rcolony, RColorBrewer, Rcpp, RcppArmadillo, ReadImages, relevent, reshape, rgdal, rgenoud, rgeos, rgl, Rglpk, rjags, rlecuyer, rmeta, robustbase, ROCR, RODBC, rpanel, rpart, RSQLite, RUnit
    S: sampleSelection, sandwich, scatterplot3d, SDMTools, sem, sfmisc, sgeostat, shapefiles, shapes, slam, sm, sna, snow, snowFT, sp, spam, SparseM, spatial, SpatialTools, spatstat, spdep, splancs, splines, statmod, statnet, stats, stats4, stringr, strucchange, subselect, survey, survival, systemfit
    T: tcltk, tcltk2, TeachingDemos, testthat, timeDate, timeSeries, tis, tkrplot, tools, tree, tripack, truncreg, trust, TSA, tseries, tweedie
    U: urca, utils
    V: vcd, vegan, VGAM, VIM
    W: waveslim, wavethresh, widgetTools
    X: XML, xtable, xts
    Z: Zelig, zoeppritz, zoo
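
    To check whether one of these packages is available under a particular R module, a quick test such as the following should work (R/3.2.2 and the vegan package are taken from the lists above; substitute the module and package you need):

    module load R/3.2.2
    Rscript -e 'library(vegan); sessionInfo()'    # exits with an error if the package is not installed for this R version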

     

    Linux Shells

    The following Linux shells are available on HPC systems (bash is the default):

    bash, csh, dash, ksh, tcsh, zsh

    Compression Utilities

    The following archiving/compression applications are available on HPC systems:

    7za, bzip2, gzip, pbzip2, tar, unzip, xz, zip

    Note that the versions of zip and unzip installed on HPC have an upper size limit of 2GB.  Most active HPC users consume significantly more than 2GB of disk space.  If you need assistance with using tar, please contact HPC staff.
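
    Because of the 2GB limit on zip/unzip, tar combined with gzip is generally the safer choice for archiving large data.  A minimal sketch, assuming a directory named my_results (a placeholder name):

    tar -czf my_results.tar.gz my_results/    # create a gzip-compressed archive of a directory
    tar -tzf my_results.tar.gz                # list the contents of the archive without extracting it
    tar -xzf my_results.tar.gz                # extract the archive into the current directory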

     

      1. Environment module files are located in /sw/modules
      2. Most scientific software is installed in /sw/<software>/<version>/ (see the "module show" example after this list)
      3. Components used by programmers (e.g., stand-alone libraries) are generally installed in /sw/common/
         Frequently required libraries are additionally installed onto local system disks (using yum) when they are available in repositories.
      4. Live upgrades of software should only be performed when no login/compute node is using the software.  Compute nodes should be reimaged rather than upgraded live.  The login node may need to be upgraded live, because jobs are almost always running on it.
      5. Some packages (e.g., BLCR) are rebuilt from source RPMs.  BLCR, in particular, needs to be recompiled for each new kernel installed.
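
      The installation prefix and environment changes made by a particular module can be inspected with "module show".  For example, for the blast/2.3.0 module listed in the catalogue above (the exact output differs between modules):

      module show blast/2.3.0    # displays the modulefile, including the /sw/... paths it adds to your environment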

      The following table provides information about operating systems used on physical servers managed (in some way) by HPC staff.

      Operating System                     Primary service(s) provided    Typically accessed from
      RedHat Enterprise Linux 6.x          HPC login nodes                Desktop or Laptop computers
                                           HPC compute nodes              HPC login nodes
      SUSE Linux Enterprise Server 11.x    CIFS fileshares                Desktop or Laptop computers
                                           NFS fileshares                 HPC login and compute nodes
      Windows 2012 Server                  CIFS fileshares                Desktop or Laptop computers (Cairns)

      Vendors usually require that an enterprise O/S be installed on physical servers covered by a maintenance agreement.

      JCU researchers wishing to host servers/storage in a datacentre must contact ITR management before any purchase is considered.

      All virtual machines (VMs) offered by HPC are, by default, provided as Infrastructure as a Service (IaaS).  VM owners are responsible for daily maintenance operations on their VMs.  For security reasons, all systems are registered for application of automatic updates.  HPC staff will apply patches/updates (from RedHat and EPEL repositories only) to VMs where the automatic update process fails.
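
      As a rough guide for owners of RHEL-based VMs, routine patching follows the standard yum workflow sketched below (the updates on offer depend on how the VM is registered and which repositories are enabled):

      yum check-update    # list packages with updates available
      sudo yum update     # download and apply updates (RedHat and EPEL repositories)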

      Internal (conducted by ITR) and external security audits take place on all publicly visible systems at times determined by ITR management.  VM owners are responsible for fixing any security concerns identified in these audits.  HPC staff may be consulted or be asked to provide assistance with such fixes.

       

