
The following points are related to the installation of software for use on the HPC cluster:

  1. Generally, software is installed/upgraded upon request only.
  2. HPC staff will install the latest stable version of the software you have requested to be installed or upgraded.
  3. HPC staff will attempt to install all software under environment modules control.  Exceptions will only occur when new HPC images are deployed (rarely).
  4. Software will not be installed or upgraded if there is a risk of system/service failure, as assessed by HPC and/or eResearch staff. 

An up-to-date list of the scientific software (and versions) installed on HPC cluster nodes can be obtained by logging onto an HPC login node (using an SSH client) and running the following commands:

module avail

    Displays a list of available software.

module help <software>

    Displays the version number and a brief synopsis for <software>.

The environment for a given software package is generally set up using the command:

module load <software>

where <software> is replaced by the module name.
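
For example, a typical session on a login node might look something like the following sketch (beast is used purely as an example module name from the catalogue below):

# list everything available under environment modules control
module avail

# show the version number and synopsis for one package
module help beast

# set up the environment for that package, then confirm what is loaded
module load beast
module list

# remove the package from your environment when finished with it
module unload beast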

Operating Systems

All HPC cluster nodes run a RedHat Enterprise Linux 6.x (RHEL 6) operating system.  HPC provides virtual resources for requirements that cannot (or should not) be run on HPC cluster nodes - e.g., web services, databases, or Windows compute.  All HPC servers are under vendor maintenance contracts and are required to run enterprise operating systems - VMware ESXi, RHEL, or Microsoft Windows Server, when installed by JCU staff.

The software catalogue below will focus on scientific software that is, generally, compiled from source with a level of optimisation slightly greater than default.

Software Catalogue

Please note that there is an almost endless list of scientific software that could be installed on HPC systems.  Unless a request is received, HPC staff do not try to guess what software (including which version) you need or want to use.  While you may be able to install software yourself, you should generally avoid doing this - it is an unsustainable practice in terms of power, cost, and time (from a whole-of-JCU view).  Additionally, software installed by HPC staff resides on a different filesystem to users' home directories, improving performance at times of high IO load on the filesystem(s) containing home directories.  Extra information about selected software (e.g., MATLAB and R) is supplied at the end of this page.

All packages in the catalogue are accessed with the command module load <name>; version numbers are shown in parentheses where known.

4: 4ti2 (1.3.2)
A: abyss (1.3.2), allpathslg (44837), ariadne (1.3), arlequin (3.5), asap (4.0.0), atlas (3.8.4)
B: bayesass (1.3), beagle (1.1.0), beast (1.6.1), BEDTools (2.15.0), bfast (0.6.5a), blacs (1.1), blas (3.2.1), blast (2.2.29), blcr (0.8.5), bowtie (1.0.0), bowtie2 (2.2.4), bwa (0.7.4)
C: caftools (2.0.2), cap3, carthagene (1.2.2), casacore (1.4.0), cd-hit (4.6.1), cernlib (2006), cfitsio (3.030), chlorop (1.1), clipper (2.1), clustalw (2.0.12), cluster (1.49), cns (1.3), coils (2.2), colony2, consel (0.1k), crimap (2.504a), crimap_Monsanto, cufflinks (2.2.1)
D: dx (4.4.4)
E: elph (1.0.1), EMBOSS (5.0.0), enmtools (1.3), express
F: fasta, fastme, fastStructure, ffmpeg, fftw, fftw2, fftw3
G: garli, gdal, glimmer, glpk, GMT, gnuplot, gpp4, grass, gromacs, gsl
H: hdf (4.2.5), hdf5 (1.8.5), hmmer
I: ima2
J: jmodeltest
L: lagan, lamarc, lapack, lis
M: Macaulay2, matlab, maxent, maxima, migrate, mira, molphy, mpich2, mrbayes, mrmodeltest, msbayes
N: ncl, netcdf, netpbm, netphos, numpy
O: oases, octave, ogdi, openmpi
P: pari, paup, proj, pvm
R: R, r8s, rsem, rpfits
S: scalapack, scipy, seadas, seg, signalp, sprng, ssaha2, structure, suitesparse
T: targetp, tmap, tmhmm, topali, towhee, trans-abyss, trinityrnaseq
U: udunits (1.12.11), udunits2 (2.1.19)
V: velvet
W: wcslib (4.13.4), wise2 (2.2.0), wwatch3
X: xfig (3.2.5)
Y: yasm
Z: zlib (1.2.8), zonation (3.1.9, 4.0.0)
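
If you are looking for a particular package, a quick way to search the catalogue from a login node is sketched below.  Note that module avail writes its output to stderr on most environment modules installations, hence the redirection; the name/version form in the last line (zlib/1.2.8) is an assumption based on the /sw/<software>/<version>/ layout described at the end of this page and may differ from the actual module file naming.

# search the catalogue for anything matching a name fragment
module avail 2>&1 | grep -i bowtie

# load the default version of a package
module load bowtie2

# a specific version may be selectable with a name/version module path
module load zlib/1.2.8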



    MATLAB Components/Toolboxes

    The MATLAB license allows the following numbers of concurrent user connections for each component/toolbox:

    50 user connections: Control System, MATLAB Coder, Signal Processing, Simulink Coder, Symbolic Math, Simulink Control Design, System Identification
    5 user connections: Neural Network
    4 user connections: Distributed Computing, Fuzzy Logic, Global Optimization, Image Processing, MATLAB Compiler
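
    As a rough check of which toolboxes are actually visible to MATLAB on the cluster (a sketch only, assuming the matlab module places the matlab command on your PATH):

    module load matlab

    # print MATLAB and toolbox version information without starting the
    # desktop, then exit
    matlab -nodisplay -nosplash -r "ver; exit"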


    R Packages

    A: abind, acepack, actuar, ade4, ade4TkGUI, adehabitat, AER, akima, alr3, anchors, ape
    B: base, bdsmatrix, biglm, BIOMOD, Biobase, bitops, boot, BufferedMatrix
    C: car, caTools, chron, CircStats, class, clim.pact, cluster, coda, codetools, coin, colorspace, compiler, CompQuadForm, coxme, cubature
    D: DAAG, datasets, DBI, degreenet, deldir, Design, digest, diptest, DynDoc, dynlm
    E: e1071, Ecdat, effects, ellipse, ergm, evaluate, expm
    F: fBasics, fCalendar, fEcofin, fields, flexmix, foreach, foreign, Formula, fSeries, fts, fUtilities
    G: gam, gbm, gclus, gdata, gee, geoR, geoRglm, ggplot2, gpclib, gplots, graphics, grDevices, grid, gtools
    H: hdf5, hergm, hexbin, Hmisc, HSAUR
    I: igraph, ineq, inline, ipred, iquantitator, ISwR, iterators, itertools, its
    K: kernlab, KernSmooth, kinship
    L: latentnet, lattice, leaps, limma, lme4, lmtest, locfit, logspline
    M: mapproj, maps, maptools, mAr, marray, MASS, Matrix, matrixcalc, MatrixModels, maxLik, mboost, mclust, MCMCpack, mda, MEMSS, methods, mgcv, mice, misc3d, miscTools, mitools, mix, mlbench, mlmRev, mlogit, modeltools, moments, MPV, msm, multcomp, multicore, mutatr, mvtnorm
    N: ncdf, network, networksis, nlme, nnet, nor1mix, np, numDeriv, nws
    O: oz
    P: parallel, party, PBSmapping, permute, pixmap, plm, plyr, png, prabclus, proto, pscl
    Q: qtl, quadprog, quantreg
    R: RandomFields, randomForest, RANN, RArcInfo, raster, rbenchmark, rcolony, RColorBrewer, Rcpp, RcppArmadillo, ReadImages, relevent, reshape, rgdal, rgenoud, rgeos, rgl, Rglpk, rjags, rlecuyer, rmeta, robustbase, ROCR, RODBC, rpanel, rpart, RSQLite, RUnit
    S: sampleSelection, sandwich, scatterplot3d, SDMTools, sem, sfmisc, sgeostat, shapefiles, shapes, slam, sm, sna, snow, snowFT, sp, spam, SparseM, spatial, SpatialTools, spatstat, spdep, splancs, splines, statmod, statnet, stats, stats4, stringr, strucchange, subselect, survey, survival, systemfit
    T: tcltk, tcltk2, TeachingDemos, testthat, timeDate, timeSeries, tis, tkrplot, tools, tree, tripack, truncreg, trust, TSA, tseries, tweedie
    U: urca, utils
    V: vcd, vegan, VGAM, VIM
    W: waveslim, wavethresh, widgetTools
    X: XML, xtable, xts
    Z: Zelig, zoeppritz, zoo
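
    To check whether a particular package is present in the installed R (a minimal sketch, assuming the R module provides Rscript; vegan is used only as an example):

    module load R

    # print the names of all installed R packages, filtered for the one of interest
    Rscript -e 'rownames(installed.packages())' | grep -i vegan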


    Linux Shells

    The following Linux shells are available on HPC systems (bash is the default):
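
    If you are unsure which shell you are currently using, or which shells are installed on the node you are logged into, the following standard commands will tell you:

    # show your current login shell
    echo $SHELL

    # list the shells installed on this node
    cat /etc/shells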


    Compression Utilities

    The following archiving/compression applications are available on HPC systems:


    Note that the versions of zip and unzip installed on HPC have an upper size limit of 2GB.  Most active HPC users consume significantly more than 2GB of disk space.  If you need assistance with using tar, please contact HPC staff.
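
    For archives larger than the 2GB zip limit, tar with gzip compression is the usual alternative; a minimal sketch follows (myresults/ and myresults.tar.gz are placeholder names):

    # create a compressed archive of a directory
    tar -czf myresults.tar.gz myresults/

    # list the contents of an archive without extracting it
    tar -tzf myresults.tar.gz

    # extract the archive into the current directory
    tar -xzf myresults.tar.gz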


      1. Environment module files are located in /sw/modules
      2. Most scientific software is installed in /sw/<software>/<version>/ (see the sketch after this list)
      3. Components used by programmers (e.g., stand-alone libraries) are generally installed in /sw/common/
         Installation of frequently required libraries is additionally done onto local system disks (using yum), if the libraries are available in repositories.
      4. Live upgrades of software should only be performed when no login/compute node is using the software.  Compute nodes should be reimaged rather than have live upgrades performed.  The login node may need to be live upgraded, since jobs are almost always running on this system.
      5. Some packages (e.g., BLCR) are rebuilt from source RPMs.  BLCR, in particular, needs to be recompiled for each new kernel installed.
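
      For example, to see which paths under /sw a given module adds to your environment (a sketch only; bowtie2 is just an illustrative package name):

      # display the environment changes a module makes, including the
      # /sw/<software>/<version>/ paths it prepends to PATH
      module show bowtie2

      # the installation directory can then be inspected directly
      ls /sw/bowtie2/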

      The following table provides information about operating systems used on physical servers managed (in some way) by HPC staff.

      Operating System                    Primary service(s) provided   Typically accessed from
      RedHat Enterprise Linux 6.x         HPC login nodes               Desktop or Laptop computers
                                          HPC compute nodes             HPC login nodes
      SUSE Linux Enterprise Server 11.x   CIFS fileshares               Desktop or Laptop computers
                                          NFS fileshares                HPC login and compute nodes
      Windows 2012 Server                 CIFS fileshares               Desktop or Laptop computers (Cairns)

      Vendors usually require an enterprise O/S be installed on physical servers if you have purchased maintenance.

      JCU researchers wishing to host servers/storage in a datacentre must contact ITR management before purchasing is even considered.

      All virtual machines (VMs) offered by HPC are, by default, provided as Infrastructure as a Service (IaaS).  VM owners are responsible for daily maintenance operations on their VMs.  For security reasons, all systems are registered for application of automatic updates.  HPC staff will apply patches/updates (from RedHat and EPEL repositories only) to VMs where the automatic update process fails.

      Internal (conducted by ITR) and external security audits take place on all publicly visible systems at times determined by ITR management.  VM owners are responsible for fixing any security concerns identified in these audits.  HPC staff may be consulted or be asked to provide assistance with such fixes.
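
      On a RHEL-based VM, this kind of routine patching comes down to standard yum usage; a minimal sketch (run as root, or via sudo), assuming only the RedHat and EPEL repositories are enabled:

      # list installed packages that have updates available
      yum check-update

      # apply all available updates
      yum -y update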
