The following points are related to the installation of software for use on the HPC cluster:
- Generally, software is installed/upgraded upon request only.
- HPC staff will install the latest stable version of any software you have requested to be installed or upgraded.
- HPC staff will attempt to install all software under environment modules control. Exceptions will only occur when new HPC images are deployed (rarely).
- Software will not be installed or upgraded if there is a risk of system/service failure, as assessed by HPC and/or eResearch staff.
An up-to-date list of the scientific software installed on HPC cluster nodes can be obtained by logging onto zodiac.hpc.jcu.edu.au (using an SSH client) and running the following command:
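On a cluster using environment modules, the listing command is normally module avail; a minimal sketch, assuming the usual module tool is in use:

    # List every package installed under environment modules control
    module avail

    # The listing is long; many module implementations print to stderr,
    # so redirect it before piping through a pager
    module avail 2>&1 | less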
The list of installed software will be quite long. The environment for a given software package can usually be set up using one of the following commands:
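With the usual environment modules syntax (the exact form on this cluster may differ slightly), these commands are typically:

    module load <software>
    module load <software>/<version>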
Here, <software> is replaced by the module name and, if required, <version> is replaced by the specific version desired.
JCU HPC cluster nodes are built on the RedHat Enterprise Linux (RHEL 6.x) operating system. There are two main reasons for this choice:
- Hardware maintenance agreements that JCU pays for require the use of a commercially supported operating system.
- JCU ICT has signed up to a RedHat CAUDIT agreement for licensing of RHEL systems and seeks to maximize the return on that investment.
JCU HPC also runs a small VMware ESXi cluster (2 servers) that can be used to deliver small Windows systems to satisfy eResearch requirements that cannot be solved on Linux (e.g., web services, databases, or Windows compute). Researchers wanting to run other flavours of Linux (e.g., Ubuntu) should consider taking advantage of NeCTAR resources.
HPC Cluster Software Catalogue
Yellow shaded cells indicate that the software version is only available through use of environment modules.
Compilers / Interpreters
Further detail on MATLAB toolboxes and R addons/plugins can be found toward the bottom of this page.
* gmp, mpc, & mpfr libraries may be built into GNU compilers (not necessarily the above versions though).
Please note that there is an almost endless list of scientific software that could be installed on HPC systems. Unless a request is received, HPC staff do not try to guess what software (including version) you need or want to use. While you may be able to install software yourself, you should generally avoid doing this - it is an unsustainable practice in terms of power, cost, and time (from a whole-of-JCU view). Additionally, software installed by HPC staff resides on a different filesystem to the one holding users' home directories - improving performance at times of high IO load on the filesystem(s) containing home directories. Extra information about software highlighted by a light green background colour is supplied at the end of this page.
abind, acepack, actuar, ade4, ade4TkGUI, adehabitat, AER, akima, alr3, anchors, ape
base, bdsmatrix, biglm, BIOMOD, Biobase, bitops, boot, BufferedMatrix
car, caTools, chron, CircStats, class, clim.pact, cluster, coda, codetools, coin, colorspace, compiler, CompQuadForm, coxme, cubature
DAAG, datasets, DBI, degreenet, deldir, Design, digest, diptest, DynDoc, dynlm
e1071, Ecdat, effects, ellipse, ergm, evaluate, expm
fBasics, fCalendar, fEcofin, fields, flexmix, foreach, foreign, Formula, fSeries, fts, fUtilities
gam, gbm, gclus, gdata, gee, geoR, geoRglm, ggplot2, gpclib, gplots, graphics, grDevices, grid, gtools
hdf5, hergm, hexbin, Hmisc, HSAUR
igraph, ineq, inline, ipred, iquantitator, ISwR, iterators, itertools, its
kernlab, KernSmooth, kinship
latentnet, lattice, leaps, limma, lme4, lmtest, locfit, logspline
mapproj, maps, maptools, mAr, marray, MASS, Matrix, matrixcalc, MatrixModels, maxLik, mboost, mclust, MCMCpack, mda, MEMSS, methods, mgcv, mice, misc3d, miscTools, mitools, mix, mlbench, mlmRev, mlogit, modeltools, moments, MPV, msm, multcomp, multicore, mutatr, mvtnorm
ncdf, network, networksis, nlme, nnet, nor1mix, np, numDeriv, nws
parallel, party, PBSmapping, permute, pixmap, plm, plyr, png, prabclus, proto, pscl
qtl, quadprog, quantreg
RandomFields, randomForest, RANN, RArcInfo, raster, rbenchmark, rcolony, RColorBrewer, Rcpp, RcppArmadillo, ReadImages, relevent, reshape, rgdal, rgenoud, rgeos, rgl, Rglpk, rjags, rlecuyer, rmeta, robustbase, ROCR, RODBC, rpanel, rpart, RSQLite, RUnit
sampleSelection, sandwich, scatterplot3d, SDMTools, sem, sfmisc, sgeostat, shapefiles, shapes, slam, sm, sna, snow, snowFT, sp, spam, SparseM, spatial, SpatialTools, spatstat, spdep, splancs, splines, statmod, statnet, stats, stats4, stringr, strucchange, subselect, survey, survival, systemfit
tcltk, tcltk2, TeachingDemos, testthat, timeDate, timeSeries, tis, tkrplot, tools, tree, tripack, truncreg, trust, TSA, tseries, tweedie
vcd, vegan, VGAM, VIM
waveslim, wavethresh, widgetTools
XML, xtable, xts
Zelig, zoeppritz, zoo
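To confirm that a particular add-on package from the list above is usable in the installed R, a quick check from the login node looks something like this (the module name R is an assumption; use whatever name the module listing shows):

    # Load the R environment (module name assumed), then load one package and report its version
    module load R
    Rscript -e 'library(vegan); print(packageVersion("vegan"))'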
The following Linux shells are available on HPC systems (bash is the default):
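To see which shell you are currently using, or which shells are registered on a node, the usual checks are (generic Linux commands, not specific to this cluster):

    echo $SHELL        # the login shell recorded for your account
    cat /etc/shells    # shells registered on the system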
The following archiving/compression applications are available on HPC systems:
Note that the versions of unzip installed on HPC have an upper size limit of 2GB, and most active HPC users consume significantly more than 2GB of disk space, so tar is generally the better choice for archiving. If you need assistance with using tar, please contact HPC staff.
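For reference, typical tar invocations look like this (file and directory names are only examples):

    # Create a gzip-compressed archive of a directory
    tar -czf results.tar.gz results/

    # List the contents of an archive without extracting it
    tar -tzf results.tar.gz

    # Extract an archive into the current directory
    tar -xzf results.tar.gz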
- Environment module files are located in
- Most scientific software is installed in
- Components used by programmers (e.g., stand-alone libraries) are generally installed in /sw/common/
- Installation of frequently required libraries is additionally done onto local system disks (using yum), if the libraries are available in repositories (see the example after this list).
- Live upgrades of software should only be performed when no login/compute node is using the software. Compute nodes should be reimaged rather than upgraded live. The login node may need to be upgraded live, as jobs are always running on this system.
- Some packages (e.g., BLCR) are rebuilt from source RPMs. BLCR, in particular, needs to be recompiled for each new kernel installed.
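As an illustration of the yum-based approach mentioned above (the package name below is hypothetical), an administrator would typically check for and install a library like this:

    # Is the library already installed on the local system disk?
    yum list installed libexample-devel

    # If not, is it available in the configured repositories?
    yum search libexample

    # Install it locally (requires root)
    yum install libexample-devel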
The following table provides information about operating systems used on physical servers managed (in some way) by HPC staff.
|Operating System||Primary service(s) provided||Typically accessed from|
|RedHat Enterprise Linux 6.x||HPC login nodes; HPC compute nodes||Desktop or Laptop computers; HPC login nodes|
|SUSE Linux Enterprise Server 11.x|| ||Desktop or Laptop computers; HPC login and compute nodes|
|Windows 2012 Server||CIFS fileshares||Desktop or Laptop computers (Cairns)|
Vendors usually require an enterprise O/S be installed on physical servers if you have purchased maintenance.
JCU researchers wishing to host servers/storage in a datacentre must contact ITR management before purchasing is even considered.
All virtual machines (VMs) offered by HPC are, by default, provided as Infrastructure as a Service (IaaS). VM owners are responsible for daily maintenance operations on their VMs. For security reasons, all systems are registered for application of automatic updates. HPC staff will apply patches/updates (from RedHat and EPEL repositories only) to VMs where the automatic update process fails.
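The update mechanism is not spelled out here; on RHEL 6 a common choice is yum-cron, which a VM owner could enable roughly as follows (a sketch, not a statement of how HPC staff actually configure it):

    # Install the nightly update job (available from the standard repositories)
    yum install yum-cron

    # Start it now and have it start at boot (RHEL 6 service management)
    service yum-cron start
    chkconfig yum-cron on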
Internal (conducted by ITR) and external security audits take place on all publicly visible systems at times determined by ITR management. VM owners are responsible for fixing any security concerns identified in these audits. HPC staff may be consulted or be asked to provide assistance with such fixes.