
Server Infrastructure

Login nodes
  • CPUs: 2x Intel Xeon Gold 6248 (40 cores @ 2.3GHz)
  • Memory: 384GiB ECC
  • Local storage: ~480GiB (RAID-1)
  • Network: 2x25Gb, 2x10Gb

Compute (CPU) nodes
  • CPUs: 2x Intel Xeon Gold 6148/6248 (40 cores @ 2.3GHz)
  • Memory: 384GiB ECC
  • Local storage: ~480GiB (RAID-1)
  • Network: 2x25Gb

Accelerator (GPU) nodes
  • CPUs: 2x Intel Xeon Gold 5118 (24 cores @ 2.3GHz)
  • GPUs: 2x NVIDIA V100 (16GiB)
  • Memory: 192GiB ECC
  • Local storage: ~1.3TiB
  • Network: 2x10Gb, 2x1Gb

Network Switch Infrastructure

Researchers do not have direct access to the devices mentioned below.

Hardware mgmt. network
  • Switch: Juniper EX4300
  • Ports: 4x10Gb, 48x1Gb

Storage (NFS) network
  • Switch: Juniper EX4650
  • Ports: 8x100Gb, 48x25Gb

Public network
  • Switch: Juniper QFX5100
  • Ports: 6x40Gb, 48x10Gb

Storage Infrastructure

Researchers do not have direct access to the systems/devices mentioned below.

Storage arrays

DELL SC4020
  • Provisioned capacity: 600TiB
  • Front-end connectivity: 8x8Gb/s FC
  • Filesystem(s): N/A (block storage)

DDN SFA7990E
  • Provisioned capacity: 516TiB
  • Front-end connectivity: 4x100Gb/s Ethernet
  • Filesystem(s): /gpfs01

File servers

DELL R640
  • Service(s): NFS
  • Filesystems: /home, /scratch, /sw

DELL R640
  • Service(s): AFM & NFS
  • Filesystems: /gpfs01
  • Extra information: IBM Spectrum Scale server license purchased

AFM (Active File Management) is a GPFS feature that JCU uses to serve a local cache of research storage allocations held at UQ/QCIF (Medici project). The Medici project was initiated to integrate AFM and DMF (Data Migration Facility, a Hierarchical Storage Management system) to minimise the operating costs of housing 20PiB+ of research data.
