The University of Tasmania is gearing up to buy a new high-performance compute cluster it hopes will be delivered later this year.
The university, which has more than 29,000 students and campuses in Hobart, Launceston and Burnie, currently hosts Australia’s fifth-largest research data storage infrastructure and Nectar research node by traffic.
Its computing demand is largely driven by the university’s climate and ocean science research, including its Southern Ocean marine observation work, as well as astronomy and health studies.
To keep up with increasingly data-hungry work, the university is looking to purchase an additional general purpose HPC cluster, with a budget of up to $3 million to spend on the new capacity.
The deployment has been allocated $2 million this year, with an additional $1 million in 2017 that is still subject to budgetary approval.
There is no significant increase in node count planned beyond 2017.
The HPC system will be housed at a new purpose-built research data centre facility currently under construction at the university’s Sandy Bay Campus in Hobart, with completion due in September.
The installation will be a bare-room data centre deployment, with the vendor to supply all racks, cables and associated accessories. Power connections, cooling, network termination and cable trays will already be in place or provided by the university.
Along with the HPC hardware, the tender asks for persistent storage and a range of professional services such as system implementation, testing, user migration, training and project services.
The university is looking for new single-threaded and simultaneously multithreaded CPU cores but no GPU nodes, with a minimum of 16GB of memory in each node to provide a local pagepool cache for GPFS (General Parallel File System).
On the software front, the new HPC cluster will need to be compatible with standard Linux distributions, and support a range of applications used by researchers.
The university has operated a number of HPC systems over the past 25 years, and currently runs around 2000 CPU cores across several HPC clusters, including two SGI ICE 8200 systems.
These existing HPC systems are connected to a common InfiniBand-connected network file system that uses SGI’s DMF hierarchical storage management software, while two DDN GRIDScaler storage systems run a common single-namespace GPFS clustered file system.
The University of Tasmania project is the latest in a string of higher education supercomputer deployments, with the University of WA, USQ, Adelaide Uni, Monash Uni and QRIScloud all launching new systems this year.