The Pawsey Supercomputing Centre will offer graphics processing unit (GPU) compute nodes for researchers on its Nimbus cloud and is calling for trial users.
A total of twelve NVIDIA Tesla V100 GPUs with 16 gigabytes of memory each will be installed in six HPE Apollo SX40 server nodes that are to be added to the Nimbus cloud at Pawsey.
Nimbus is an OpenStack Ocata deployment that provides Ubuntu virtual machines to researchers.
Pawsey said the GPUs will be used to accelerate artificial intelligence, high-performance computing and graphics jobs by giving researchers access to VMs with "more computational power" behind them.
Compared to general-purpose central processing units (CPUs), the massively parallel GPUs with 5120 CUDA and 640 Tensor cores per card offer a significant performance boost for AI and similar workloads.
A single GPU offers the performance of up to 100 CPUs, Pawsey said.
The NVIDIA GPUs offer between 14 and 15.7 tera floating point operations per second (TFLOPS) of single-precision performance; for double-precision performance the figure is 7 to 7.8 TFLOPS per card. The Tensor cores, which can accelerate Google's TensorFlow machine learning library, deliver between 112 and 125 TFLOPS per Tesla V100 GPU.
Commercial cloud providers such as Amazon, Google and IBM already offer GPU-powered computing capacity.
Microsoft is also offering GPUs for its Azure cloud, and has started trialling AI-specific field programmable gate array (FPGA) accelerators as well.
Pawsey is currently installing the HPE nodes with two GPUs in each into Nimbus.
Nimbus currently has a total of 3000 AMD Opteron CPU cores and 288 terabytes of storage.
Researchers will be able to use the GPU nodes for free; Pawsey has asked those who need access to apply under an early adopter program, estimated to run until August this year.