The University of Adelaide has launched its new high-performance computing system, dubbed Phoenix, in a bid to meet growing researcher demand for computational power.

Phoenix is a Lenovo System x NeXtScale HPC system with 3,840 cores and 15,360 GB of memory, built around Nvidia Tesla K80 GPUs and a Mellanox EDR InfiniBand interconnect, and it benchmarks at 300 TFlops.
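For a rough sense of scale, those headline figures work out to about 4 GB of memory per core. A minimal back-of-the-envelope sketch in Python, using only the numbers quoted above (the per-node layout and GPU count are not stated, so the benchmark-per-core ratio simply folds the GPU contribution into the total):

# Ratios derived only from the figures quoted in the article.
total_cores = 3840
total_memory_gb = 15360
benchmark_tflops = 300

print(f"Memory per core: {total_memory_gb / total_cores:.1f} GB")                 # ~4.0 GB
print(f"Benchmark per core: {benchmark_tflops * 1000 / total_cores:.1f} GFlops")  # ~78.1 GFlops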
Those specifications would have placed the supercomputer at number 226 on the most recent TOP500 supercomputer list, from November 2015. However, with new HPC systems coming online at other universities, Phoenix is likely to land at a lower ranking when the next list is compiled in July.
The new system is already being used by physicists crunching data from CERN, computer scientists looking at machine learning for autonomous vehicles, applied mathematicians doing probability analysis for predictive healthcare, and mechanical engineers working on wave modelling for clean energy.
In the near future, it is likely to be used by researchers in a range of additional fields, including chemistry, bio-informatics and agricultural economics.
Perhaps the most impressive aspect of the deployment, however, was that Phoenix began handling its first research workloads on February 11, just six weeks after the university took delivery of the system.
A number of factors contributed to the speed of the deployment, including strong vendor relations, existing data centre space, partnerships with other universities, a willing workforce, and support from across the university.
Improved access to computing power
Before the decision to implement a new HPC system was made in the fourth quarter of 2015, the university had largely relied on external facilities such as eResearch SA, which is a partnership with Flinders University and the University of South Australia.
University of Adelaide chief information officer Mark Gregory told iTnews there was a strong desire by researchers in a range of fields to improve access to high-performance computing.
“Some researchers already rely on high-performance computing and had access to external infrastructure for their work. But the next level down – who perhaps don’t have funding and access to high-performance computing – were really keen,” Gregory said.
The researcher demand meant a number of departments and research centres were willing to contribute their own funds to the project.
“The really cool thing in a university context is that most of the funds were contributed from [internal university] customers that use it – 12 departments self-generated two-thirds of the funds for this project," Gregory said.
“Our deputy vice-chancellor and vice-president of research, Mike Brooks, was very enthusiastic about this project. He was very aware of the research demand, and was keen to see it happen because of the enthusiastic reaction of researchers.”
After deciding to go ahead with the project, the university looked at other academic supercomputers, including MASSIVE at Monash University in Melbourne's south-east, the Pawsey Supercomputing Centre in Western Australia, and NCI at the Australian National University in Canberra.
Gregory credits the other teams – in particular MASSIVE – for sharing their open source software during implementation.
The university then took what Gregory describes as an “unusual approach” in getting the best price for the system.
“Because we wanted to get the maximum bang for our buck, we asked vendors: if we had $x, how many teraflops could we get,” he said.
“We ended up with four bids that were viable, and after further negotiations Lenovo ended up giving us an excellent price for it.”
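As an illustration only, the comparison Gregory describes boils down to ranking fixed-budget bids by the compute they deliver. The Python sketch below uses entirely hypothetical vendor names, budget and teraflop figures, not the actual bids:

# Sketch of the "if we had $x, how many teraflops could we get" comparison.
# The budget and all bid figures are hypothetical placeholders.
budget_dollars = 1_000_000  # the fixed "$x" in the tender, value assumed for illustration

bids_tflops = {"Vendor A": 180, "Vendor B": 220, "Vendor C": 260, "Vendor D": 300}

# With the budget held constant, ranking by raw teraflops is equivalent to
# ranking by teraflops per dollar.
for vendor, tflops in sorted(bids_tflops.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{vendor}: {tflops} TFlops ({tflops / budget_dollars * 1_000_000:.0f} TFlops per $1m)")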
How they did it in just six weeks
Two important factors in getting a supercomputer operational in a short timeframe, according to Gregory, are having a strong relationship with a vendor and an enthusiastic workforce.
“When you buy a system like this, you want to get it operational ASAP, and [Lenovo] was able to deliver it by the end of the year, and just six weeks later it was handling research workloads,” Gregory said.
“A lot of people in our server and network team are young guys who were really excited about the project, and were willing to work around the clock to get it working as soon as possible. They were genuinely fascinated by the technology, and the implementation was done in a very agile way.”
Another factor that expedited the project was the fact that no additional building works were needed for the initial deployment.
“We had an older developer centre we converted to use on this project, although it’s now up at its limit for cooling and electrical. So we were fortunate in that we had a location to put it, although the puzzle will be what we do in the future,” Gregory said.
“The system is very scalable, and technology is always evolving rapidly, so we’re currently working out a forward strategy for future upgrades."