
“The underlying workload placed on the data centre is going to increase power consumption over time,” said Mingay.
“Don’t agree to a long-term total reduction in power in the data centre. You can agree to energy efficiency measures and short-term reductions in the absolute consumption of power, but be very wary of introducing long-term reduction schemes unless you are moving to a renewable energy source.”
Mingay also appeared to recommend against embarking on large data centre capacity builds, a stance at odds with some of the larger projects in Canberra and Brisbane.
“Building more than you need right now is a bad idea,” he said.
“We strongly recommend breaking any build into modules or pods and building it out in five-year time horizons. You can leave the balance of the data centre as a slab or with minimal fit-out.”
A pod-based environment could also allow efficiencies from running different workloads in different areas of the data centre, depending on their requirements.
“You can look at pods to address specific needs like having one pod fitted with chilled water cooling to house really dense racks and other separate pods that are just air-cooled,” said Mingay.
“In any new data centre build or upgrade, we’d recommend putting the plumbing for liquid cooling into at least part of the centre, since retrofitting it is much more expensive.”
A pod configuration could also allow you to run different parts of the centre at different temperatures, said Mingay.
Mingay encouraged IT and data centre managers to look at some simple measures to create efficiencies in the data centre. These include calculating the power usage effectiveness (PUE) of the facility; blocking breaches in the floor tiles that are leaking air (and therefore wasting energy); using plastic blanking panels on racks; removing cables and other obstructions from the underfloor; and exploring hot/cold aisle containment.
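PUE itself is simply the ratio of total facility power to the power drawn by the IT equipment, so it can be worked out from two meter readings. A minimal sketch of the calculation, with purely illustrative figures rather than anything cited at the event:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A PUE of 2.0 means that for every watt reaching servers, storage and
    network gear, another watt goes on cooling, power distribution and other
    overheads. Lower is better; 1.0 is the theoretical floor.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Illustrative only: a 400 kW IT load in a facility drawing 850 kW overall.
print(pue(850, 400))  # 2.125
```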
“Cold aisle containment can be as crude as using sheets of plastic over the racks for a DIY job or you can go for better looking solutions from the vendors,” said Mingay.
“It can be retrofitted, it’s cheap and the payback is around two to four months, but it can affect the data centre’s aesthetics, particularly if you retrofit it.”
“Hot air containment does mean you need more sophisticated equipment. The vendors push it more because they can flog more stuff,” said Mingay.
However, he said that simply blocking holes in the floor tiles - for example, areas where cables come in or out - could result in a 10 percent energy saving per year.
A straw poll of the room revealed only three of the 40-odd attendees knew their centre's PUE measure.
Average data centre PUE figures today are said to hover between 2 and 2.2, while world-class facilities sit between 1.2 and 1.35.
“If you’re building or refurbishing a data centre you should target a 1.5 PUE,” advised Mingay.
“Our view is that in ten years, having a PUE of two or more will make you very uncompetitive. Not only that, if you ever wanted to get rid of that data centre or onsell it, nobody would want to buy it.”
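To put those ratios in energy terms: for the same IT load, everything above a PUE of 1.0 is overhead running around the clock. A rough sketch, again using an assumed load rather than any figure from the presentation:

```python
HOURS_PER_YEAR = 8760

def annual_overhead_mwh(it_load_kw: float, pue: float) -> float:
    """Energy spent each year on cooling, power distribution and other non-IT overheads."""
    overhead_kw = it_load_kw * (pue - 1.0)
    return overhead_kw * HOURS_PER_YEAR / 1000  # kWh -> MWh

# Assumed 1 MW IT load: at a PUE of 2.0 the overhead is ~8,760 MWh a year,
# against ~4,380 MWh at Mingay's recommended target of 1.5.
print(round(annual_overhead_mwh(1000, 2.0)))  # 8760
print(round(annual_overhead_mwh(1000, 1.5)))  # 4380
```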