Choosing a data centre partner is a long-term decision, and the initial stages – mapping requirements, making a selection, due diligence and contract negotiations – form a complex project of considerable scale.
We have done our utmost to make this a concise, two-part iTnews guide that gives you, at the very least, a checklist of the matters any CIO should consider when approaching such an investment.
In Part 1, we explore the basics – the ‘build versus buy’ argument, the types of agreements you might consider, and the essential considerations that make one facility more suitable than another.
In Part 2, we’ll take a closer look at managing risk when making the purchasing decision. It’s here you may discover details you would otherwise not consider until late in contract negotiations.
We anticipate some of the issues raised could save you considerable effort if addressed before a facility is selected.
Build versus buy
Why should you consider outsourcing the facilities component of your IT?
The trend towards consolidation of workloads onto fewer and fewer physical servers – owing to virtualisation of the x86 platform – is having a marked impact on the kinds of servers and storage being deployed for new applications. The density of workloads means every rack weighs more and every piece of equipment runs hotter than ever before.
This trend has reached the point that operating this equipment in anything but a purpose-built data centre will soon no longer be an option. It will be especially challenging for organisations running small server rooms attached to office space.
Purpose-built facilities offer a more stable and scalable data centre solution. Customers often gain greater headroom for growth in terms of floor space and power. They also benefit from economies of scale – with shared access to redundant power, cooling and connectivity with a wider range of carriers.
For these reasons, the co-location business model is booming and space is at a premium in several Australian facilities. Only the very largest and most risk-averse organisations continue to build their own data centres.
Selecting a facility
There are six basic factors to consider when selecting space within a facility: the scale of your deployment, the location of the facility, power, price, network connectivity and the weight of your equipment. We’ll work through each in turn below, before closing with a look at what data centre ‘tiers’ mean for availability.
How much compute power and storage is your organisation looking to place in a facility? You’ll need to take into account current requirements but also consider the resources you might require into the future.
Contracts tend to be long-term and you’ll be looking to avoid repeating the negotiating effort any more often than you have to.
The next logical question is whether your organisation intends to retain the skill sets required to manage that IT infrastructure. If the answer is no, customers might choose a managed service. There are a multitude of local options available – some involve having a third party manage the server, storage and network infrastructure you own, others involve leasing this kit as part of the monthly bill.
It’s worth noting that smaller organisations are choosing to avoid owning IT infrastructure altogether and are subscribing to a combination of online, “cloud” services – which carry their own set of risks. For the purposes of this exercise, we’re assuming you intend to deploy your own infrastructure. Your next choice comes down to scale.
Typically, a customer with only a few servers buys rack space from a co-lo service provider that resells capacity from within a larger, third party independent data centre.
Customers with a larger number of servers (a few racks’ worth) might choose to sign an agreement directly with the data centre owner/operator – usually on a per-kilowatt or per-rack basis.
Once a customer has more than 15-20 racks of kit, or is using more than 100kW of power, they usually have the negotiating muscle to seek a lease agreement to guarantee access to floor space or power. A caged-off dedicated space might also be sought on occasion for security reasons.
Beyond a certain scale, some of Australia’s largest organisations have found that third party data centres do not have sufficient floor space, weight and power available for their large IT infrastructure requirements.
These organisations tend to instead partner with data centre companies as anchor tenants of new builds. Consider NAB’s anchor tenancy with Digital Realty in Melbourne, the NSW Government’s deal with Metronode in Sydney and the Illawarra, Westpac and BankWest’s deal with Fujitsu in Sydney and Perth.
Location is an important consideration for many reasons.
Organisations running their own infrastructure in a third party facility may still wish for their technical staff to access the data centre. Locating the data centre in a position close to existing operations is thus desirable.
Further, delivery of certain bandwidth-hungry desktop applications from the data centre is ultimately limited by the speed of light. Although network link speeds are constantly improving, latency cannot be engineered away, so desktop applications tend to perform best when hosted relatively close to the users being served.
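As a back-of-envelope illustration of why distance matters – the fibre speed and city distances below are rough approximations, and real routes add routing and switching delays on top:

```python
# Theoretical minimum round-trip time between users and a data centre.
# Assumes light travels at roughly 200,000 km/s in optical fibre
# (about two-thirds of its speed in a vacuum); distances are illustrative.

FIBRE_SPEED_KM_PER_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip in milliseconds over a direct fibre path."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1000

for route, km in [("Sydney-Melbourne", 900), ("Sydney-Perth", 3300)]:
    print(f"{route}: at least {min_rtt_ms(km):.0f} ms round trip")
```

Physics alone puts a Sydney user at least 33 milliseconds away from a Perth-hosted application – before any real-world network overhead is counted.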
Location is also an issue when one considers business continuity and disaster recovery plans. It makes sense to avoid data centre locations subject to flooding or seismic activity.
One might also wish to mitigate the risk of a city- or state-wide power outage or other localised emergency. That might require a secondary data centre located at some distance from the primary facility – but for real-time failover of critical applications, it also can’t be too far away: again, the speed of light sets some limitations.
Finally, location might also be an issue when one considers the climate of the area – cooler climates provide opportunities for modern data centres to make significant savings on power costs, savings which are often passed on to the customer.
Specifically, ‘free air cooling’ takes advantage of the cooler temperature outside, drawing that air into the facility to cool the IT equipment and saving the data centre operator from having to run its chillers.
A free-air cooled data centre would need to use its chillers far less often in Tasmania, for example, than in Brisbane, because the ambient air temperature in Tasmania is more often cooler than the temperature inside the data centre.
Opting for direct free-air cooling also requires you to assess the quality of the air in the area. This is especially the case in areas close to chemical plants.
Some organisations are reluctant to expose IT equipment to airborne contaminants for fear of equipment failure. Thus many data centre operators run incoming air through a series of filters before it enters the server room, or use a form of ‘indirect’ free air cooling in which the outside air cools the server room air via a heat exchanger without ever entering the room.
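A minimal sketch of the mode decision a free-air cooling control system might make – the setpoint, margin and air-quality flag here are entirely hypothetical, not drawn from any vendor’s system:

```python
# Simplified free-air cooling decision logic. All thresholds hypothetical.

SUPPLY_AIR_SETPOINT_C = 24.0   # target temperature of air supplied to racks
FREE_COOLING_MARGIN_C = 4.0    # outside air must be at least this much cooler

def cooling_mode(outside_temp_c: float, air_quality_ok: bool) -> str:
    if not air_quality_ok:
        # Contaminated outside air: keep it out, rely on indirect
        # free cooling (heat exchanger) or mechanical cooling instead
        return "indirect free cooling or chillers"
    if outside_temp_c <= SUPPLY_AIR_SETPOINT_C - FREE_COOLING_MARGIN_C:
        return "direct free-air cooling (chillers off)"
    return "mechanical cooling (chillers on)"

print(cooling_mode(15.0, air_quality_ok=True))   # a cool Hobart morning
print(cooling_mode(31.0, air_quality_ok=True))   # a warm Brisbane afternoon
```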
Power and pricing
As mentioned earlier, the density of today’s IT infrastructure is proving problematic for many data centres. There often isn’t enough power density available for the infrastructure coming to market.
Buyers should discuss with their vendor partners what power requirements the next generation of servers and storage are likely to demand before committing to long-term data centre plans.
Large, independent third-party facilities tend to have access to high voltage connections via multiple sub-stations. But that in itself doesn’t guarantee your share of the power. Customers should seek contractual commitments from data centre providers for a minimum level of redundant power and cooling for each rack.
The rising cost of power is driving customers to seek data centre providers that use energy efficiently. Power efficiency can be achieved via the use of alternative sources of power or simply smarter use of it.
Some modern data centres are building gas turbines on-site, using the fuel to generate electricity for the IT load and harnessing the waste heat to drive the chillers – an arrangement known as trigeneration.
At the very least, there are ways to reduce the amount of power required to cool computing equipment.
Many data centres are being constructed with modular fit-outs that concentrate cool air where it is most required and isolate exhaust air (hot and cold aisle containment). Others are – as previously discussed – drawing filtered outside air into the facility to cool computing equipment (free air cooling).
The ultimate measure of power efficiency with regard to cooling is PUE (power usage effectiveness), calculated by dividing the total power used by the facility by the power used specifically to serve the IT load.
Legacy data centres are often rated above 2; the most modern are edging closer to 1.2 or 1.3.
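Expressed as a formula – the 3MW and 2MW figures below are purely illustrative:

```latex
\mathrm{PUE} = \frac{P_{\mathrm{total\ facility}}}{P_{\mathrm{IT\ load}}}
\qquad \text{e.g.} \qquad \frac{3.0\,\mathrm{MW}}{2.0\,\mathrm{MW}} = 1.5
```

At a PUE of 1.5, in other words, every watt delivered to a server costs another half a watt in cooling, power distribution and other overheads.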
There are, however, many differences in where or how a data centre operator might measure PUE. It can be difficult to measure accurate PUE, for example, when the same power and cooling infrastructure is shared between a data centre and office space within the same building.
Further, although PUE is a good comparison tool, it is only relevant if the two facilities being compared are like-for-like.
One consideration is varying levels of availability, and what the customer requires in terms of extra redundant infrastructure to guard against failure. Higher levels of redundancy require a commitment to operating additional infrastructure that in turn draws power and needs to be cooled.
“There is no point comparing the PUE on a Tier One data centre to a Tier Four,” advises data centre consultant Mike Andrea, a director at Strategic Directions. “And there is no point having a PUE of 1.0 if services are down three times a year.”
Such distortions, Andrea notes, can render many PUE ratings irrelevant. Customers may wish to have a data centre’s advertised PUE claim subjected to independent audit.
Power is equally essential when measuring the true price of a data centre service.
Today, Andrea notes, even with cost pressures being applied to IT and facilities groups, the “CIO doesn’t tend to be responsible for the power bill”.
Many CIOs simply don’t know how much power their infrastructure draws and – understandably – wish for the situation to remain that way.
When a CIO purchases space in a dedicated third-party data centre, however, power costs do tend to be factored into the quoted price. So, on paper, the outsourced provider often appears more expensive than running the services in-house.
Andrea argues that if a CIO does include power in his or her internal costs, and the relative PUEs of both facilities are also taken into account, the dedicated facility should come out as the cheaper option.
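A rough sketch of that argument with invented numbers – the IT load, tariff and PUE figures below are hypothetical, and real quotes bundle in much more than power:

```python
# The same IT load is cheaper to power in a facility with a lower PUE.
# Load, tariff and PUE figures are hypothetical.

IT_LOAD_KW = 200          # constant IT load
TARIFF_AUD_PER_KWH = 0.20
HOURS_PER_YEAR = 8760

def annual_power_cost(pue: float) -> float:
    """Total facility power cost attributable to this IT load, per year."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * TARIFF_AUD_PER_KWH

in_house = annual_power_cost(pue=2.0)   # a typical legacy server room
colo = annual_power_cost(pue=1.3)       # a modern purpose-built facility

print(f"In-house: A${in_house:,.0f}  Colo: A${colo:,.0f}  "
      f"Difference: A${in_house - colo:,.0f} per year")
```

On these assumptions the same 200kW of IT load costs roughly A$245,000 less per year to power in the more efficient facility.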
Power costs will become an even more prominent issue in the coming months due to the introduction of the carbon tax.
Most data centre providers iTnews spoke to for this guide have noted that these increased costs will be passed on to customers – as most already charge on the basis of power consumption.
A select few said they were undecided, but none were confident they could afford to absorb the cost themselves.
It is worth noting that it isn’t necessarily great news for a customer if a data centre provider were to absorb this cost. As we’ll discover in Part 2 of this guide, the profitability of your data centre provider should be a key concern when choosing a facility.
What perhaps is more important is that you know what your data centre provider is paying for power. It would be wise to seek some transparency to ensure you aren’t paying a premium on top.
Network connectivity and weight
The strength of many independent third party data centres often comes down to the ‘ecosystem’ effect of having multiple carriers build a point of presence in the facility.
When it comes to telecommunications costs and performance, it pays to be at the centre of the universe rather than off in a far-flung galaxy.
The largest third party facilities have upwards of 20 or 30 carriers with a point of presence installed.
Network connectivity is also an important consideration if your organisation has already signed a national telecommunications agreement with a telco (or a supply agreement with a power utility, for that matter).
You need to check whether that telco has a point of presence in the facility you’re looking to move into – as an enterprise-wide agreement usually offers rates cheaper than the list prices on offer at the data centre.
The NSW Government, for example, recently stipulated that it would pay power prices negotiated under its whole-of-government arrangement with Energy Australia at two new data centres being built in partnership with Metronode, rather than the usual price Metronode would charge.
Similarly, any large organisation or government department should check what existing power and connectivity contracts are in place and assess whether they continue to provide value. Assuming they do, seek to extend them to new data centre facilities.
A more recent consideration for many enterprise customers is whether a new data centre can handle the weight of the infrastructure being deployed.
Floor loading wasn’t much of an issue when many of Australia’s legacy data centres were constructed, but it has reared its head as the density of virtualised server and storage infrastructure has accelerated.
“With some of the new equipment we’re seeing racks weigh in excess of a tonne,” reports Andrea.
Without the necessary structural support in place, this has the potential to damage the concrete foundations of the facility. Andrea notes that the first sign of trouble is cement dust on the floor underneath the equipment.
“Luckily [I've] never seen anyone exceed this to the point of it becoming a structural issue – but I have seen slabs start to bow, and structural engineers called in to review and determine whether the slab is indeed handling the load,” Andrea said.
Clients thus need to know the weight of their infrastructure and seek the floor-loading limit from potential suppliers before signing a deal.
The better data centre providers often include a structural statement for the building in the contract, or at the very least include a floor-loading rating per square metre.
One large data centre we spoke to for this guide set a rack weight limit of 1000kg – for racks in excess of this, the company mandates the installation of additional struts at relatively low cost.
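A quick sanity check you can run yourself – the rack footprint and floor rating below are hypothetical, and a real assessment should also account for how the load spreads beyond the rack’s footprint:

```python
# Converting a rack's weight into a floor load and checking it against a
# quoted rating. Footprint and rating figures are hypothetical.

RACK_WEIGHT_KG = 1000
RACK_FOOTPRINT_M2 = 0.6 * 1.2        # a 600mm x 1200mm rack footprint
FLOOR_RATING_KG_PER_M2 = 1200        # as quoted by a hypothetical provider

load_kg_per_m2 = RACK_WEIGHT_KG / RACK_FOOTPRINT_M2
print(f"Load: {load_kg_per_m2:.0f} kg/m2 against a rating of "
      f"{FLOOR_RATING_KG_PER_M2} kg/m2")

if load_kg_per_m2 > FLOOR_RATING_KG_PER_M2:
    print("Over the rating: ask about load spreading or additional struts")
```

Note how quickly a one-tonne rack exceeds a seemingly generous rating once its weight is concentrated on well under a square metre of slab.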
If the contract doesn’t include floor-loading information, be sure to request it. If you’re still uncertain, consider commissioning an independent structural engineering firm to perform due diligence on the building.
A customer with heavy racks tends to be a profitable one for most data centre providers, so they will most probably oblige (often on the proviso that you sign a non-disclosure agreement).
Availability and tiers
Ultimately, determining the price you wish to pay for a data centre service also comes down to how often you are prepared to tolerate downtime.
Most third party data centres in Australia will advertise their facility as being ‘Tier II’ or ‘Tier III’, referring either to a set of standards developed by the Uptime Institute or compliance with the ANSI/TIA-942 standard set by the Telecommunications Industry Association.
The TIA-942 standard is essentially a best practice guide to building your own data centre.
Unfortunately, it is the standard developed by the privately-owned Uptime Institute that places the greatest scrutiny on availability of services and is thus the standard most used – unfortunate only because the Institute has at times been accused of being secretive about its evaluation criteria.
At a high level, Uptime’s Tier I rating means that there is no redundancy built into the mechanical and electrical systems at the facility. A customer would have to tolerate close to 30 hours of downtime per year (99.67 percent availability).
Tier II adds one layer of redundancy such that some mechanical and electrical components can be swapped in and out for maintenance, but the facility nonetheless has only a single power or water-cooling path to the equipment – equipment that would not be resilient in many unplanned outages. Customers would have to tolerate around 22 hours of downtime per year (99.75 percent availability).
A Tier III rating takes the facility to a whole new level. The facility must be ‘concurrently maintainable’ - offering multiple, redundant paths (dual-power) to all equipment such that any component can be swapped in and out without interruption to services.
This expensive and complex architecture allows for only around 1.6 hours of downtime per year (99.982 percent availability) – still short of the ‘five nines’ (99.999 percent) that some enterprise applications demand.
A truly mission-critical system – an online banking platform, for example – might require a Tier IV facility. This means that all components are fault tolerant, often with a level of automation between the various paths, ensuring less than half an hour of downtime per year (99.995 percent availability).
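The downtime figures above follow directly from the availability percentages. A quick script to convert one into the other, using the tier availability figures as cited in this guide:

```python
# Converting an availability percentage into hours of downtime per year.

HOURS_PER_YEAR = 8760

for tier, availability in [("I", 0.9967), ("II", 0.9975),
                           ("III", 0.99982), ("IV", 0.99995)]:
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"Tier {tier}: {availability:.3%} available "
          f"-> {downtime_hours:.1f} hours down per year")
```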
As past iTnews investigations have shown, most Australian data centre owner/operators are not prepared to invest in gaining official certification from the Uptime Institute – the exceptions being new builds by Macquarie Telecom and Metronode.
A majority of facilities are ‘self-rated’ – which means that no independent auditor has rated the facility, let alone an auditor certified to bestow an official tier rating on behalf of the Uptime Institute.
So be wary of those advertising to be “Tier III-compliant”, “Tier-III-level” or of “Tier III-standard”. If the facility isn’t officially certified, any such claim needs to be assessed via an independent audit.
It’s also important to remember the old adage – you don’t get something for nothing. Any investment your data centre provider makes in redundant equipment to meet higher Tier levels comes at a cost – a cost in upfront capital and in higher power bills to run more equipment.
You’ll have to bear that cost in your monthly bill. So it is vital you talk to stakeholders within the business to understand what an acceptable level of risk (downtime) is for the applications you are looking to host.
An equally difficult job is determining whether a data centre can actually deliver on its availability promise.
Most data centres will offer an availability guarantee as part of a wider SLA (service-level agreement) under your contract. This promises that the data centre provider will offer service credits for any outage beyond a specified ‘acceptable’ number of hours of downtime.
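As a purely hypothetical illustration of how such a clause might translate measured downtime into credits – the thresholds and credit percentages below are invented, not drawn from any real contract:

```python
# A hypothetical availability SLA with tiered service credits.

MINUTES_PER_MONTH = 30 * 24 * 60

def monthly_credit_pct(availability_pct: float) -> int:
    """Percentage of the monthly fee credited back for missing the SLA."""
    if availability_pct >= 99.982:   # promise met: no credit
        return 0
    if availability_pct >= 99.9:
        return 10
    return 25

downtime_minutes = 90   # suppose a 90-minute outage this month
availability = 100 * (1 - downtime_minutes / MINUTES_PER_MONTH)
print(f"{availability:.3f}% available -> "
      f"{monthly_credit_pct(availability)}% credit")
```

Note that even a 25 percent credit is unlikely to come close to covering the business cost of a 90-minute outage – which is the point of the paragraph that follows.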
But although an SLA provides a financial incentive for the data centre provider to invest in keeping its services online, that’s no guarantee that services won’t go down, or for how long, as even the largest cloud providers have demonstrated.
So how do you find out the operational history of a given facility? How do you assess the risk of failure?
There are some more formal steps you can take, which we’ll discuss in Part 2 of this guide. But aside from that it is a bit of a leap of faith based predominantly on market reputation.
A free subscription to iTnews’ bulletin – which keeps the industry in the loop on outages – is certainly a good start!
We hope Part 1 has given you a good foundation on site selection before we jump into due diligence and contract negotiations in Part 2.