Interview: Uptime Institute founder, Kenneth Brill

The man who created the data centre tier system.

Kenneth Brill, visiting Sydney for the DC Strategics conference later this month, talks about the need for data centre resiliency with iTnews editor Brett Winterford.

iTnews: What led you to come up with the Tier system as we know it?

KB: It goes back some 18 years. We were working for the United Parcel Service as consultants. Prior to 1989, UPS had been a package delivery company. Computing was a back-office function for them.

But that changed when they went into the overnight delivery business. Suddenly computing became critical to the life of the company.

The brand new data centre UPS had just built became obsolete overnight. The data centre was designed for back-office functions for a package delivery company. It was inadequate for an airline business.

What you have to remember is, the worst time for an airline business to be down is at night. You absolutely must have IT operational at night. Almost all airlines run functions on their mainframes overnight that, if they weren't done, would put the airline out of business.

In the case of UPS, an overnight IT outage back then could mean that two million packages were not delivered the next day. When you consider that UPS offered a ten dollar refund in the event your package did not go to its destination the next day, you are looking at every IT downtime event potentially costing them $20 million. A couple of those events and you could have afforded to build a new data centre.
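The outage arithmetic quoted above can be sketched in a few lines. This is purely illustrative, using only the figures Brill gives in the interview (two million packages, a US$10 refund each):

```python
# Back-of-the-envelope cost of one overnight IT outage at UPS,
# using the figures quoted in the interview.
packages_per_night = 2_000_000
refund_per_package = 10  # US dollars per undelivered package

outage_cost = packages_per_night * refund_per_package
print(f"Cost of one overnight outage: US${outage_cost:,}")  # US$20,000,000
```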

Take yourself back to the late eighties - business worked differently then. You could shut IT down to do maintenance. It's amazing to think that only twenty years ago or so, that's the way things were.

We were contracted as consultants. We were just doing at the time what people do when they approach a problem - we applied new thinking. What we came up with became the standard for the rest of the industry.

What we worked out was a way to make power and cooling no longer be a gate for the rest of the business.

What made you decide to publish and copyright the standard?

KB: We published the first standard in 1996. One of our clients was a guy from Hewlett-Packard. He wanted a simple way to explain to his management why he was asking for US$50 million to build a data centre.

This is what is so interesting about this standard. It really has been a user-driven standard. Most standards are driven by people who have something to sell. This is a standard driven by people who want to buy something.

How often is the standard used?

KB: There are plenty of people that use and abuse this standard.

Anybody who makes a self-claim to a tier level needs to be looked at with a jaundiced eye.

Only accredited data centres are listed on the Institute's web site.

We have a process of evaluation as to what tier level a data centre is. When we look at some of the data centres that make their own claims, we find that they tend to be off by at least one tier, if not two. This can be very disappointing to the owner, who spent good money but missed the mark.

Oftentimes they could have fixed it had they got some help in the beginning - usually it costs no more to do it the right way.

Can you give us an example of a common mistake?

KB: In a Tier III data centre, you should be able to do maintenance on any part of the mechanical or electrical plant without IT shutting down. You should be able to draw a circle around every part of the data centre and say that you can function without it. It's as simple as that. But so many data centres that call themselves Tier III would require IT to be shut down to perform maintenance on plant. In practice, that means the plant isn't maintained. When it fails, it will be a much more catastrophic failure.

Or another - I was reviewing a data centre not too long ago. The capacity they wanted to have was 10 megawatts (MW). Because of how they arranged things, under concurrent maintenance they could only use 6 MW of that 10 MW - the investment in the other 4 MW of power was lost. Now I am not sure that most customers would even have found the problem, but the people in our company look at 50 to 100 projects a year.
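The stranded-capacity problem Brill describes can be expressed as a simple calculation, again using only the numbers he quotes:

```python
# Installed electrical capacity vs. what remains usable while any one
# component is taken down for concurrent maintenance (Brill's example).
installed_mw = 10
usable_under_maintenance_mw = 6

stranded_mw = installed_mw - usable_under_maintenance_mw
stranded_fraction = stranded_mw / installed_mw
print(f"Stranded capacity: {stranded_mw} MW ({stranded_fraction:.0%} of the investment)")
```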

Why don't more data centres engage the Uptime Institute for an official rating? Is it expensive?

KB: The cost is a fraction of a percent of the project. We don't make any money. We serve a validation function.

It is important to note that very few businesses can justify a Tier 4 standard. Tier 4 is fault tolerant - tolerant of any equipment failure. We don't even believe that most data centres have the business failure consequences to justify Tier 4. An outage would have to impact earnings for the quarter to justify Tier 4.

Many people need Tier 3, however. Google is a Tier One design and that is absolutely the right business decision for them. Our own little computer room is Tier One - and that is the right decision for us.

We are agnostic to what tier level you should have. If you do justify Tier 4, you need to know that it's what you are getting. It will more than pay off the small amount of money you pay to do this.


Is it just the plant you need accredited? Or process?

KB: Seventy percent of failures are not equipment failures. They are caused by people. If you do not have good human controls, you can be in trouble. One of the most common human failures in a Tier 4 data centre is when humans confuse the two sets of redundant equipment. Somebody plugs equipment into system B instead of system A. Or the vendor is working in the data centre and doesn't comply with the rules, and shuts things off in an unintended way.

Are Australia's major data centres accredited?

KB: We see tremendous interest in Australia and more broadly in the Asia Pacific. We have projects underway.

Do you have resources in the region to inspect and consult with local data centres?

KB: As it happens we have followed the model of LEED - Leadership in Energy and Environmental Design - an accreditation which consultants can attach to their name. A person can qualify as an Accredited Tier Designer. We have certified 184 engineers in under 12 months. There is only one accredited person in Asia, in Taiwan, and we are running a course in Taiwan next week. We hope 15 or 20 come out of that.

A lot of data centres in Australia are being built in shipping containers...

KB: I'll stop you right there. "Container data centre" is a misnomer. They are container computer rooms. A data centre is composed of several systems: the mechanical and electrical plant that drives it, the computer floor, and the comms gear moving information in and out. A computer room represents twenty percent or less of the total cost of a data centre. Eighty percent of the cost is somewhere else.

Now, a computer room in a container is cheap. But the cost is in the mechanical and electrical plant that drives it. Minimising space in a computer room doesn't save a lot of money.

So what will you be speaking about in Sydney?

KB: I will be talking about the meltdown of the benefits we have been enjoying from Moore's law. Moore's Law says everything gets cheaper and cheaper, but what we have not been paying attention to is that the energy consumption is not dropping at the rate of the performance increase. We are spending more and more on energy - the fundamental reason the cost of data centres has risen so dramatically. The reason the cost of IT is rising is sitting in a line item called the data centre.

Where are the costs? The operations cost like power bills or the capital cost of the plant?

KB: The utility cost is significant, but not as important as the capital cost. It is the cost of providing the place in the computer room - the rack where you put a server. A rack server costs on average between US$1,300 and US$2,000. Now, the capital cost to provide the real estate, power and cooling is US$16,000 per server - larger by a factor of eight. Electricity to run that server is US$500 a year. So the capital cost of that US$16,000 of rack space, with a 15-year life, is about US$1,000 a year. That is twice the cost of the utilities.
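The per-server cost comparison above can be checked with the quoted figures. The 15-year straight-line amortisation is Brill's own assumption, and US$2,000 is taken as the top of the quoted server price range:

```python
# Annualising Brill's per-server data centre cost figures.
server_price = 2_000          # US$, top of the quoted US$1,300-2,000 range
capital_per_server = 16_000   # US$ for real estate, power and cooling
plant_life_years = 15         # Brill's assumed plant life
electricity_per_year = 500    # US$ per server per year

capital_per_year = capital_per_server / plant_life_years
print(f"Capital vs server price: {capital_per_server / server_price:.0f}x")  # 8x
print(f"Annualised capital: US${capital_per_year:,.0f}")                     # ~US$1,067
print(f"Capital vs electricity: {capital_per_year / electricity_per_year:.1f}x")
```

The annualised capital works out slightly above the rounded US$1,000 quoted in the interview, and roughly double the yearly electricity bill either way.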

The total cost is rarely appreciated, because it shows up in so many different places.

So you want to talk about data centre efficiency?

KB: I am going to give a five-step system - what I call the NNDC strategy - the No New Data Centre strategy - that could save US$120,000 a rack over four years. By NNDC I mean that you won't need to build a new data centre.

And these steps, they are relatively painless. It's not a new spending program.

Why aren't people doing it already?

KB: Efficiency has no constituency. If the average tenure of a CIO is 30 months, the CIO doesn't have the time to make IT truly efficient; that might take three or four years. Efficiency means upsetting people by eliminating waste.

Cutting every point of power consumption in a data centre in half is truly possible. But it has got to be painful.

Painful, but simple. When you hear them, you'll hit your head against the wall, they are so simple.

So can you start us off with one tip today?

KB: OK, just one. How about you turn off your comatose servers. Some of them are still consuming energy but not doing anything. If you bothered to take a look you might be able to take ten to thirty percent off the load.

There are gold nuggets lying all over the server room floor if someone is willing to bend over and pick them up.

Kenneth Brill is the keynote speaker at the DC Strategics conference in Sydney on August 13. His topic, the "meltdown of the benefits of Moore's Law", will address the increase in capital costs in most data centres.
