In several large data centres in Australia, the US and Europe, you’ll find one or more Cisco-branded switches sitting in isolation, sometimes separate from hot and cold aisles, housed in large, cumbersome cabinets built by the likes of Chatsworth or APC.
You’re most likely looking at a Cisco Nexus switch, and if you’re playing in the IT big leagues (social networks, e-commerce, banks and the like) there is probably at least one of them in your data centre.
Facilities managers — as a rule — tend to be precious when it comes to floor space, and none would have planned for these switches to sit alone. Most only discovered the problem when they pulled the kit from the cardboard.
The problem is air flow. Most servers, and increasingly the top-of-rack switches that connect them, take cool air in at the front of the device and exhaust hot air to the rear. Arranged in alternating rows, this is the hot aisle/cold aisle design, which helps facilities managers save on air conditioning costs by isolating hot exhaust air, funnelling it towards the ceiling, and directing cool air where it’s required.
But three of Cisco’s four current Nexus switches operate counter to this philosophy. The Cisco Nexus 7009 and 7018 both draw air in one side of the chassis and exhaust it out the other, rather than front to back, while the 7004 also takes air in from the side. Facilities managers report that older Nexus 2000 series switches are different again, taking cold air in at the back and expelling hot air out the front.
“In extreme cases, we need to install the switch backwards and run the cables from the back of the server to the front to make it work,” one data centre manager told me. Another attempted to house a Nexus 7018 on its side. Neither solution is ideal.
The double-edged sword
Cisco took these alternative cooling designs to market in the name of port density.
To the company’s credit, it put me in touch with the very helpful Craig Maynard, regional sales manager for Cisco’s unified fabric technology, who explained why its biggest data centre switches are increasingly cooled side to side.
Choosing a switch today, he explained, “comes down to planning and designing where you place these devices.
“Historically, side-to-side cooling was efficient for campus networks and environments without higher density requirements.”
But in the data centre, customers have instead flocked to top-of-rack switches with front-to-back cooling to gain “a consistent approach to airflow.”
Cisco is itself a big proponent of energy efficiency, and has released numerous informative whitepapers on data centre cooling.
But for many customers, Maynard explained, energy efficiency runs an unfortunate second place behind the need for greater density.
“Rack space is at a premium in most facilities,” he said. “So we need the higher density in the smallest form factor possible.”
One solution for customers chasing density is an ‘end of row’ network topology rather than a ‘top of rack’ configuration. End-of-row switches tend to offer the server team fewer devices to manage and a lot more ports.
“It comes down to port density,” Maynard explained. “By horizontally mounting the line cards on [an end of row] switch, we gain a higher port density in a reduced form factor. And the most efficient area to cool becomes the side of the device. Whereas if we vertically mount the line cards [as is common in top-of-rack switches], we restrict the number of ports across the chassis.”
Of Cisco’s four current generation Nexus data centre switches, only the Nexus 7010 offers the front-to-back cooling that fits the uniform hot aisle/cold aisle design.
The 7010 offers 285 10Gb ports. But if you’re chasing density, and many large Nexus customers are, the Nexus 7018 offers 769 10Gb ports, roughly a 2.7x improvement.
For large providers that require both port density and a hot aisle/cold aisle configuration, Cisco recommends the purchase of third-party cabinets or baffles that “redirect air into the hot aisle”.
That’s good news for custom rack manufacturers, but bad news for end users, who have just added a significant premium onto the price of their switch purchase.
So why are customers buying the wrong switch?
That so many large customers have purchased the wrong Nexus switch for their needs is another matter altogether.
In surveying its readers, iTnews spoke to facilities managers from one of the world’s largest social networks, one of the world’s largest travel companies and one of Australia’s largest banks, and all have buyer’s remorse. Surely these large organisations have the resources to do due diligence on the purchase of such expensive equipment?
There is a relatively simple explanation. It wasn’t through customer ignorance or a lack of choice from the vendor that the wrong unit was delivered.
One customer guided iTnews through the automated purchase order process Cisco’s top-end resellers use to help inform a customer’s purchase decision. The process appears overly reliant on speeds and feeds, and does not take into account airflow or other considerations pertinent to the facilities manager.
“It’s like you need to know the secret code to get a correct airflow model,” one data centre manager explained. “The order process asks you about speeds and feeds and then tells you the model you should get, so I don't blame the comms guys. They entered the specs we required and got sent the box. And by the time the box arrives, it is too late for a change of heart - most projects cannot wait months for a replacement.”
The lesson here is clear: enterprise customers are going to have to do more research before they buy. Comms and server managers will need to involve facilities staff in the purchase process. And none should rely exclusively on automated product selection processes.
The jury remains out on end-of-row versus top-of-rack. Neither Maynard nor any of the analysts queried on the matter is willing to place a wager on where the industry will settle. Even as Cisco pushes end-of-row, Facebook’s Open Compute effort wants more intelligence in top-of-rack.
Where are you placing your bets? Fill out our brief poll on data centre cooling and we’ll release the results at the Data Centre Strategy Summit in early 2013.