As data centres become the engine rooms of the modern enterprise, efforts are underway to blend the plant and IT infrastructure assets into a single, manageable framework.

In the past, said David Yip, data centre executive with IBM, there were “two divergent schools of thought” when it came to managing data centre assets: those responsible for facilities (the data centre’s physical assets) and those responsible for IT (the systems).
Developing a single management view of both sets of assets is “where all the action is now,” Yip said.
Today, everything from power rails to racks, server hardware and comms equipment is IP-enabled.
That has made the status and performance of these physical assets available to the monitoring and management tools that fall under the broad banner of “data centre infrastructure management”, or DCIM.

Rodney Gedda, an analyst with Telsyte, said most of his clients desire this holistic view of IT and plant.
“A recent trend is for the power hardware companies to develop software to hook into the IT – into things like server and networking equipment,” he said.
“Another trend is the deployment of integrated cloud computing ‘stacks’ or data centres-in-a-box that can feature [autonomous] monitoring of components, reporting heat and power consumption.”
Mike Andrea, director of the Strategic Directions Group, notes that some co-location providers have seen an opportunity to profit from this convergence.
Strategic Directions is among those service providers taking on licences for DCIM software and bundling the monitoring capability with the rental of racks in their facilities.
“The cost of the software licences can otherwise be quite prohibitive for smaller and medium-sized organisations,” he said.
Shifting the power dynamics
One of the key costs of operating IT equipment in the data centre is power, both for running the IT hardware itself and for running the cooling infrastructure required to keep that hardware at optimum performance.
Paul Tyrer, vice president of Schneider Electric IT, said the pressure to reduce power consumption is coming from the upper levels of management.
Savvy business managers want better oversight of energy and power management within the data centre and are “demanding visibility from the data centre management team,” he added.
Power is such a significant cost that it has driven two major trends in data centre design: the move towards cooler external environments with abundant hydropower, and the shift towards hardware that can tolerate higher temperatures inside the facility.
The first option, open to the Googles, Apples and Amazons of this world, is to build massive facilities in places like Oregon, USA, that boast cheap power and low ambient temperatures.
Most of Australia, by contrast, is a hostile environment for IT hardware, necessitating cooling.
One fallback option for Australian IT managers is to seek out new cooling and power management features in the latest generation of server hardware.
Servers from all the major brands have gradually become more capable of dealing with higher data centre temperatures.
ASHRAE, which publishes regularly updated guidelines on acceptable temperature envelopes, has steadily revised its recommendations for data centres upwards in recent years.
Yip said the average temperature of a new data centre has moved from a refrigerated 18 degrees Celsius to a positively balmy 25 to 27 degrees.
IT managers need to “manage the power and thermal profiles of the hardware, all the way down to the chip level,” said Andrew Cameron, data centre product lead for HP South Pacific.
Managing thermals means understanding everything from virtual machine performance on a server and the operation of the server itself to the attributes of the rack and row in which it is housed.
For many organisations it may be necessary to run a ‘thermal analysis’ of their infrastructure and move hotter-running hardware to cooler parts of the facility.
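As an illustration only, the Python sketch below shows the kind of ‘thermal analysis’ described above: grouping inlet-temperature readings by rack and ranking the racks so the hottest can be considered for rebalancing. The rack names and readings are hypothetical sample data, not drawn from any vendor’s tooling.

```python
# Illustrative sketch: rank racks by average inlet temperature to find hot spots.
# The readings below are made-up sample data standing in for DCIM/BMC telemetry.
from collections import defaultdict

# (rack, server) -> inlet temperature in degrees Celsius
readings = {
    ("rack-01", "srv-a"): 24.5,
    ("rack-01", "srv-b"): 26.1,
    ("rack-07", "srv-c"): 31.8,
    ("rack-07", "srv-d"): 33.2,
}

by_rack = defaultdict(list)
for (rack, _server), temp in readings.items():
    by_rack[rack].append(temp)

# Hottest racks first: candidates for moving hot-running hardware elsewhere.
for rack, temps in sorted(by_rack.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    avg = sum(temps) / len(temps)
    print(f"{rack}: average inlet {avg:.1f} C across {len(temps)} servers")
```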
Monitoring equals managing
Many servers today include embedded management controllers that autonomously measure critical health and performance metrics, exposed through standards such as the Intelligent Platform Management Interface, or IPMI.
IPMI was originally developed by chip giant Intel and has since been adopted worldwide by more than 200 hardware manufacturers.
IPMI is key to the running of modern data centres. It is an out-of-band standard, which means control functions can be invoked independently of the host operating system, even before it has booted.
Administrators can use IPMI to remotely diagnose a server or rack and make BIOS or system changes, even if the system is powered down or has suffered a failure.
IPMI can also be used to control thermal loads, for example in the event of a fan failure on a server.
“As the temperature rises, the usual response would be to shut down the entire server,” a representative from Cisco Systems told iTnews. “With IPMI, the clock of the server’s processor can be slowed, bringing the temperature back under control.”
In doing this, the server remains functional, if a little slower, and instrumentation within the server alerts service technicians so a repair or replacement can be scheduled at an appropriate time.
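By way of illustration, the sketch below queries a baseboard management controller out of band from Python using the open-source ipmitool utility, reading the chassis power state and the temperature sensors. The BMC address and credentials are hypothetical placeholders; this is a generic IPMI example rather than the tooling of any vendor quoted here.

```python
# Illustrative out-of-band query of a server's BMC via IPMI.
# Assumes ipmitool is installed and a BMC is reachable at the placeholder address.
import subprocess

BMC_HOST = "10.0.0.42"   # hypothetical BMC address
BMC_USER = "admin"       # hypothetical credentials
BMC_PASS = "password"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over the LAN (lanplus) interface."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Power state is available even when the host operating system is down.
print(ipmi("chassis", "power", "status"))

# Temperature sensors exposed by the BMC, such as inlet and CPU temperatures.
print(ipmi("sdr", "type", "temperature"))
```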
Maintenance: from outside the data centre
Most of the data DCIM systems gather from physical plant and servers can be fed into web-based dashboards.
Tyrer said a dashboard would usually offer management an overview of the entire data centre.
“There are three perspectives,” he added. “There’s a rear view of what’s happened. Then there are analytics for real-time management of the facility. Finally, there are forward-looking predictive and planning tools for the management of the facility.”
Because most devices in the data centre are now IP-enabled, it’s relatively easy to create apps to access them – so long as the APIs are exposed, noted Gedda.
“It’s also standard for alarms to be sent via e-mail and high-priority messages can be routed via an SMS gateway,” he said.
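As a rough sketch of that alerting pattern, the Python example below polls a hypothetical DCIM REST endpoint for sensor readings and e-mails an alarm when a temperature threshold is breached; an SMS gateway is often just another address on the same mail relay. The endpoint URL, threshold and addresses are illustrative assumptions, not any particular product’s API.

```python
# Illustrative alarm routing: poll a (hypothetical) DCIM API and e-mail alarms.
import json
import smtplib
import urllib.request
from email.message import EmailMessage

DCIM_URL = "http://dcim.example.com/api/sensors"   # hypothetical endpoint
TEMP_ALARM_C = 32.0                                # illustrative threshold

# Expecting JSON like: [{"name": "rack-07 inlet", "temp_c": 33.2}, ...]
with urllib.request.urlopen(DCIM_URL) as resp:
    sensors = json.load(resp)

alarms = [s for s in sensors if s["temp_c"] >= TEMP_ALARM_C]
if alarms:
    msg = EmailMessage()
    msg["Subject"] = f"DCIM alarm: {len(alarms)} sensor(s) above {TEMP_ALARM_C} C"
    msg["From"] = "dcim@example.com"
    msg["To"] = "facilities@example.com"   # an SMS gateway address works the same way
    msg.set_content("\n".join(f"{s['name']}: {s['temp_c']} C" for s in alarms))
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)
```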

The next level of data centre monitoring is to bring alerts and dashboards to the mobile devices of facilities and IT managers.
Data centre provider NextDC has developed its own monitoring and maintenance tool called ONEDC.
This proprietary customer portal is integrated with NextDC’s building management and ticketing systems, offering customers a “single point of access for managing their entire colocation service with us,” said NextDC chief executive Craig Scroggie.
Crucially, Scroggie said, it gives customers visibility they didn’t have in the past.
Gedda said mobility “is a hot trend in monitoring right now.”
Tyrer suspects it is because these tools truly “empower” the data centre manager.
New services and applications are required at such speed, he said, that facilities and IT managers often don’t have the information they need at hand.
“It’s a people and process issue,” he said.
“The responsible organisations deploying DCIM are not just selling software, they should be consulting and advising around the people and the process as well as the tools,” he said.
“If you are deploying DCIM it has benefits to the organisation, but if you ignore the people, process and management, you will not realise the benefits.”
iTnews will lead a thinktank discussion on the next generation of data centre monitoring tools at February's Data Centre Strategy Summit on the Gold Coast.