Dimension Data has won a $2 million, three-year deal to build a new disaster recovery solution for resources-sector engineering firm Ausenco.
The disaster recovery system, due to go live in three weeks, is based on a mix of Sun X4600 M2 servers, EMC's CLARiiON storage and Avamar data de-duplication software, Symantec's Enterprise Vault archival software, VMware virtualisation software and Cisco's Nexus switches.
Ausenco chief information officer Paul Young told iTnews the company had until now relied on conventional tape-based archive and recovery systems.
He wanted to build a solution that would scale up when Ausenco deploys new applications.
"I am pretty confident we'll move to a completely different application stack for our core processes in the near future," he said. "I wanted to address speed and reliability issues in the interim to provide better disaster recovery and availability to clients and customers."
The beauty of virtualisation, Young said, is that by abstracting the hardware, he can "at some later point drop new applications on existing hardware."
Sun server hardware was chosen for several reasons.
First, Young had experience using Sun hardware in his former role at Wotif.com.
"I got to know you get good linear performance and great access to the memory with a Sun server," he said. "It makes a lot of sense when you virtualise to buy the best equipment."
The Sun servers boast some 512GB of memory, according to Ronnie Altit, general manager for Data Centre Solutions at Dimension Data, which will save Ausenco considerably on VMware licences as "they can run many more virtual machines on one server."
The storage and networking parts of the puzzle use Cisco Nexus switches to provide a unified fabric between servers and the Storage Area Network. Equally important, Young said, is the use of Avamar's data de-duplication technology to minimise the amount of traffic sent between primary and secondary sites, and the use of tiered storage to cut costs.
The solution involves a mix of expensive front-end fibre channel storage plus low-cost SATA-based back-end storage.
"You ask the technical people and they'll say we've got two terabytes of data in the SAN, for example, and it's almost running out," Young said. "But then you interrogate that SAN and perhaps 1 terabyte worth of data hasn't been accessed for a year. So you throw in some lower cost trays of SATA disk and say, anything older than three months is to be shifted to lower cost storage."
The solution then uses VMware's VMotion technology to transfer server loads between a site in Brisbane, Queensland and a site in Perth, Western Australia over twin dedicated 50Mbps links.
Young expects the first sync between the two sites to be "huge", after which time the disaster recovery system will only transfer changes made to a given file.
"It's pretty intelligent how they do these things these days," he said.
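The change-only replication Young refers to rests on comparing block hashes between sites, so that after the first full sync only modified blocks cross the wire. Avamar's actual variable-length chunking is considerably more sophisticated; the following is a minimal fixed-block sketch of the idea, with the block size and function names chosen for illustration.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size, not Avamar's real chunking

def block_hashes(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Hash each fixed-size block so the two sites can compare data cheaply."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(local: bytes, remote_hashes: list) -> dict:
    """Return only the blocks whose hashes differ from the remote copy."""
    delta = {}
    for i, h in enumerate(block_hashes(local)):
        if i >= len(remote_hashes) or remote_hashes[i] != h:
            delta[i] = local[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
    return delta
```

On the first sync the remote side has no hashes, so every block is shipped (the "huge" transfer); thereafter a one-byte edit to a file costs one block of WAN traffic rather than the whole file.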
The architecture of the DR solution was drawn up well before Cisco had made available its new line of UCS servers, which can handle up to 384GB of memory per blade.
"We have a strong partnership with Cisco, and normally we would have worked with them," said Altit. "But when we recommend technology for our customers it has to be based on both suitability to the project and the project timeline. Cisco's Unified Computing System doesn't ship for another month or so, so it couldn't meet the customer deadline. But it could be an option in the future."
Equally, Ausenco's solution is based on VMware's 3.5i suite rather than vSphere, which likewise was not yet available when the project was architected.
"[vSphere] wasn't available at the time, or we would have considered it," Young said.
Dimension Data and Ausenco will build a proof of concept implementation of VMware's Site Recovery Manager in the near future.
Young said virtualisation technology has even more to offer than most people realise.
"I don't think some people understand the intrinsic benefits of the virtualisation exercise," he said. "You can lock down a configuration of your server image and take it elsewhere - it is a more distributable model for system images, making them transportable across your own network.
"You also get to utilise as much of your application stack as you need to at the time. If I'm doing an upgrade and need more resources, I simply allocate more memory and CPUs to it."