The suite of technologies, collectively dubbed a virtual data centre operating system (VDC-OS), allows servers, storage and network resources to be treated as a single, giant computer.
Speaking at his company’s annual global conference in Las Vegas, VMware chief executive Paul Maritz said the software dynamically allocates computing resources to applications based on their changing workloads.
The software also blurs the line between internal IT resources and those offered by third-party service providers. Applications can be configured to automatically use external resources during times of peak demand, ensuring required performance levels are maintained.
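The "spill over to external resources at peak demand" behaviour described above can be sketched as a simple placement rule. This is an illustrative sketch only: the function name, capacity figure and return format are assumptions made for this example, not VMware interfaces.

```python
# Hypothetical sketch of cloud bursting: serve demand from the internal
# data centre first, and overflow to an external provider only when
# demand exceeds what is owned in-house.

INTERNAL_CAPACITY = 100  # arbitrary units of compute the company owns


def placement_for(demand: int) -> dict:
    """Split an application's demand between internal and external
    resources, spilling over to the cloud only at peak load."""
    internal = min(demand, INTERNAL_CAPACITY)
    external = max(0, demand - INTERNAL_CAPACITY)
    return {"internal": internal, "external": external}


print(placement_for(60))   # off-peak: served entirely in-house
print(placement_for(140))  # peak: the overflow goes to the external cloud
```

The key point of the design is that the application itself is unchanged; only the placement decision moves between internal and external capacity.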
The concept builds on the rapidly growing interest in cloud computing and its ability to reduce IT expenditure while providing access to scalable resources.
“Companies need more freedom about where they pull their computing resources from,” says Maritz. “It’s no longer about doing it all internally or all externally.”
The VDC-OS concept moves the centre of computing away from traditional operating systems to a new abstracted management software layer. This layer effectively becomes a new, broader operating system that spans all the equipment in an infrastructure.
Taking this approach also allows many of the time-consuming tasks associated with managing an IT infrastructure to be streamlined and automated, including backup and recovery, security management and application patching.

In essence, VDC-OS comprises three components. The first, called Infrastructure vServices, pulls together a company’s servers, storage and networking gear into a single pool of resources that is then allocated to applications as required.
The second component, Application vServices, guarantees pre-set levels of availability, security and scalability of those resources to individual applications. The third, Cloud vServices, provides the link between in-house systems and external cloud-based resources.
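The three components above can be loosely modelled as layered services: a pool of hardware, per-application guarantees drawn from that pool, and a bridge to external capacity. The class and method names below are assumptions made for this sketch, not part of VMware's actual product interfaces.

```python
# Illustrative model of the three VDC-OS components described in the
# article. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class InfrastructureVServices:
    """Pools servers, storage and networking into one resource pool."""
    cpu_units: int
    storage_tb: int

    def allocate(self, cpu: int, storage: int) -> bool:
        """Hand out resources from the pool if enough remain."""
        if cpu <= self.cpu_units and storage <= self.storage_tb:
            self.cpu_units -= cpu
            self.storage_tb -= storage
            return True
        return False


@dataclass
class ApplicationVServices:
    """Guarantees a pre-set resource floor to an individual application."""
    min_cpu: int

    def enforce(self, pool: InfrastructureVServices) -> bool:
        return pool.allocate(self.min_cpu, storage=0)


class CloudVServices:
    """Links the in-house pool to an external provider's capacity."""

    def burst(self, cpu_needed: int) -> str:
        return f"requested {cpu_needed} units from external cloud"


pool = InfrastructureVServices(cpu_units=32, storage_tb=10)
app = ApplicationVServices(min_cpu=8)
print(app.enforce(pool))           # True: guarantee met from the pool
print(pool.cpu_units)              # 24 units remain in the pool
print(CloudVServices().burst(16))  # overflow is handled externally
```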
“We are moving fundamentally away from a device-centric world to one that is application, information and people centric,” Maritz told the more than 12,000 people who attended his keynote presentation at the event.
“It’s about how we take infrastructure and treat it as a common substrate that allows services to be provisioned to users in a much more flexible way without having to change the underlying infrastructure.”
Kevin McIsaac, an analyst with technology advisory firm IBRS, likens such new infrastructures to traditional mainframe computers.
“It’s essentially mainframe 2.0,” he says. “The typical commodity IT infrastructure will be reinvented from today’s network of independent servers and storage into a unified computing resource that looks and behaves remarkably like the old mainframe.”
“This new infrastructure will blend the best attributes from each architecture to create a highly agile, robust and cost-effective environment that is based on commodity components.”
However, McIsaac says that, while the key technologies needed to achieve such a vision are available today, pressures within companies mean the transition could take many of them around seven years to complete.
“There can be an attitude that, if things are not broken, don’t fix them,” he says. “It takes many companies time to change their mindset. There are also factors such as sunk capital cost.”
Maritz says the linking of internal IT infrastructures with external computing clouds has ‘tremendous potential’ for businesses but at the moment the concept is being held back by technical inconsistencies and a lack of standards.
With this in mind, VMware has created a certification program for partners to ensure their offerings are compatible and customers are able to move to and from them with ease.
More than 100 companies have already joined the program and are working to provide cloud-based services such as storage, processing capacity and disaster recovery.
Ian Grayson travelled to Las Vegas as a guest of VMware.
VMware touts 'mainframe 2.0'
By Ian Grayson on Sep 17, 2008 9:15AM