
Over the last ten years, I’ve been part of a team that’s chartered with predicting the future of the systems and server business. We look at technology trends and identify disruptive changes that we think are going to be significant and then act as change agents to ensure IBM will be in a good leadership position to deliver value to customers before our competitors.
It’s an exciting world. It means you get to do a lot of research, a lot of speculating in some cases of what might happen, and then try to really identify where the high-value breakthroughs will be.
One of those is the whole play around virtualisation. That really got our attention at least six years ago. Obviously virtualisation is not a new technology, it’s been around longer than I have. I joined IBM 40 years ago and we had already invented the hypervisor when I got there. It’s taking on a much more significant role going forward.
What are the implications?
A lot of the complexity and cost today comes from the fact that data centres are relatively ad-hoc in their structure. They’ve evolved piece-wise over the years because they don’t have the benefit of having been engineered the way a car or aircraft is. They’re a quilt of stuff, and the elements are interconnected in ways that make changes very difficult, complex and subject to failure.
Virtualisation becomes valuable in this setting as a way of decoupling the elements of the data centre. You can take the logical resources – the software – and the hardware resources, and decouple them with the right virtualisation so that the continuing changes in the physical environment don’t impact the software assets.
The software assets in the data centre are the most valuable and expensive from a development viewpoint for customers. If you can help customers protect those assets so that they only really need to change the software when the time is right, there’s immense value.
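As a purely illustrative sketch of that decoupling – not IBM’s own tooling – the open-source libvirt API shows the idea: a guest is defined against logical resources, so a live migration can move it onto different physical hardware without the software stack it hosts noticing. Host and guest names here are hypothetical.

```python
# Illustrative only: a virtual machine defined against logical resources can be
# live-migrated to new physical hardware without changing the software assets.
# Assumes libvirt-python and two reachable QEMU/KVM hosts; names are made up.
import libvirt

src = libvirt.open("qemu:///system")                  # current physical host
dst = libvirt.open("qemu+ssh://standby-host/system")  # replacement hardware

dom = src.lookupByName("erp-app-server")              # the logical (virtual) machine

# Move the running guest; its OS, middleware and applications are untouched.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST)

src.close()
dst.close()
```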
Can we get your take on the hypervisor space at the moment? Are people really buying on price in the virtualisation space?
I doubt they are. My impression is that people are looking at who has the most complete and easy-to-use package. I think you’re going to find that as time goes on, Microsoft, VMWare and the open source alternatives are going to asymptotically approach the same level of functionality and ease-of-use. So while today there are differentiators – for example, this vendor is a bit ahead of that one – it won’t be very many years before all of those choices the customer has are going to be comparable in capability.
Happily for IBM, none of them really come close yet to matching what we have on mainframes and our Unix servers. We still enjoy some significant advantages in terms of scalability and availability. I/O virtualisation is also an area where the industry technologies haven’t really yet come close to delivering everything customers want.
Does the gradual mainframe-to-x86 transition for certain mission-critical workloads pose a problem for IBM? Obviously on x86 customers will use VMWare or that type of hypervisor.
I don’t think it poses any new problem. I think there’s been an increased use of x86 architecture. It’s obviously a huge fraction of data centres today.
If anything, the fact that their virtualisation still lags what we have on our Power systems and mainframes, and the fact that virtualisation is becoming more important, actually gives us more of a differentiator.
In talking with customers in general I think that’s one of the significant buying criteria that motivate customers to use mainframes and our Power systems – that they have so much stronger and more mature virtualisation technology than you can get from any x86 vendor. It becomes a buying criterion and I think that’s driven a lot of [our] success.
You don’t think the mainframe market will marginalise further as x86 momentum continues?
I think that virtualisation is slowing that down as opposed to accelerating it. There’s an ongoing trend towards commoditisation and people moving to low-cost hardware, but that was happening all along. When you look at virtualisation, it actually gives the mainframe and our Power RISC systems a stronger position, and actually I think motivates customers to go with them rather than go with commodity stuff which is weaker in the area of virtualisation.
A lot of supercomputer functionality is starting to come down to the workstation level. Is that something you’re seeing? Do you see Microsoft as a competitive threat?
I see supercomputers as predominantly a Linux and Unix space. From what I’ve seen, most of the consumers of that technology are gravitating towards Linux rather than Windows.
Do you think in the minds of researchers there won’t be supercomputers other than those based on Linux or Unix?
It’s hard to say never, but that seems to be where all the momentum and focus is. If you go to universities, the fact that it’s open source and has a vibrant community around it makes Linux a very attractive platform for technical computing. I think that will continue to happen.
On Google Trends, there’s quite an interesting trend unfolding in search terms. In particular, ‘grid computing’ is trending down in terms of searches, but in the last year ‘cloud computing’ searches have gone through the roof. Is this a case of semantics or something more fundamental?
No, I think there’s a fundamental advance taking place. If you look at what happened with grid computing, fundamentally it was in many ways pioneering what’s needed today for cloud computing. But it didn’t have at its disposal the wealth of virtualisation technologies that are now becoming available, as well as the new application platforms that are just emerging as part of the cloud computing space.
When you put those two factors together – along with other forces, like business models that just weren’t there for grid computing – I think cloud computing is going to be a much bigger deal. It builds on grid computing in many ways but ultimately it’s going to be to grid computing what the web was to the Internet. That’s my view. It’s part of that evolution, but I think ultimately it’s going to be a much more successful and important phase in the evolution of the Internet.
What do you think of some of the emerging service models like Amazon’s EC2 cloud service? Is that something that will increase and that IBM will potentially try and get a slice of?
We do that sort of stuff already. We have a high performance team that has successfully deployed ‘clouds’ all over the world. I would guess in the order of 50-100 deployments. We’ve partnered with Google and other technology vendors and implemented clouds at universities and enterprises. We’ve had significant success around these offerings.
In terms of bringing that to the enterprise, we’ve also created the IBM Research Compute Cloud, which is a showcase for how you can take one of the biggest problems in data centres today and overcome it with cloud computing technology.
The problem that many data centres face – which is true of IBM Research – is you have a bunch of departments in the organisation, each of which has their own budget and can choose their own IT resources. You don’t have to be a visionary to see where that will lead – you end up with a situation where each department chooses what’s optimal for its next project but, as time goes on, some of those projects end, some of the resources go unused, and you end up with a collection of under-utilised assets. This is common, and I hear a lot of customers pointing to this as a major problem.
What the IBM Research Compute Cloud did was pool those resources, add a layer of Tivoli management software to do the monitoring, sharing and provisioning, and then, most important of all, add a self-service portal that makes it easy for an individual department to sign up for resources for periods of time. It’s now a very direct and painless process; they don’t have to go through a three-month approval cycle and copious paperwork. They can immediately go and reserve resources from the menu.
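A minimal sketch of what such a self-service reservation flow might look like behind a portal is shown below. The classes, resource names and capacity figures are invented for illustration; this is not the actual IBM Research Compute Cloud, Tivoli or portal interface.

```python
# Hypothetical sketch of a self-service reservation flow over a pooled compute
# cloud. All names and numbers are illustrative assumptions, not IBM APIs.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Reservation:
    department: str
    servers: int
    start: date
    end: date

@dataclass
class ComputePool:
    capacity: int                                   # total pooled servers
    reservations: list = field(default_factory=list)

    def available(self, start: date, end: date) -> int:
        """Servers still free over the requested window."""
        booked = sum(r.servers for r in self.reservations
                     if r.start < end and start < r.end)
        return self.capacity - booked

    def reserve(self, department: str, servers: int, start: date, days: int) -> Reservation:
        """Immediate, menu-style reservation: a capacity check, not an approval cycle."""
        end = start + timedelta(days=days)
        if servers > self.available(start, end):
            raise RuntimeError("not enough pooled capacity for that window")
        r = Reservation(department, servers, start, end)
        self.reservations.append(r)
        return r

# A department books 32 servers for two weeks from the portal menu.
pool = ComputePool(capacity=500)
pool.reserve("speech-research", servers=32, start=date(2009, 3, 1), days=14)
```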
That’s an internal thing today at IBM, but I’m getting so much interest from customers that I believe there’s potential for this to be made available in the marketplace – whether as a service or some kind of product, I don’t know. But there’s demand. You can see the customer interest in having that kind of capability, and I think we’re intent on using it much more widely ourselves throughout IBM as well.
How do you think the utility computing model will impact virtualisation deployments? As customers look to scale resources up or down, will they lean either way?
I think you’re going to see a mix. In a lot of data centres customers are going to continue to want to provide their own IT and deploy their own applications and fully leverage some of the new virtualisation technologies. At the same time, virtualisation makes it possible to go beyond grid computing and do cloud computing where you essentially outsource software or resources as a service that someone else is providing to you.
Where’s the sweet spot for that kind of cloud model? How should organisations decide whether to virtualise the workload internally or route it through the cloud?
It varies dramatically between businesses. I recently sat next to a doctor who runs a small business, and she was lamenting the difficulty she has with her three servers. There’s a class of business for whom cloud computing would be a dream come true, because they would not have to deal with that whole complexity but would get it over the web dependably and reliably, and pay as they grow and the business expands.
One of the impediments, though, is getting all the applications delivered that way. It’s one thing to have a spreadsheet but another to have an advanced medical application that comes from a sole vendor. It’ll take time for the applications to all be delivered in that fashion, but it seems that more and more applications will become services over the web, and I think that will be a good thing.