Salesforce.com has passed up traditional virtualisation software for custom technology that puts up to 10,000 customers on what is essentially a single virtual machine.

The company has a total of five production data centres in the US, Japan and Singapore to support more than 100,000 customers and their 400,000 applications.
According to Salesforce’s head of infrastructure Steve Fisher, the facilities share a common architecture built on commodity servers, storage and switches, combined with the company’s own custom software.
“We use Dell app servers, Dell servers for our databases – all x86 stuff. We use F5 load balancers, Cisco SAN switches, storage from EMC and Hitachi,” Fisher told iTnews.
“It’s your typical [hardware] stuff. It’s really our software that’s highly differentiated.”
Virtualisation products from the likes of VMware and Microsoft typically improve IT efficiency by allowing organisations to pool physical server resources and present them as multiple isolated environments.
Fisher said Salesforce achieved even greater efficiency by treating all customer data as part of a single application server and database, while presenting customers with the “illusion” of their own environments.
“When you’re a customer, you think you’ve created your own database table, but … that never actually shows up as a table in our database. It is all done through metadata,” Fisher said.
“The only thing that’s customer-specific anywhere in our infrastructure is just records in a database.”
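The pattern Fisher describes is often called a metadata-driven schema: a tenant’s “tables” exist only as metadata rows, while the values themselves land in shared, generic tables. The following minimal SQLite sketch illustrates the idea; the table and column names (meta_objects, meta_fields, data) are invented for illustration and are not Salesforce’s actual schema.

```python
# Minimal sketch of metadata-driven multi-tenancy, loosely modelled on
# Fisher's description. Schema and names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Metadata: what each tenant *thinks* are its tables and columns.
    CREATE TABLE meta_objects (org_id TEXT, obj_id INTEGER PRIMARY KEY,
                               obj_name TEXT);
    CREATE TABLE meta_fields  (obj_id INTEGER, field_num INTEGER,
                               field_name TEXT);
    -- One shared, generic data table for every tenant. Custom field
    -- values land in numbered slots rather than real columns.
    CREATE TABLE data (org_id TEXT, obj_id INTEGER, row_id INTEGER,
                       value0 TEXT, value1 TEXT);
""")

def create_custom_object(org_id, obj_name, field_names):
    """A tenant 'creates a table' -- but only metadata rows are written."""
    cur = conn.execute(
        "INSERT INTO meta_objects (org_id, obj_name) VALUES (?, ?)",
        (org_id, obj_name))
    obj_id = cur.lastrowid
    for i, name in enumerate(field_names):
        conn.execute("INSERT INTO meta_fields VALUES (?, ?, ?)",
                     (obj_id, i, name))
    return obj_id

# Two tenants define their "own" tables; the physical schema never changes.
invoices = create_custom_object("acme", "Invoice", ["number", "amount"])
tickets = create_custom_object("globex", "Ticket", ["subject", "status"])
conn.execute("INSERT INTO data VALUES ('acme', ?, 1, 'INV-001', '99.00')",
             (invoices,))
conn.execute("INSERT INTO data VALUES ('globex', ?, 1, 'Printer down', 'open')",
             (tickets,))

# Everything tenant-specific is just records in shared tables.
print(conn.execute("SELECT * FROM data").fetchall())
```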
“What virtualisation is great for is taking the traditional, single-tenant architecture and making it more efficient,” he said.
“We’ve basically built our platform for multi-tenancy from the ground up. From the [Salesforce] operations team’s point of view, it’s really just one application.”
Fisher said Salesforce had “well under 10,000” servers across all its production data centres worldwide – “orders of magnitude” fewer than if it had used traditional virtualisation.
The platform allowed Salesforce to aggregate and provide for the peak demand of tens of thousands of applications, rather than having to treat the peak demand of each application separately.
“The efficiencies are kind of unbelievable. We’re running 400,000 applications, I don’t know how many millions of users ... [and] we do that with just a few thousand servers,” he said.
“Even in a fully virtualised model, every application would [require] a couple of app servers for redundancy, a couple of database servers, and then we would have to replicate that.
“So we’re talking about eight virtual machines times 400,000. That is not going to run on a few thousand servers.”
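Fisher’s back-of-the-envelope arithmetic is easy to spell out. The consolidation ratio of 100 VMs per physical host used below is an assumed, deliberately generous figure, not one quoted in the interview.

```python
# Fisher's comparison: 2 app servers + 2 database servers, replicated,
# gives the "eight virtual machines" per application he cites.
apps = 400_000
vms_per_app = 8
vms_needed = apps * vms_per_app
print(f"{vms_needed:,} VMs under a virtualised single-tenant model")  # 3,200,000

# Even at an assumed (and optimistic) 100 VMs per physical host, that is
# 32,000 servers -- versus the "well under 10,000" Salesforce runs.
print(f"{vms_needed // 100:,} physical hosts at 100 VMs per host")
```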
Limiting risk
Infrastructure sharing may have economic benefits, but security experts have warned that it could expose organisations to the risk of collateral damage from cyber attacks.
A single platform could also mean that any disruptions – whether due to hardware, software, or personnel – are felt service-wide.
Salesforce has attempted to limit the effect of any disruptions by splitting its capacity into “pods” of 5000 to 10,000 customers.
The company has 15 pods in North America, two in the Asia Pacific region and three in Europe, the Middle East and Africa.
“A pod is a unit of multi-tenant capacity,” Fisher explained. “Any one customer lives on an individual pod and that pod is replicated to another data centre.
“We’re big believers in multi-tenancy … but it is nice that if something catastrophic should go wrong, it would only affect customers on that one pod.”
Fisher pointed to a storage failure in the company’s North American ‘NA2’ pod that disrupted services for seven hours in late June.
“We had an incident on NA2 which was a failure in our storage subsystem and it did exactly what you would expect. There was a disruption on NA2 but it didn’t disrupt anybody else,” he said.
“Clearly, that was not good for customers on NA2 but it was certainly good that we had that architecture as opposed to having datacentre-wide outages or service-wide outages.”
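The isolation property Fisher describes can be pictured as a simple routing layer: each customer organisation maps to exactly one pod, so a pod-level failure is contained to that pod’s customers. The sketch below is illustrative only; the Pod structure, customer names and routing logic are assumptions, with pod names borrowed from the article.

```python
# Illustrative sketch of pod-based isolation: each customer lives on
# exactly one pod, so a pod failure is contained to its own customer set.
from dataclasses import dataclass, field

@dataclass
class Pod:
    name: str
    primary_dc: str      # data centre serving live traffic
    replica_dc: str      # data centre holding the pod's replica
    customers: set = field(default_factory=set)
    healthy: bool = True

pods = {
    "NA1": Pod("NA1", "us-west", "us-east"),
    "NA2": Pod("NA2", "us-east", "us-west"),
}
pods["NA1"].customers.add("acme")
pods["NA2"].customers.add("globex")

def handle_request(org_id: str) -> str:
    # Route the request to the one pod that hosts this customer.
    pod = next(p for p in pods.values() if org_id in p.customers)
    if not pod.healthy:
        return f"{org_id}: service disruption (pod {pod.name} down)"
    return f"{org_id}: served from {pod.name} in {pod.primary_dc}"

pods["NA2"].healthy = False      # e.g. the June storage failure
print(handle_request("acme"))    # unaffected: lives on NA1
print(handle_request("globex"))  # disrupted: lives on NA2
```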