
You have a right to be. In just 12 months Cloud Computing has gone from being a graphical representation of the Internet on vendor architecture diagrams to the next generation of computing.
Amazon, Google, Microsoft, IBM – some of the industry’s biggest players are already involved, and many more are circling with ideas and strategies that are high on vision, but scant on detail.
What you should be asking yourself is this: by taking on these vendor marketing messages and promoting a Cloud Computing model, what else are you really advocating down the track?
These are what could be called ‘side effects’ of the cloud.
Potential issues include SLAs, contractual terms and conditions, quality-of-service, management, security, usage accounting and jurisdictional complications with the physical location of cloud-based data.
Your success in the cloud space comes down to how you negotiate these potential issues and help give your business the computing flexibility users want, at a level of risk you can live with.
Mission-critical clouds
Let’s consider the minefield that is risk assessment in an external cloud environment.
With the cloud still in its infancy, there seems to be a trend among IT managers to push only non-mission-critical workloads out to services such as Amazon EC2 and Microsoft Azure.
It’s a situation that has been likened to the take-up of virtualisation in the enterprise, according to IBM’s business development manager for System X in A/NZ, Peter Hedges.
“Roll back four years and substitute the word virtualisation for cloud,” said Hedges.
“Customers were saying we’ll test [virtualisation] but we won’t put anything mission-critical on it. Now we’re at a point where they don’t do anything that isn’t virtualised.”
HP Australia’s marketing manager for enterprise servers and storage, Angus Jones, somewhat concurred: “If we look at virtualisation technology, it’s been around for 30 years, but it’s only been in the last few years that we’ve seen serious movement to it in the market.
“Even today, penetration of virtualisation for mission-critical applications is small. We’re still waiting for organisations to be comfortable with the technology,” said Jones.
Dell provides a similar though slightly more optimistic assessment.
“Four or five years ago no one put mission-critical applications on VMware,” said Justin Boyd, enterprise marketing manager A/NZ at Dell.
“Then everyone became comfortable and started deploying more to it. The same thing will happen with the cloud.”
You call that service?
Commentators appear unsure of the extent to which mission-critical workloads will be transferred into an external cloud environment.
Most believe that mission-critical workloads will move into the cloud in some form, but the rate at which they do is largely tied to the emergence of service-level agreements (SLAs) on such services.
“Putting mission-critical applications into the cloud comes down to the SLAs,” said Simon Elisha, CTO of Hitachi Data Systems.
“If something goes down, who do you call? For example, if there’s a Gmail outage, all you can do is sit and watch the forums. Until cloud service SLAs become enterprise-grade, I don’t think significant applications will move across.”
IDC Australia associate director, Linus Lai, agrees. “We’ve run many surveys and we know large enterprises aren’t about to migrate mission-critical applications onto the cloud.
“They’re likely to develop new or non-mission-critical applications on the cloud that require very little integration with existing back office systems.”
Lai cautioned that current SLAs for cloud services could be inadequate for mission-critical enterprise users because any refund from the provider stipulated under a liability clause is unlikely to cover the full cost of the downtime.
“If you pay $1000 for capacity then the liability the operator is exposed to is also $1000,” said Lai.
“That isn’t going to be anywhere near the opportunity cost of being unable to run your business applications.”
Nick Abrahams, a partner at law firm Deacons, urges both vendors and their customers to carefully examine the terms and conditions of a Cloud Computing contract to understand each party’s liability exposure.
“We’ve always wrestled with the concept of vendors limiting their liability [exposure], but when you’re delivering a critical part of the business through a vendor you will want a complete rethink of your liability profile,” said Abrahams.
“In a standard license, you’ve always got a liability cap but it’s OK because you’re using the application internally and you’ve largely got control over uptime. If it doesn’t work it’s a drag but it doesn’t pull the whole business down.
“If you rely on the cloud to provide mission-critical services to the business, you will want to reconsider the issue of liability,” he said.
Where’s my data?
Even though data is notionally located in the cloud, the physical location of the underlying server infrastructure needs to be known, because different jurisdictions have varying privacy and data management rules to which the information may become subject.
For example, jurisdiction might be critical if your chosen cloud provider suddenly goes out of business, your relationship with them sours, or you become subject to some form of e-discovery process, all of which require you to be able to get your information back out, and fast.
“The terms and conditions (T&Cs) will have a governing law clause in them, such that if you use a foreign [cloud] provider your data will more than likely be governed by the laws in that place,” explained Abrahams.
“It means enterprises could find themselves subject to laws that they aren’t familiar with.”
According to Abrahams, there are also restrictions under Australian law on the export of some personal information to other countries.
Tim Smith, marketing manager A/NZ at Hitachi Data Systems, claims there are some ‘83 pieces of legislation around data retention and document destruction’ locally that could also come into play.
“Users need to be vigilant to ensure that personal information is being protected in clouds outside of Australia,” said Abrahams.
Abrahams urged prospective cloud customers to think through all potential ramifications of data accessibility before signing on to a service.
“What happens if the provider were to go bust? Will the data be in a format that makes it accessible and readable?” Abrahams asked.
“You’re putting a lot of trust in the provider, and while you can try to protect access to data contractually, short of backing it up in escrow or having a bespoke agreement, you will just have to trust the provider to make the data available to you again.”
He also warned customers to prepare for the possibility of a falling out with the cloud operator.
“If the relationship sours they have a massive negotiating position on you because they could effectively stymie access to your proprietary data,” said Abrahams.
Abrahams advised potential cloud operators to pay close attention to the drafting of their standard terms and conditions for use, and potential customers to read the T&Cs carefully and negotiate a bespoke agreement in place of accepting standard terms where possible.
“For the most part, standard T&Cs are used so the customer doesn’t have a great deal of say in what goes into the contract,” explained Abrahams.
He added: “It’s important to note that none of these issues render the cloud business model ineffective. They just need to be considered as part of the risk analysis of having or using some form of hosted solution.”
Shades of grey
One of the other key side effects of today’s environment is the complete lack of standards and reference architectures available to prospective cloud operators.
It surprised even iTnews how closely some hardware vendors guard the reference architectures that could be used as the basis for a cloud service.
For example, HDS has them – but only under non-disclosure agreement. IBM’s Business Consulting Services division and HP Australia are similar – our attempts to gain access to that knowledge fell on deaf ears.
What could be perceived as a trend towards proprietary cloud knowledge might be the impetus needed to drive development of more open reference architectures and industry-standard models for Cloud Computing.
“I think there needs to be some open standards,” said Sean Casey, enterprise business development manager for Intel A/NZ.
“Otherwise there is a danger of fragmented mini clouds forming and everyone doing their own thing.”
HP’s Angus Jones concurred. “We’re relying on the industry to come up with a standard for Cloud Computing going forward,” he said.
There are a number of models that could vie for industry standard status. In July, Intel, HP and Yahoo! created an open source cloud computing testbed initiative for large-scale, global research projects.
Separately, Intel and Oracle are working together to ‘identify and drive standards to enable flexible deployment across private and public clouds’.
The latter initiative is perhaps a recognition that there may be a number of different types of ‘cloud’ in the not-too-distant future – not just the external EC2-type offerings that exist today.
“We see corporates and large organisations having their own clouds within their own environments,” said Dell’s Justin Boyd.
Other hybrid models that skirt the line between public and private are also possible.
“You will see variations to the theme – it could be a private cloud, on-premise or external type service or a mix of all three,” said IDC’s Linus Lai. “There will be grey areas – it’s not going to be all black and white.”
Cloud Computing is a big topic. The flexibility it introduces could take virtualisation to the next level, and being aware of the potential side effects will be the key to making it work for your business.
Above all, while the cloud is in its formative stages, remember: always read the warnings.
And if you can negotiate the T&Cs, don’t take cloud services only as the vendor prescribes them.
An extended version of this article appeared in the 24th November 2008 issue of CRN magazine.