Ten reasons to think twice about virtual desktops


Gartner expert tears apart the business case.


The business case for deploying virtual desktops was comprehensively ripped apart by analysts at the Gartner Symposium last week.

In one of the most in-depth presentations at the event, Gartner analyst Mark Margevicius cited a wealth of research and case studies to reveal the technical and business headaches involved with migrating to the latest flavour in thin client computing.

Gartner’s most optimistic estimate was that 10-12 percent of enterprise PC users would be on virtual desktops by 2015. And these, Margevicius said, would primarily be for call centre staff, sales staff, remote app developers and teleworkers.

“The economics simply don’t make sense for most enterprise cases,” he concluded. “The traditional PC will remain the primary tool on the enterprise desktop.”

iTnews has summarised Margevicius’ concerns below.

10. Server headaches

The worst assumption a desktop IT manager can make, Margevicius said, is that the industry-standard x86 servers used for applications such as web serving will deliver the same performance as a dedicated fat-client computer on every desktop.

“Traditional servers are not designed for high density workloads,” he said. “Web applications are very different to how a Windows desktop behaves.”

Margevicius said IT shops first need to work out whether their server infrastructure can support a virtual desktop deployment “based on number of users per core” before making a business case.

He said most deployments can handle no more than 7 to 9 users per core – which adds up to a big investment in servers.

Further, each user would still require between 1GB and 2GB of memory to match PC performance.

On most servers in data centres today, “you are looking at maxing the server out of memory,” he said.

“The limitations of today's servers make [virtual desktops] cost prohibitive,” he said. “Memory is a big deal. Processors are a big deal.”
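Margevicius’ figures make for a simple back-of-envelope calculation. The sketch below uses his 7-9 users per core and 1-2GB per user ranges; the per-server core and memory specs are illustrative assumptions, not figures from the presentation. It shows how memory, rather than processor cores, tends to be the binding constraint:

```python
import math

def servers_needed(users, cores_per_server=8, ram_gb_per_server=64,
                   users_per_core=8, ram_gb_per_user=2):
    """Back-of-envelope server count using Gartner's 7-9 users/core
    and 1-2GB/user ranges. Server specs are assumed, not sourced."""
    by_cpu = math.ceil(users / (cores_per_server * users_per_core))
    by_ram = math.ceil(users * ram_gb_per_user / ram_gb_per_server)
    # Whichever resource runs out first dictates the server count.
    return max(by_cpu, by_ram)

# For 1,000 desktops: cores alone would allow 64 users per server,
# but 64GB of RAM caps it at 32 -- memory binds, as Margevicius warns.
print(servers_needed(1000))  # 32
```

On these assumed specs, CPU sizing calls for 16 servers but memory sizing calls for 32 – which is why Margevicius singles out memory utilisation as the big deal.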

Margevicius acknowledged that vendors such as Cisco and HP are releasing blade server configurations designed with those precise requirements in mind, so that organisations can “scale out” their infrastructure.

“There is a significant benefit in unified architectures, predominantly because of memory utilisation,” he said. “But just having the server technology isn't enough.”

9. Storage headaches

Administrators would also be wise to check if their storage kit can handle a move to virtual desktops.

Margevicius said administrators are naturally unsure of how much storage to allocate to a virtual desktop deployment.

“Storage is the biggest wildcard,” he said. “If you are going to mimic the environment of a physical PC that comes with 250GB of storage, are you really going to allocate 250GB of Tier 1 or Tier 2 storage to every virtual desktop?”

Most customers opt for 20-30GB of storage per desktop (if it’s a persistent image), he said.

Storage performance is another concern.

“The real bottleneck is the IOPS [input/output operations per second] going across the disk,” Margevicius said.

“On desktop workloads, typically the read/write ratio is biased more towards writes [compared to servers]. On most server workloads, the write is about 15 percent and read 85 percent.”

The Windows operating system, he noted, is a “very chatty OS”.

“When you consolidate Windows workloads, too many users will absolutely affect the performance,” he said.

Margevicius recommended that IT managers investigate the use of de-duplication to address storage volume limitations and advanced caching techniques to address read/write IOPS limitations.
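Those capacity and IOPS figures can be roughed out as follows. Only the 20-30GB-per-desktop range and the write-heavy desktop profile come from the presentation; the per-user IOPS figure and the write ratio used here are illustrative assumptions:

```python
def storage_demand(users, gb_per_desktop=25, iops_per_user=15,
                   write_ratio=0.6):
    """Rough capacity and IOPS for a persistent-image deployment.
    25GB/desktop is the midpoint of the article's 20-30GB range;
    15 IOPS/user and a 60% write bias are assumed for illustration."""
    total_gb = users * gb_per_desktop
    total_iops = users * iops_per_user
    writes = total_iops * write_ratio  # desktop workloads skew to writes
    return total_gb, total_iops, writes

gb, iops, writes = storage_demand(1000)
print(f"{gb}GB, {iops} IOPS, {writes:.0f} writes/s")
# 25000GB, 15000 IOPS, 9000 writes/s
```

The write-heavy profile is the point: a storage array tuned for the 85-percent-read pattern of server workloads will hit its write ceiling well before its capacity ceiling, which is where the caching techniques Margevicius recommends come in.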

8. Network headaches

Less often, Margevicius said, virtual desktop deployments also require investment in new networking kit.

He said that all the major flavours of virtual desktop technology have very efficient protocols for communicating across the network (Microsoft’s RDP, Citrix’s ICA, VMware’s PCoIP and so on). Some can run typical sessions at only 65-85Kbps, a legacy of thin client predecessors designed to link branch offices over 56Kbps connections.

“Most network administrators love [thin client], predominantly because bandwidth is limited and therefore more predictable,” Margevicius said.

Nonetheless, Gartner has been asked to advise on several deployments in which end users took these efficiencies for granted.

“Latency is a big problem for hosted desktops,” he said. “The question is how long it takes for the characters a user has typed to hit the servers. If the latency gets in excess of 150 milliseconds, users start to complain.”
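Those bandwidth and latency figures translate into a simple link-sizing check. The session count and latency value below are illustrative; the 65-85Kbps per-session range and the 150ms complaint threshold are the article’s:

```python
def wan_estimate(sessions, kbps_per_session=85, latency_ms=120):
    """Aggregate link demand at the article's 65-85Kbps per session
    (worst case used by default) plus the 150ms latency check.
    The session count and measured latency are assumed inputs."""
    total_mbps = sessions * kbps_per_session / 1000
    latency_ok = latency_ms <= 150  # users complain above ~150ms
    return total_mbps, latency_ok

mbps, latency_ok = wan_estimate(200)
print(f"{mbps}Mbps, latency_ok={latency_ok}")  # 17.0Mbps, latency_ok=True
```

Bandwidth rarely breaks the case – 200 sessions fit comfortably in a 20Mbps link even at the top of the range – which is why latency, not throughput, is the figure Margevicius says to watch.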

Read on for seven more reasons to think twice about virtual desktops...

Copyright © iTnews.com.au . All rights reserved.
