With the growth of virtualization and hypervisors, there has been a move to virtualize physical servers to make better use of hardware and provide a higher level of availability. With physical servers configured in a cluster using shared storage, XenApp servers running in VMs can be live migrated between physical hosts, keeping the application servers up while a physical server is shut down for maintenance or replacement. In the event of a physical server failure, the virtualized servers can be restarted on another host in the cluster. For server applications like Exchange, virtualizing on clustered physical servers is an excellent approach to high availability. But how does this apply to something like Citrix XenApp?

Virtualizing XenApp servers, following Citrix best practices and design guides on the number of VMs per server, the number of vCPUs per VM, and the memory assigned to each VM, provides performance similar to installing on bare-metal physical servers. With a clustered environment, the high availability of live migration is now available for the XenApp servers. However, XenApp servers can also be configured in a farm: a logical group of servers managed as a single entity, with load balancing handled within the farm. If a XenApp server fails, the remaining XenApp servers pick up the load. If XenApp is handling the load balancing within the farm, what is gained by running the VMs in a clustered environment? High availability is the primary answer: the ability to migrate or restart XenApp VMs on other hosts, provided an N+1 approach has been used in deploying the physical servers. However, if a XenApp farm is correctly configured, the failure of a XenApp server simply moves the load to the remaining XenApp servers. Again, an N+1 approach needs to be applied when sizing the XenApp farm. An N+1 approach means deploying one more server than is needed to carry the workload, so the load can be spread in the event of a server failure. If five servers are required to support the workload, then six servers are deployed, allowing for the failure of any one server.
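To make that N+1 arithmetic concrete, here is a minimal sketch of the sizing calculation. The user counts are illustrative assumptions for this example, not Citrix-published sizing guidance; plug in your own tested per-server numbers.

```python
# Minimal sketch of N+1 capacity planning for a XenApp farm.
# total_users and users_per_server are illustrative assumptions.
import math

def servers_required(total_users: int, users_per_server: int, spares: int = 1) -> int:
    """Servers to deploy so the farm still carries the full load
    after losing `spares` servers."""
    needed = math.ceil(total_users / users_per_server)
    return needed + spares

# Example: 1,000 concurrent users at roughly 200 users per server
# needs 5 servers for the load, so an N+1 farm deploys 6.
print(servers_required(total_users=1000, users_per_server=200))  # -> 6
```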

If the farm is configured to tolerate failures, then why cluster the physical hosts running the VMs? Local storage on the physical server can be used to hold the differential files required for provisioning the virtualized XenApp VMs. This eliminates the requirement for shared storage, which can be very expensive, and removes the need to cluster the servers, reducing management complexity. Which leads back to the original question: why virtualize XenApp?

How different is running multiple XenApp VMs on a physical server using local storage from running XenApp as a bare-metal installation? When configuring XenApp, one of the concerns is having enough IOPS to handle the XenApp workload. A rack-mount server has multiple drives available; in a blade environment, that was not the case. A blade normally has two drives, and early SSDs did not have enough capacity to support a full XenApp deployment. Therefore, virtualized XenApp, shared storage, and clustering became the recommendation. However, with improvements in SSDs and new flash-based storage cards, a blade can now be configured with enough local storage to support the XenApp VMs, and even bare-metal XenApp installations. The SSDs and flash-based storage cards provide adequate capacity and more than enough IOPS. A mirrored pair of SSDs will easily provide enough space and IOPS for a bare-metal installation of XenApp, but may not provide adequate space for a virtualized deployment. In either case, the two deployments are very similar: provisioning with MCS or PVS can deliver the OS in either scenario, and a single master image can be maintained in either deployment. Bare-metal may also have an advantage in licensing, depending on the choice of hypervisor and Microsoft licensing.
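As a rough back-of-the-envelope check of that capacity point, the sketch below compares a bare-metal layout against a virtualized layout on a mirrored SSD pair. All capacities and per-VM footprints are assumptions for illustration; a real design should use measured image, write-cache, and page-file sizes.

```python
# Rough fit check for local storage on a blade with a mirrored SSD pair.
# All figures are illustrative assumptions, not measured values.

def fits_on_mirror(ssd_capacity_gb: float, consumers_gb: list[float], headroom: float = 0.2) -> bool:
    """A mirrored pair exposes the capacity of a single SSD; keep
    `headroom` (fraction) free for write cache and growth."""
    usable = ssd_capacity_gb * (1 - headroom)
    return sum(consumers_gb) <= usable

# Bare-metal XenApp: one OS image plus page file and application data.
print(fits_on_mirror(480, [120]))             # True on a 480 GB pair

# Virtualized XenApp: hypervisor plus several differential disks and
# write caches, e.g. 4 VMs at ~100 GB each.
print(fits_on_mirror(480, [30] + [100] * 4))  # False on a 480 GB pair
```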

There are advantages and disadvantages to either method of deployment, and no one approach works in every scenario. With bare-metal deployments, server maintenance can be more difficult to schedule, while migrating VMs simplifies the problem; and with current hypervisors, live migration can occur without requiring shared storage. Another consideration is the workload. If the XenApp workload is not enough to fully utilize a physical server, then virtualizing the XenApp servers makes sense to get better use of the physical hardware. There is also a difference in the number of users supported on a XenApp server: for a VM it is 50+ users, for bare metal it is 200+. People often point out that the loss of a physical server affects many more users than a single VM, but if the failure is at the physical level, there is no difference in how many users are affected.
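The arithmetic behind that last point is simple; the sketch below spells it out. The per-server user counts come from the rough 50+ and 200+ figures above and the VM density is an assumed example, not a benchmark.

```python
# Users affected by a failure, using the rough figures from the text.
users_per_vm = 50          # assumed ~50+ users per XenApp VM
vms_per_host = 4           # assumed VM density per physical host
users_per_bare_metal = 200 # assumed ~200+ users per bare-metal server

# A single VM failing affects only its own sessions...
print("VM failure:", users_per_vm)                                  # 50
# ...but a physical host failure takes all its VMs with it, which is
# in the same ballpark as losing a bare-metal server.
print("Host failure (virtualized):", users_per_vm * vms_per_host)   # 200
print("Host failure (bare metal):", users_per_bare_metal)           # 200
```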

As hardware and software change, the computer industry seems to be cyclical. When I first started in the industry, everyone connected to a mainframe or minicomputer (yeah, I know, been at this too long). Then computing migrated out to desktops, and now software is moving it back to the data center with remote desktops, VDI, and even the re-introduction by some hardware companies of dedicated “servers” for desktops in the data center. Have we come full circle, back to cases where bare-metal deployment of applications like XenApp is the better answer again, from TCO to management of the environment? There is no one absolute answer. It really comes down to your choice. I have no preference for one over the other; I can see scenarios for all the methods of deploying XenApp I have mentioned, but that is for another day, another blog. Evaluate and do what works best for your environment.

The views expressed here are my own and do not necessarily reflect the views of Citrix.

Kirk Manzer

Sr. Architect

Citrix Solutions Lab