In general, when we discuss a PVS implementation during a XenDesktop or XenApp design, the focus is always on the technical aspects: should the PVS servers be virtual or physical, one server or two, which NIC type, and so on. For a successful implementation, however, additional questions need to be asked and answered, because a PVS implementation is not only a technical challenge. In large enterprise environments it becomes crucial to also explain the process changes, e.g. when will PVS provision servers (XenApp) or desktops (XenDesktop)?

The introduction of PVS as an OS streaming technology has an impact on all aspects of the datacenter. This article does not propose a specific solution; rather, it is a set of recommendations based on my personal field experience, with suggestions on what to consider for a design and implementation to be successful.

Future articles may include more in-depth information on each of the aspects mentioned below.

Let’s take a look at the different implementation aspects:

1. Networking – The network design is a fundamental part of any PVS implementation. PVS leverages PXE and DHCP to work properly and to stream the server (XenApp) or desktop (XenDesktop) OS. (Yes, an ISO boot image could be used instead of PXE, but it requires additional resources.) It is a good idea to verify that the networking department approves of DHCP broadcasts on the datacenter server networks, e.g. are IP helper addresses allowed on the current networking devices? This is not a major issue from a technical point of view, but in terms of processes and procedures it can be difficult to request exceptions when they are needed, so it should be taken into account in the network design. Clarifying the necessary configuration with the networking team up front enables a robust PVS implementation.
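To make concrete what the network must deliver to a booting target device, here is a minimal sketch of the DHCP options involved. PXE relies on option 66 (TFTP/boot server name) and option 67 (bootfile name, commonly the PVS bootstrap ardbp32.bin); the parser follows the standard code/length/value layout of RFC 2132, and the sample server address is made up.

```python
# Minimal parser for a DHCP options buffer (RFC 2132 TLV layout).
# PXE boot depends on option 66 (boot server) and option 67 (bootfile).

def parse_dhcp_options(buf: bytes) -> dict[int, bytes]:
    """Walk code/length/value triplets until the end marker (255)."""
    options = {}
    i = 0
    while i < len(buf):
        code = buf[i]
        if code == 255:        # end option: stop parsing
            break
        if code == 0:          # pad option has no length byte
            i += 1
            continue
        length = buf[i + 1]
        options[code] = buf[i + 2:i + 2 + length]
        i += 2 + length
    return options

# Hypothetical options blob: option 66, option 67, end marker.
sample = (
    bytes([66, 9]) + b"10.0.0.10" +        # assumed TFTP server address
    bytes([67, 11]) + b"ardbp32.bin" +     # PVS bootstrap file name
    bytes([255])
)

opts = parse_dhcp_options(sample)
print(opts[66].decode())  # → 10.0.0.10
print(opts[67].decode())  # → ardbp32.bin
```

In a routed datacenter network these broadcasts never leave the target's VLAN on their own, which is exactly why the IP helper (DHCP relay) configuration mentioned above has to be agreed with the networking team.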


2. Storage – Storage is another key element in the datacenter. Even though the flexibility of PVS leaves us free to decide whether the write cache lives on NAS/SAN or local disk, the I/O pattern it generates should be discussed with the storage team. Some enterprises define the rules for their datastores, in terms of size and number of spindles, based on "normal" disk IOPS activity. A PVS write cache, however, is a heavily write-oriented workload that differs from that profile, so it may be necessary to define new rules, in collaboration with the storage team, for datastores dedicated to write cache usage.
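As a back-of-the-envelope illustration of why the write-heavy profile matters, the sketch below folds the RAID write penalty into a spindle estimate. All the workload numbers (per-VM IOPS, write ratio, per-spindle IOPS) are assumptions for the example, not PVS sizing guidance; measure your own environment before sizing.

```python
import math

def backend_iops(frontend_iops: float, write_ratio: float,
                 raid_write_penalty: int) -> float:
    """Reads cost one backend I/O; each random write costs `raid_write_penalty`."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio
    return reads + writes * raid_write_penalty

# Illustrative assumptions, not measurements:
desktops = 500
iops_per_desktop = 10    # assumed steady-state frontend IOPS per VM
write_ratio = 0.9        # PVS write cache traffic is mostly writes
raid_penalty = 4         # RAID 5: 4 backend I/Os per random write
spindle_iops = 180       # assumed 15k SAS drive

total = backend_iops(desktops * iops_per_desktop, write_ratio, raid_penalty)
print(math.ceil(total / spindle_iops), "spindles")  # → 103 spindles
```

With the same 5,000 frontend IOPS, a read-dominated workload would need far fewer spindles, which is why a datastore rule written for "normal" activity tends to undersize a write cache datastore.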


3. Antivirus – Working with OS streaming implies that a read-only image is deployed to multiple servers. The image is reset to its initial state on each reboot, and therefore does not easily stay infected. If the OS is clean when the image is generated, real-time protection is enough to protect the servers, and I/O storms on the storage (potentially generated by massive scheduled disk scans) can be avoided. Most modern antivirus products take this kind of deployment scenario into consideration and provide tools that generate hash tables during image generation; they can then determine whether files have been changed by an infection without scanning the entire file system. These points should be discussed with the customer's antivirus team, i.e. it is a good idea to agree on a strategy for implementing and maintaining a secure environment without making any radical changes.
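The hash-table idea can be sketched in a few lines: hash every file once at image-generation time, then later compare hashes to spot modified files without re-reading unchanged content through a full scan. This is a toy illustration over an in-memory file set with invented paths, not any vendor's actual mechanism.

```python
import hashlib

def build_hash_table(files: dict[str, bytes]) -> dict[str, str]:
    """Hash every file once, e.g. when the golden image is generated."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def changed_files(baseline: dict[str, str],
                  files: dict[str, bytes]) -> list[str]:
    """Report files whose content no longer matches the baseline hash."""
    return [path for path, data in files.items()
            if baseline.get(path) != hashlib.sha256(data).hexdigest()]

# Toy "image" contents (hypothetical paths and data):
image = {"windows/system32/kernel32.dll": b"clean bytes",
         "app/run.exe": b"version 1"}
baseline = build_hash_table(image)

image["app/run.exe"] = b"version 1 + payload"   # simulate a modification
print(changed_files(baseline, image))            # → ['app/run.exe']
```

Only the mismatching files would then need a content scan, which is what keeps the storage-side I/O storm away.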


4. Software deployment – In most enterprise companies, software deployment is handled by electronic software distribution (ESD) systems. With PVS, the approach to software deployment changes radically: maintaining a software version, or installing it en masse on a set of servers, simply means maintaining the version of the OS image. Hence ESD only makes sense for maintaining the OS image; there is no reason to install the application on each streamed server individually. Again, it is important to change procedures in order to adapt to the new scenario.


5. CMDB – A configuration management database (CMDB) is the repository that keeps track of systems and their configurations. A key success factor in implementing a CMDB is the ability to automatically discover information about the configuration items (CIs) and track changes as they happen. The point is that in a PVS environment all VMs are identical, or at least derive from a well-defined set of images. Therefore, the usual approach of installing a discovery agent inside each VM would not produce the expected results.

Hence it becomes necessary to find other ways of getting the data; it may be available elsewhere, such as in the PVS database or in the hypervisor's management database.
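One hedged way to picture the alternative: derive the CI records from the provisioning side's device-to-vDisk mapping rather than from per-VM agents. The data layout below is invented sample data for illustration; in practice it would be read from the PVS database or the hypervisor's management database.

```python
# Build CMDB-style CI groupings from a device-to-vDisk mapping instead of
# per-VM agent reports. The records below are invented sample data.

devices = [
    {"name": "XA-001", "vdisk": "xenapp-base", "version": 12},
    {"name": "XA-002", "vdisk": "xenapp-base", "version": 12},
    {"name": "XD-101", "vdisk": "win7-desktop", "version": 5},
]

def build_cis(devices: list[dict]) -> dict[tuple, list[str]]:
    """One CI per shared image version, listing the VMs that stream it."""
    cis = {}
    for d in devices:
        key = (d["vdisk"], d["version"])
        cis.setdefault(key, []).append(d["name"])
    return cis

for (vdisk, version), members in build_cis(devices).items():
    print(f"{vdisk} v{version}: {', '.join(members)}")
```

Modeled this way, a change to one image version is a single CMDB change record covering every VM that streams it, which matches how changes actually happen in a PVS environment.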


6. QA procedure – Most enterprises have a QA process in place, which hands the system over from the project/request phase to the operations department before it is pushed into production. In a PVS environment the VMs are based on the same image, or set of images, so is it really necessary for each VM to go through the QA process? Probably the base image, and each subsequent version of it, should go through QA instead. Moreover, some software components, such as the antivirus agent, backup agents, etc., may be missing or installed with different configurations (as discussed earlier). Any changes should be discussed with the QA team and agreed with the operations department, which will have to support the production environment.


7. Security – Security is another important aspect of the PVS discussion, especially where customers want a multitenant environment based on a single farm implementation. Here the problem is more technical than procedural; even so, explaining the different techniques consultants can implement to create secure environments may help speed up the project approval process.