It has been almost six years since I came to the conclusion that system virtualization is a mandatory technology for addressing key challenges with the traditional computing model. Beyond that, virtualization facilitates the shift toward modern, safer computing, opens the door wide to new types of end-user experiences and simplifies the work of IT administrators.

The computing architecture envisioned for Personal Computers (PCs) thirty years ago assumed that all components of the computing experience would live together, sharing the same hardware resources: storage, processor, memory, mouse, keyboard, display, network, printer and so on. Back then, hardware cost a fortune, and there was not even support for color graphics, audio or video output. Sharing software resources within the boundaries of the same hardware therefore made sense from an economic standpoint. The services provided by an operating system were very limited as well, and the demand for creating new applications for those PCs was very low.

The PC industry grew rapidly over the years, especially in the past ten years, with great advancements in hardware, operating systems and application capabilities supporting CD-quality audio, high-definition video, biometric and geolocation sensors, and more. Furthermore, we're witnessing an explosion in the types of mobile and ultra-mobile devices people use on a daily basis. A quick visit to an Apple or a Microsoft store will show you what I mean. Nonetheless, one thing has remained the same and is almost forgotten: the original architecture stayed in place, governing our computing evolution in most cases, for both client and server devices. The main exception is smartphones, where government regulations mandate that the phone's modem software be isolated from the rest of the user environment. As expected, the notion of sharing resources also extended from hardware to software itself; within a web browser, for example, you can load as many plug-ins as you wish. In the next part of this blog series, I will discuss how this sharing of hardware and software resources creates security challenges for IT.

About XenClient:

Join the conversation by connecting with the Citrix XenClient team online!

About the author:

Ahmed Sallam drives technology and product strategy, working with ecosystem partners, for Citrix XenClient and the emerging client-device virtualization market. Prior to Citrix, he was CTO and chief architect of advanced technology at McAfee, now part of Intel Corp. He was co-inventor and architect of DeepSAFE, co-developed with Intel Labs, and co-designer of VMware’s VMM CPU security technology known as VMsafe. Prior to McAfee, Ahmed was a senior architect with Nokia’s security division and a principal engineer at Symantec. He holds 17 issued patents and has more than 40 pending patent applications. He earned a bachelor’s degree in computer science and automatic control from the University of Alexandria.

Follow Ahmed on Twitter: https://twitter.com/ahmedsallam

Check out Ahmed’s public profile: www.linkedin.com/in/ahmedsallam