Last week I had the luxury of giving a keynote at LinuxCon/CloudOpen in San Diego. I got so much great feedback after the talk that I decided to post it as a blog. (Slides are shared via @slideshare through the hyperlink.)

———

At Citrix we’ve obviously been thinking a lot about the cloud. Not just the cloud today, but the cloud for tomorrow and well into the future.

When looking to the future of technology, it’s helpful to take a look at existing technologies and see if patterns emerge – and if lessons from today’s ecosystem can be applied to tomorrow and beyond.

To that end, we think that there’s no better model for open cloud to study than Linux and its ecosystem. Not just the kernel, of course, though the kernel holds many fine lessons for any student of open development and community practices. But also, the larger ecosystem of open source and vendors (notably distributions) that have formed around the kernel.

Here’s a few of the things we’ve learned from the last 20-plus years of Linux.

The Cloud is Not Highlander

Great movie, but “there can be only one” is a bad representation of the cloud market. One thing we’ve learned from Linux vendors is that there’s plenty of room in the market for open solutions: from community distributions like Debian and Fedora to Red Hat, SUSE, and Ubuntu, and from PostgreSQL and MySQL to Apache HTTPD and Nginx. Multiple solutions can and do co-exist, and even cooperate and compete simultaneously. There’s no reason the cloud need be any different.

Be a User

I once had the luxury of joining a small dinner where I happened to sit next to Evan Williams, the technology founder behind Twitter. As he has often been quoted as saying, to be successful, developers need to be passionate users of the technology as well as understand their user community. In my experience, successful projects focus not only on the developer but also on the customer.

Linux solved for lower-cost Unix, optimization of x86 hardware, and faster “edge” servers – Linus didn’t just build it because it was “cool,” and Red Hat grew its business on features derived directly from customer needs. At Citrix, we take a fair amount of grief for the complexity of our options and features, but every single piece of code is tied back to a customer. Equally, the Apache community encourages those customers not only to take part in the community but to lead and influence it – a powerful driver of a project’s success.

Manual Software Management Doesn’t Scale

Vendors and admins have had to learn the hard way that manual software management – whether it’s admins compiling from source or vendors supplying software that needs to be tended separately from the system’s package management – does not scale. Running proprietary installers or configuring packages from source does not allow easy updates or quick deployments to multiple machines.

Likewise, at cloud scale, best practices demand that admins embrace configuration management tools like Puppet or Chef to get the most out of their environments. Configuring templates and virtual machines by hand locks developers into slow, error-prone processes that do not scale to environments spanning two or more hypervisors and thousands of guests.
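To make “declarative and idempotent” concrete, here is a minimal sketch – written in plain Python purely for illustration, not an actual Puppet or Chef manifest – of the pattern those tools implement: declare the state you want, then converge every host to it, skipping work that is already done. The host names and package lists below are hypothetical.

```python
#!/usr/bin/env python3
"""Toy illustration of declarative, idempotent configuration.

This is not Puppet or Chef -- just a sketch of the idea those tools
implement. The hosts and desired-state table are made up for the example,
and it assumes SSH access and Debian-style packaging on the targets.
"""
import subprocess

# Desired state: which packages should be present on which hosts.
DESIRED_STATE = {
    "web-01.example.com": ["nginx", "ntp"],
    "web-02.example.com": ["nginx", "ntp"],
    "db-01.example.com":  ["postgresql", "ntp"],
}

def is_installed(host: str, package: str) -> bool:
    """Check over SSH whether the package is already installed (dpkg -s)."""
    result = subprocess.run(
        ["ssh", host, "dpkg", "-s", package],
        capture_output=True,
    )
    return result.returncode == 0

def ensure_installed(host: str, package: str) -> None:
    """Converge one host/package pair: install only if missing (idempotent)."""
    if is_installed(host, package):
        print(f"{host}: {package} already present, nothing to do")
        return
    subprocess.run(
        ["ssh", host, "sudo", "apt-get", "install", "-y", package],
        check=True,
    )
    print(f"{host}: installed {package}")

if __name__ == "__main__":
    # The same declaration scales to 3 hosts or 3,000 -- the loop doesn't care.
    for host, packages in DESIRED_STATE.items():
        for package in packages:
            ensure_installed(host, package)
```

Real tools add dependency ordering, reporting, and a catalog of resource types, but the core idea – desired state plus an idempotent converge step – is what lets the same declaration cover ten guests or ten thousand.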

Early Technology Favorites Are Often Thrown Out

Being first to market is no guarantee of long-term success, or even survival. Technology favorites can be abandoned with amazing swiftness when better technology emerges or political problems make a project untenable.

Consider the Linux distributions that have been popular over the years. Softlanding Linux System (SLS) was the first Linux distro, but it was buggy and updated infrequently, and was quickly supplanted by Slackware. Slackware, while still developed and used today, has been displaced from “mainstream” use by Red Hat, SUSE, Debian, Ubuntu and others.

At each turn, the distribution that met the needs of users best was the one that succeeded – not the distribution that had first-mover status. Lesson? You can’t assume that being an early favorite is going to ensure long-term success, or even survival.

Only Individuals Have Standing in the Community

It doesn’t matter if you work for Red Hat, IBM, Citrix, SUSE, or the Picayune Hosting Company – a developer’s standing in the community is based on their contributions and reputation alone. You can’t simply walk into the Linux kernel mailing list and expect to have a patch accepted because you work for Company A.

With well-run projects, like the Linux kernel or Debian or PostgreSQL, it is individual developers that drive the projects. Governance is designed to ensure that the health of the project comes before the interests of any given company.

Which, incidentally, is one of the chief reasons that we chose Apache.

Apache provides a well-understood and well-tested governance model, well-understood licensing, and an umbrella that gives individual and corporate contributors the confidence that they will be on equal footing when participating in CloudStack development. Citrix employees have to earn their way in the community just as much as any other contributor – which is exactly how projects should be governed.

Do Your Work in the Open

Another lesson that we’ve observed from the last 21 years of Linux development is that work needs to be done in the open. When companies or individuals hold back their changes – either for competitive purposes or to get things “just right” before submitting to public scrutiny – the community is poorer for it. Oftentimes it means there’s technical debt to be paid when merging the code back into mainline projects. We’ve observed this time and time again, most recently with all the headaches the Linux kernel folks experienced with the various ARM trees.

We believe that open source means more than dropping code at random intervals – the work needs to be done in the open as well, so that we can benefit from the contributions of the entire community rather than those behind the corporate firewall.

Be Boring, But Useful

Once upon a time, Linux was “exciting” in the sense that the kernel and distributions were constantly adding big new features that helped Linux become competitive with proprietary Unix and/or Microsoft Windows in the enterprise and consumer market.

While Linux still adds features at an amazing pace, sometime in the mid-2000s Linux became boring.

And that was great. It meant that Linux was mainstream, quietly doing its work in the background without too much hassle. Linux conquered the data center. It conquered the TOP500 list of supercomputers. Linux has become the core of the most-used smartphone operating system in the world.

We aspire to have an open cloud that is just as boring – and necessary – as Linux.

Good Enough Wins, but Plan for the Future

You’ve heard the saying, “the perfect is the enemy of the good,” and that especially applies to technology.

We watched as projects like GNU/Hurd floundered and never quite delivered a viable operating system, while the Linux community continually shipped and got code into the hands of users and organizations that needed a robust product now.

The Linux community’s approach has allowed less elegant solutions – like ipchains, various schedulers, the original SysV init system, and more – to be phased out and replaced with better and more robust technology.

At the same time, we’ve learned that you have to avoid saddling yourself with so much technical debt that it’s impossible to iterate and improve.

Rome (and the Linux Kernel) Wasn’t Built in a Day

Many of the companies, projects and individuals in the market are in a race to the finish, staking claims to the market and celebrating victory early. It is important to see where we are in this market. 100 clouds, 1,000 clouds, 10,000 clouds? We are just scratching the surface of where this technology is headed. Only a small percentage of the total market has even started to understand the technology, the market and what it means for them. In Linux chronology, we might be no further along than 1994, when the first commercial Linux distributions were getting started. We have a long way to go in our journey to the cloud.

In 20 Years

So what does this tell us about the future of the cloud and where we’ll be in 2032? We know that there’s a lot we don’t know. 20 years ago, we didn’t imagine that Linus’ baby would be all grown up and powering huge swaths of the Internet. We didn’t expect smartphones with more computing power than NASA’s entire moon program.

It’s unlikely that Linus imagined his hobby project powering millions of DVRs and streaming systems, or that Linux would give life to Google, Facebook, and Netflix, or power the majority of the TOP500 supercomputers. It’s hard to imagine what systems will look like in ten years, much less twenty.

Today, the open cloud runs on commodity x86 systems on top of open source operating systems and hypervisors. Tomorrow? We might see a lot more ARM in the data center, as is being developed by companies like Calxeda. The cloud may be used to manage high-density ARM machines with each core dedicated to its own bare-metal host, rather than stacking multiple guests on multi-core x86-64 systems.

We do know, though, that the future is open, and that the path to tomorrow lies with the open cloud, not proprietary systems. Our customers and community have spoken loudly in favor of systems that they can not only manage easily but also study and contribute to. Customers have learned over and over again that closed systems make for poor infrastructure.

The future of the cloud in 20 years? It’s totally open.