Today key players in the industry took a major step toward embracing an open cloud architecture. The Open Networking Foundation (ONF) has been established with the backing of many of the Great and Good in cloud computing: Broadcom, Brocade, Ciena, Cisco, Citrix, Dell, Deutsche Telekom, Ericsson, Facebook, Force10, Google, HP, IBM, Juniper Networks, Marvell, Microsoft, NEC, Netgear, NTT, Riverbed Technology, Verizon, VMware, and Yahoo!

The ONF stands for the opening up of network infrastructure to permit software-based control and management plane functions to be implemented independently of the underlying switching/routing infrastructure. It will develop open standards for programming the network “fabric” by software entities that are not necessarily delivered by the same vendor as the packet-switching data plane elements. In an ONF-compliant network, the forwarding fabric resources can be manipulated through a standards-based API/protocol named OpenFlow, the protocol supported by the Open vSwitch (OVS) now shipping as a standard feature in Citrix XenServer. OVS has already seen extensive use within large clouds, and the ONF will catalyze broader adoption of what has become an infrastructural imperative in most clouds. Certainly it is not reasonable to expect every vendor to adopt OVS, or for there to be a single implementation of the standard; the ONF will give the ecosystem forming around OVS a formal structure for the evolution of the protocols that permit the programming of network infrastructures.
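To make the idea concrete, here is a minimal, purely illustrative Python sketch of the model OpenFlow embodies: a control plane, possibly from a different vendor and running on a different box, installs match-and-action entries into a switch’s flow table, and the data plane does nothing but consult that table. The class and field names below are simplified stand-ins of my own, not the real OpenFlow messages or the Open vSwitch API.

```python
# Illustrative sketch only: a toy model of the OpenFlow idea, in which an
# external controller installs match -> action rules into a switch's flow
# table. Names and fields are simplified stand-ins, not the real OpenFlow
# wire protocol or the Open vSwitch API.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict       # e.g. {"in_port": 1, "dl_dst": "aa:bb:cc:dd:ee:ff"}
    actions: list     # e.g. ["output:2"]
    priority: int = 0

class ToySwitch:
    """Data plane: forwards packets purely by consulting its flow table."""
    def __init__(self):
        self.flow_table: list[FlowEntry] = []

    def install(self, entry: FlowEntry) -> None:
        # In a real deployment this would arrive as an OpenFlow flow-mod
        # message from the controller, not a local method call.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: e.priority, reverse=True)

    def forward(self, packet: dict) -> list:
        for entry in self.flow_table:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["send_to_controller"]   # table miss: ask the control plane

# Control plane: software that decides how traffic is handled, independent
# of the hardware that actually moves the packets.
switch = ToySwitch()
switch.install(FlowEntry(match={"in_port": 1}, actions=["output:2"], priority=10))
print(switch.forward({"in_port": 1, "dl_dst": "aa:bb:cc:dd:ee:ff"}))  # ['output:2']
print(switch.forward({"in_port": 3}))                                 # ['send_to_controller']
```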

I’m sure the press will be buzzing with accounts of why this is an important technology, and I’ve covered some of it before, so I’ll be brief: in large infrastructure clouds (which might be supporting SaaS-, PaaS-, or IaaS-type services) it is incredibly important to be able to define and build control plane architectures that match the problem domain of the specific service. For example, in an IaaS cloud supporting a hybrid enterprise/service-provider implementation of tenants’ infrastructures, it is crucial to be able to seamlessly extend the enterprise data center to the cloud without requiring the customer to re-IP or re-MAC their applications. On the cloud side, the traffic needs to be injected into an isolated subset of infrastructure resources that match each tenant’s secure, resource-guaranteed virtual private overlay on the service provider’s physical resources. The logical view of the tenant’s overlay might bear no resemblance to the physical infrastructure underneath: multiple VMs from different tenants might need to share the same server, with resource isolation between them, yet they might all happen to have the same IP address. The key requirement is that the network control plane be matched to the specific problem at hand (it’s quite different for Google, but I’ll let them explain their use of OpenFlow).

Finally, it is also important to realize that in large clouds the trend toward using standardized compute nodes (richly provisioned with 10Gb/s NICs) as reconfigurable compute-or-switching functional components makes the infrastructure more flexible and scalable. The last-hop switch is on the server, and that last-hop switch also needs to be programmed by the network’s control plane, to ensure seamless end-to-end communication with SLAs and security.
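To illustrate the overlay point, here is a small, purely hypothetical Python sketch of how a server-resident virtual switch might keep tenants with overlapping IP addresses isolated: every lookup is scoped by a tenant identifier and a per-tenant tunnel key. The names and scheme are my own simplification, not a description of OVS or any shipping product.

```python
# Illustrative sketch only: one way a server-resident "last hop" virtual
# switch might keep tenants' overlapping address spaces apart. The
# tenant/tunnel-key scheme is an assumption for illustration, not a
# description of any real product.

class VirtualSwitch:
    """Last-hop switch on the server: maps (tenant, IP) to a local vNIC."""
    def __init__(self):
        self.ports = {}          # (tenant_id, ip) -> vNIC / port name
        self.tunnel_keys = {}    # tenant_id -> overlay segment id

    def attach_vm(self, tenant_id: str, ip: str, vnic: str, tunnel_key: int) -> None:
        self.ports[(tenant_id, ip)] = vnic
        self.tunnel_keys[tenant_id] = tunnel_key

    def deliver(self, tenant_id: str, dst_ip: str) -> str:
        # Lookups are always scoped by tenant, so identical IPs never collide.
        return self.ports.get((tenant_id, dst_ip), "drop")

vswitch = VirtualSwitch()
vswitch.attach_vm("tenant-a", "10.0.0.5", "vif1.0", tunnel_key=100)
vswitch.attach_vm("tenant-b", "10.0.0.5", "vif2.0", tunnel_key=200)  # same IP, different tenant
print(vswitch.deliver("tenant-a", "10.0.0.5"))  # vif1.0
print(vswitch.deliver("tenant-b", "10.0.0.5"))  # vif2.0
```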

It is vital that in such a critical area of technology all of the key players be represented. Clearly the networking vendors have a huge stake in this, but so do the virtual infrastructure / IaaS stack enablers such as Citrix, Microsoft, and VMware. In line with our commitment to an open cloud architecture, it is absolutely critical to have an open set of protocols that let the cloud program the network, and that let network-centric cloud functions (virtual appliances, perhaps, or cloud network controllers) program the virtual switching layer within each server. It’s especially heartening to see VMware join the ONF, given that it already has proprietary interfaces in some of these areas.

Industry pundits shouldn’t be surprised to see two of the leading lights in network research, Nick McKeown of Stanford and Scott Shenker of UC Berkeley, behind the organization. Through their work, both have transformed our understanding of IP networking. I’ve been lucky enough to work with these chaps in the past, and privileged to be included in the group of individuals who have helped to get this incredibly powerful technology out of research and into real products. But the ONF means much more to me: twelve years ago I started a company called CPlane that aspired to deliver some of the powerful capabilities that the OpenFlow team has finally made real. Virtualization hadn’t really taken off, and the big .com ramp was just beginning. Three key ingredients were missing (a big lesson for me):

  • The enabling technology was not there (Virtualization and Moore’s Law are driving the separation of control from forwarding, and enabling the average server to deliver 30Gb/s of networking throughput),
  • The customer base wasn’t there (maturity of .com and the massive infrastructures of Google, AWS, RackSpace and every other successful aaS have changed that), and
  • The protocol we picked (GSMP) was not supported by enough major industry players, unlike OpenFlow today (and hence ONF has won enormous support from its charter members).

It’s good to live and learn, and fantastic to live long enough to see how things can be done right. Oh, and CPlane lives too (I hold no interest in it). The incredible team I was lucky enough to work with at CPlane is now mostly at Juniper, Cisco, and Citrix.