If software is eating the world, then NFV will be eating networking
Tue 29 Apr 2014

Customisation used to mean expensive bespoke systems, but in a world of software-defined and software-implemented networks things have changed, particularly for the hyperscale service providers. Chris Swan asks whether their innovations will come through to ordinary users, and whether they will be worth the wait.
It’s almost three years since Marc Andreessen wrote Why Software Is Eating The World, and since then the evidence that he was right has mounted. Software has been making its impact felt in the networking world for a while now, with lots of noise about software-defined networking (SDN).
SDN is all about configuring a network through an application programming interface (API), but networks can also be implemented in software. We don’t call them ‘software implemented networks’ because the acronym isn’t marketing friendly. The term that’s caught on instead is network functions virtualisation (NFV), where network functions run as software on commodity hardware (either directly on the hardware or within virtual machines).
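To make the SDN half concrete, here’s a minimal sketch in Python of configuring a network through an API rather than a box-by-box console session. The controller URL and the JSON rule format are invented for illustration; real controllers such as OpenDaylight or Floodlight each define their own REST schemas.

    # A sketch of the SDN idea: the network is configured through an API
    # call. The endpoint and rule format below are hypothetical.
    import requests

    CONTROLLER = "http://sdn-controller.example.com:8080"  # hypothetical

    flow_rule = {
        "switch": "00:00:00:00:00:00:00:01",  # datapath ID of the switch
        "match": {"dst_ip": "10.0.0.2"},      # traffic for this host...
        "action": "output:3",                 # ...should leave via port 3
    }

    # One HTTP call reprograms the switch's forwarding behaviour.
    resp = requests.post(CONTROLLER + "/flows", json=flow_rule, timeout=5)
    resp.raise_for_status()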
SDN and NFV aren’t mutually exclusive, and in many cases there’s a big overlap. It’s certainly possible to have SDN without NFV, with OpenFlow switches being an obvious example. It’s also possible to have NFV without SDN, but it makes little sense – why would anybody want to implement network functionality in software and then not also be able to configure and manage it through software?
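As a toy sketch of the two together (all names here are invented, not any vendor’s API), the network function below is a stateless firewall implemented as plain software that could run on any commodity box, and precisely because it is software its configuration is just another call, the sort of thing a real system would put behind a REST or gRPC endpoint.

    # NFV plus SDN in miniature: the network function is a process, and
    # reconfiguring it is a method call rather than a console session.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        dst_port: int

    class SoftwareFirewall:
        """NFV: the 'middlebox' is a process making forwarding decisions."""

        def __init__(self):
            self.allowed_ports = {80, 443}

        def process(self, pkt: Packet) -> bool:
            """Return True to forward the packet, False to drop it."""
            return pkt.dst_port in self.allowed_ports

        def allow_port(self, port: int) -> None:
            """SDN: reconfiguration is an API call, not a CLI login."""
            self.allowed_ports.add(port)

    fw = SoftwareFirewall()
    print(fw.process(Packet("10.0.0.1", "10.0.0.2", 22)))  # False: dropped
    fw.allow_port(22)                                      # reconfigure in software
    print(fw.process(Packet("10.0.0.1", "10.0.0.2", 22)))  # True: forwarded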
The large public cloud providers have been using SDN and NFV for some time, but it’s only now that some light is being shed on how those networks are assembled. Google recently went public with its Andromeda networking stack, which brings together aspects of both SDN and NFV. It’s a great exemplar of contemporary network design, and shows what can be done with an architecture that works across both hardware and software, with everything customised to work together. ‘Customised’ may be a dirty word in IT, where it has become synonymous with expensive, single-purpose bespoke systems; but that doesn’t apply here because, like other hyperscale operators, Google uses customisation to drive capital, energy and management costs out of its enormous infrastructure.
In the past the innovations of hyperscale service providers didn’t flow down so easily to regular customers. Google talked about its novel server designs long after first using them, and Amazon is still pretty secretive about most of what happens behind its APIs. But elsewhere things have changed in recent years. Facebook began by open sourcing its server designs through the Open Compute Project (OCP), and those have now been joined by networking equipment in the form of the OCP switch. Just as innovation in Formula One racing finds its way into the ordinary cars we drive, innovation in the public cloud is finding its way into enterprise networks. Of course, as public cloud prices drop, the question of whether it’s worth the wait for that transfer to happen becomes ever more pressing.