NFV's path to Open Source and white boxes

via Flickr © opensourceway (CC BY-SA 2.0)

  • Part two of a progress report on NFV (see: NFV nearly five years on)
  • Why NFV hasn’t stalled; it’s just taking its time to get it right

About a year or so into the progress of the ETSI NFV ISG, an off-camera comment from one of our interviewees highlighted one of the attitudinal roadblocks the ‘movement’ was facing. A senior executive at one of the telcos had told him that open source software would come into his network “over his dead body.”

An unusually strong response, but it did indicate an underlying unease felt by many. Both open source software and ‘white box’ servers were still pretty ‘left of field’ for many network technical professionals, despite the fact that their colleagues on the IT side of the house were well versed in Linux (say) and had become increasingly comfortable with so-called COTS (commercial off-the-shelf) platforms standing in for specialised appliances for OSS/BSS functions, for instance.

One issue was the understanding that black boxes performing specialised tasks have traditionally come with feet that could be held to the fire should the system not work or develop faults. Moving to commodity hardware and open source software didn’t seem to provide that immediate vendor accountability, and in telecoms networks reliability is (nearly) everything.

So it’s taken some time, but now, in 2017, it’s pretty widely accepted that open source software and white boxes will play a pivotal role in NFV transformation.

That realisation has mostly come about because the alternative approaches just didn’t work well enough - they didn’t meet the targets, especially around agility, that the original white paper had set out.

So instead of those ‘monolithic’ chunks of proprietary software that the original NFV white paper promised could be virtualized and re-used (see Part 1: NFV nearly five years on), it’s now broadly accepted that the promise of NFV will probably only be realised through microservices.

The ETSI participants tried hard to use and test the monoliths (that’s what the collaboration process is all about), but overall they just didn’t work well enough as virtual network functions (VNFs), especially when it came to interworking with VNFs from different vendors.

Instead it’s fairly clear that the best overall approach is to decompose any network function’s software into clearly defined segments (or microservices) which can then be flexibly strung together to form the finished services in a so-called ‘cloud native’ fashion.

The important thing is to have each microservice carefully defined in terms of the function it performs - that way it can be upgraded independently of the whole, without extensive integration and testing work.
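
To make that idea concrete, here is a minimal sketch in Python of what such a decomposition might look like. All the names here (PacketFilter-style functions, ServiceChain and so on) are invented purely for illustration - they don’t come from any ETSI NFV specification. The point is simply that each microservice honours the same narrow contract, so any one of them can be swapped or upgraded without re-integrating the rest of the chain.

```python
# A minimal, illustrative sketch of 'cloud native' service chaining.
# All names here are hypothetical - they do not come from any ETSI
# NFV specification.

from typing import Callable, Dict, List

# Each microservice is a function with one narrow, well-defined
# contract: it takes a packet (here, a dict) and returns a packet.
Microservice = Callable[[Dict], Dict]

def packet_filter(packet: Dict) -> Dict:
    """Drop traffic from blocked sources (returns an empty dict to drop)."""
    blocked = {"10.0.0.66"}
    return {} if packet.get("src") in blocked else packet

def nat_translator(packet: Dict) -> Dict:
    """Rewrite the source address, as a NAT function might."""
    if packet:
        packet["src"] = "203.0.113.1"
    return packet

class ServiceChain:
    """Strings microservices together to form a finished service."""

    def __init__(self, stages: List[Microservice]):
        self.stages = stages

    def process(self, packet: Dict) -> Dict:
        for stage in self.stages:
            packet = stage(packet)
            if not packet:  # a stage dropped the packet
                break
        return packet

# Because every stage honours the same contract, upgrading one of them
# (say, swapping nat_translator for a v2 implementation) needs no
# re-integration of the rest of the chain.
chain = ServiceChain([packet_filter, nat_translator])
print(chain.process({"src": "192.168.1.5", "dst": "198.51.100.7"}))
```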

In terms of the original white paper there’s no doubt that there’s been a significant course correction as the ETSI process has advanced. But this should not be chalked up as a failure - the whole purpose of having an open collaborative process is to find the best path (or set of paths) through trial and error. Both the trials and the errors are valuable learning steps.

So where are we now?

We are now quite well advanced, with some of the lead players setting themselves ambitious virtualisation targets. Famously, AT&T is targeting 2020 as the date by which it expects to virtualize and software-control 75 per cent of its network - and it expects to hit 55 per cent by the end of this year. And it could be that one of the industry upheavals the NFV transformation might trigger is already apparent in the AT&T strategy.

Never mind vendor lock-in. Part of AT&T’s ambition appears to be to get itself solidly back into the business of specifying and building its own networks and setting standards and frameworks for other telcos (who may want to interconnect and become affiliates of various kinds). That’s the way AT&T used to operate in the US before the Bell breakup in 1984.  

Its ECOMP initiative is an NFV framework designed to do all the good NFV things: rapidly on-board new services (created by AT&T or third parties), provide a framework for real-time, policy-driven software automation of network management functions, and so on. (This blog sets out the ECOMP story.)
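
To give a flavour of what ‘policy-driven software automation’ means in practice, here is a deliberately simplified, hypothetical sketch - the policy fields and function names are invented for illustration and are not ECOMP’s actual interfaces. The operator declares intent as policy; a control loop reads live metrics and acts on them without a human in the loop.

```python
# A hypothetical sketch of a policy-driven automation loop.
# Policy fields and function names are invented for illustration;
# they are not ECOMP's (or any framework's) real interfaces.

# Declarative scaling policies: "if this metric crosses this threshold,
# take this action" - the operator states intent, software acts on it.
policies = [
    {"metric": "cpu_load", "threshold": 0.80, "action": "scale_out"},
    {"metric": "cpu_load", "threshold": 0.20, "action": "scale_in"},
]

def scale_out(vnf: dict) -> None:
    vnf["instances"] += 1

def scale_in(vnf: dict) -> None:
    vnf["instances"] = max(1, vnf["instances"] - 1)

ACTIONS = {"scale_out": scale_out, "scale_in": scale_in}

def evaluate(vnf: dict, metrics: dict) -> None:
    """Apply the first matching policy to a VNF based on live metrics."""
    for policy in policies:
        value = metrics.get(policy["metric"], 0.0)
        if policy["action"] == "scale_out" and value > policy["threshold"]:
            ACTIONS[policy["action"]](vnf)
            break
        if policy["action"] == "scale_in" and value < policy["threshold"]:
            ACTIONS[policy["action"]](vnf)
            break

vnf = {"name": "virtual-firewall", "instances": 2}
evaluate(vnf, {"cpu_load": 0.91})  # load spike -> scale out
print(vnf)                         # {'name': 'virtual-firewall', 'instances': 3}
```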

Most interestingly, AT&T is keen for ECOMP to be propagated to other telcos, and it’s even buying up software vendors. Most recently it completed its acquisition of the Vyatta network operating system and associated assets from Brocade, a deal which included the hiring of several dozen Brocade employees.

“Just as important as the Vyatta network operating system and other technology assets are the developers and other staff joining AT&T as part of the deal,” said Chris Rice, senior vice president, AT&T Labs. “They have valuable skills and experience that will complement and drive our ongoing network transformation. We’re excited to have them on the AT&T team.”

Are we witnessing a new species of vendor lock-in?
