Blog: Let’s drop ‘virtualization’ from NFV and move on

Here’s an idea. Why not get rid of ‘virtualization’ when we’re talking about next-generation telco networks? Not the technique, you understand, just the word.

Why? Because there are signs that its continued use is beginning to constrain our thinking. After all, the industry doesn’t want to engineer ‘virtual’ versions of what has already been committed to hardware. It wants new stuff, agile stuff. It wants a fresh start.

Four years ago, the original NFV white paper introduced the revolutionary and liberating concept of Network Functions Virtualization (NFV). The idea was to emulate the clear advantages the IT industry was already enjoying: it had virtualized applications in the data centre, then used open source software and commodity server hardware to build massive ‘web scale’ clouds capable of running the vast applications (Google, Facebook, AWS) which dominate today’s Internet.

It was not just the scale economics that attracted envious looks from telcos, but also the use of open source software to provide an agile, open environment in which software development and operations (DevOps) worked hand in hand in a never-ending cycle of innovation and service improvement.

If telecoms could have this sort of capability, it was thought, it could see off the challenge from the dreaded ‘OTT’ players eating the telcos’ lunch.

Four years of frenzy

The last four years have spun the traditional telecom infrastructure providers into a technical frenzy as they tried to prove that they could make telecoms virtualization work. Very often they took the code driving their ‘black boxes’ and made virtual network functions (VNFs) out of it to drive the ‘white box’ environment. After all, we were told, compatibility with the legacy environment would be important - telcos weren’t just going to throw everything out and start from scratch when they had already invested many billions.

But something has gone wrong on the journey. It turns out that onboarding, integrating and managing VNFs is, in practice, much harder and more time-consuming to engineer than first thought. Rather than just loading up with functions and going, telcos found that deployment took a long time and that they had to ‘dig in’ to the platform and tinker, despite having been told that a ‘proper’ cloud environment meant applications could be spun up and would ‘just work’.

It turns out that you can’t create an agile service and applications development environment by ‘virtualizing’ slabs of legacy code. That’s part of the problem, not part of the solution.  

So here’s the alternative

We think a new arrangement of words can be a good way of turning the page, and we may already be halfway to exiling ‘virtualization’ anyway. We’ve noticed at industry forums that the ungainly SDN/NFV nomenclature is already being informally shrunk to SDNFV in speech. Given that the acronym is now in the washing basket, let’s pop it into the washing machine - on the hottest setting - and shrink the official written form down to Software Defined Network Functions (SDNF).

Doesn’t that sound better? So ‘virtualization’, though still vital, becomes a technique, not a destination.

What changes?

Now that the underlying infrastructure arrangements have been agreed, the industry needs a new set of tools to meet the network’s commercial objectives. The network is transitioning from a technically driven, bottom-up evolution, deploying techniques such as virtualization and vSwitches, to an economically or business model-driven evolution, and that shift demands very different tools to make it work.

Those tools include cloud-native applications (applications which are not derived from legacy systems but built from scratch for the cloud) and the decomposition of those big software chunks into much smaller functions, making the applications independent and faster to deploy, as the sketch below illustrates.
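To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (MonolithicVnf, authentication_service and so on) is hypothetical, invented for the example rather than taken from any real telco API; the point is simply the shape of the change, not a working network function.

```python
# Before: one big chunk of ported legacy code. Everything deploys,
# scales and fails together.
class MonolithicVnf:
    def handle_call(self, subscriber):
        self.authenticate(subscriber)
        self.route(subscriber)
        self.bill(subscriber)

    def authenticate(self, subscriber): ...
    def route(self, subscriber): ...
    def bill(self, subscriber): ...

# After: each function is a small, stateless, independently deployable
# service. Each one can be updated, scaled or restarted on its own.
def authentication_service(subscriber):
    return {"subscriber": subscriber, "authenticated": True}

def routing_service(session):
    return {**session, "route": "edge-gw-1"}

def billing_service(session):
    return {**session, "billed": True}

# Composition now happens at the platform level (an API gateway or
# message bus, say) rather than inside one binary.
session = billing_service(routing_service(authentication_service("alice")))
print(session)
```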

Tapping the real benefits of open source

For the same reason, it’s dangerous to think, as some do, that we need a ‘special telco cloud’ tuned for exceptional telco requirements.

This is often advanced as a cure for the telco-grade deficit that some believe is lurking in the cloud. In fact, as far as resilience and availability are concerned, the cloud provides an excellent backstop, especially with disaggregated functions. Instead of trying to add extra 9s to the 99.9x systems reliability measure, the better course is to have redundant functions available on standby, ready to spin up somewhere in the cloud and shoulder the extra load should a system fail.
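Some back-of-the-envelope arithmetic shows why standby redundancy beats chasing extra 9s on a single box. This is a sketch that assumes instances fail independently, which real deployments only approximate:

```python
# Availability of n redundant instances, assuming independent failures:
# the service is down only when all n instances are down at once.
def combined_availability(a, n):
    return 1 - (1 - a) ** n

single = 0.999  # one 'three nines' instance

print(combined_availability(single, 1))  # ~0.999    (three nines)
print(combined_availability(single, 2))  # ~0.999999 (roughly six nines)
```

Two ordinary ‘three nines’ instances in a failover pair are, on this simple model, both down only 0.1% of 0.1% of the time - far better than heroically engineering one box towards five or six nines.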

And let’s not forget that circumstances always change. At the network’s edge, the error-free operation of millions of IoT devices may not always be that all-fired important. If there’s an error, there’s always retransmit.

So we don’t need a special telco cloud, just ‘the’ cloud - a collection of software, crowd-sourced for re-use by telecoms, IT, industry verticals, government and so on, especially as network and cloud intimately engage to provide next-generation services. Speed to market and agility are everything these days, and we need to move forward with the IT industry rather than be semi-detached from it.

Conclusion

The ‘cloud native’ approach - where functions are coded from scratch rather than ‘virtualized’ from existing code - is the preferable way forward.

And, perhaps as important, we should accept that there are two separate industries operating here - platform and infrastructure being one, and software and applications the other.

To keep them honest and separate, there’s a need for independent actors - perhaps with new or non-standard business models - to help choose and integrate things in a standard way without the risk of being accused of attempted ‘lock-in’.

This blog is the fruit of a discussion between Ian Scales, Managing Editor, TelecomTV and Tord Nilsson, Director of Global Marketing at Dell EMC.
