
CloudNFV announces members and its first integration partner

Telcos tend to want both reliability and innovation, but they fear unreliability more than they desire innovation. The result is a reliance on standards in the hope that, in the end, these will offer the best of both worlds: five-nines reliability plus the ability to innovate, use different vendors and technologies, and avoid lock-in. Easier written down than done.

Maybe this time around?

That very conundrum is currently being aired in the industry conversation over Network Functions Virtualisation (see our Main Agenda discussion, below). The advantages of NFV are well understood - lower capex and opex, greater agility, more new services (see - The intelligent person's guide to Network Functions Virtualisation) - but the thought quickly moves from the innovation potential back to the reliability issue: how do you ensure function reliability with NFV?

Remember, these are functions within the network which were previously black-boxed: specific software atop specific hardware, with a specific vendor responsible for the five-nines performance and dragged over the coals if it wasn't achieved.

The whole logic of NFV is to have the vendor surrender this vertical responsibility. Instead, a function must nestle as a virtual function atop commercial off-the-shelf hardware, sharing the platform with other functions. Those other functions, and the software platform on which they all operate, could each be the responsibility of a different vendor. And that's just with straightforward virtualisation.

The next step is to cloudify the functions to get the full NFV benefit - so it is no longer just disparate functions sharing a single server but (depending on the application) a virtual function striped across multiple servers, each of which is shared with other functions. This is complex stuff.
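To make that contrast concrete, here is a minimal, purely illustrative Python sketch of the difference between plain virtualisation (several functions co-resident on one server) and a cloudified function striped across several shared servers. Every name in it is hypothetical - it models the idea, not any CloudNFV software.

```python
# Illustrative only: plain virtualisation vs a "cloudified" virtual function.
# All names here are hypothetical and exist only to model the concept.

# Plain virtualisation: disparate functions sharing a single server.
single_server = ["firewall", "load_balancer", "dpi"]

# Cloudified NFV: one function ("dpi") is split into instances striped
# across several servers, each of which also hosts other functions.
servers = {
    "server_1": ["dpi_instance_0", "firewall"],
    "server_2": ["dpi_instance_1", "load_balancer"],
    "server_3": ["dpi_instance_2", "nat"],
}

def servers_hosting(function_prefix: str, placement: dict) -> list:
    """Return every server that hosts an instance of the given function."""
    return [server for server, functions in placement.items()
            if any(f.startswith(function_prefix) for f in functions)]

striped = servers_hosting("dpi_instance", servers)
# A failure of any one of these servers now affects several tenants at once,
# which is why reliability becomes a cross-vendor integration problem.
```

The point of the toy model is visible in `striped`: the function's health now depends on three shared machines rather than one dedicated box.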

The potential bed-blocker here is the integration of NFV applications into the cloud and with the orchestration software. The standards exist and are developing, but you need a test platform to prototype real NFV applications, expose the issues and test how all the elements will interact in pursuit of that critical reliability. That's where CloudNFV says it comes in.

We announced CloudNFV last month (see - NFV to get direction from the secret six).

It's a specialist vendor 'stealth' group formed to develop carrier NFV software implementations based on cloud computing tools and general-purpose hardware. It's not vying with the likes of ETSI over NFV standards; rather, it is supporting the standards-generation work with some real prototyping.

Last month we said we'd update when the members were announced, and now they have been.

The original six were and are: 6WIND, which specialises in data plane software; CIMI Corporation, which originated the CloudNFV concept; Dell, which will provide the data centre SDN switches, the Active Fabric Manager and an OpenStack Neutron plugin - plus the lab space used for testing and demonstration; EnterpriseWeb, which brings its "Active Virtualisation" model that binds services and resources and provides the optimisation and management framework; Overture Networks, which provides the orchestration logic; and Qosmos, which offers network monitoring and optimisation and whose "DPI-as-a-Service" is used to illustrate the NFV concept of Service Chaining.
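Service chaining, the concept Qosmos's role illustrates, simply means steering traffic through an ordered sequence of virtualised functions. Here is a minimal sketch of the idea in Python - the function names and packet fields are invented for illustration and are not drawn from any CloudNFV or Qosmos API.

```python
# Illustrative sketch of NFV service chaining: a packet traverses an ordered
# list of virtual network functions (VNFs). All names are hypothetical.

from typing import Callable, Dict, List

Packet = Dict[str, object]  # a packet modelled as a simple dict of fields

def make_chain(vnfs: List[Callable[[Packet], Packet]]) -> Callable[[Packet], Packet]:
    """Compose VNFs so that each packet passes through them in order."""
    def chain(packet: Packet) -> Packet:
        for vnf in vnfs:
            packet = vnf(packet)
        return packet
    return chain

# Two toy VNFs: a DPI classifier (roughly the role DPI-as-a-Service plays in
# the demonstration) and a firewall that acts on its classification.
def dpi_classifier(packet: Packet) -> Packet:
    packet["app"] = "video" if packet.get("port") == 1935 else "other"
    return packet

def firewall(packet: Packet) -> Packet:
    packet["allowed"] = packet["app"] != "blocked"
    return packet

service_chain = make_chain([dpi_classifier, firewall])
result = service_chain({"port": 1935})
```

The chain is just function composition; the hard part in real NFV - and the part CloudNFV exists to prototype - is doing this steering reliably across virtualised functions from different vendors.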

The seventh member, announced today, is Metaswitch, which becomes the first integration partner. It will provide support for Project Clearwater as part of CloudNFV's larger technology demonstration framework. Clearwater is an open source virtual IP Multimedia Subsystem (IMS) offering (see - Network innovation and transformation: by how much will SDN/NFV crush costs?).
