Telcos need to take ownership of open source or risk losing a golden opportunity
- AT&T showing how it should be done
- CSPs determined to be heard above the vendor pitches
- O-RAN the new darling of open source
- Still need to figure out what to do with the Edge
Of the 14 keynote sessions at Open Networking Summit (ONS) North America in San Jose, only two featured communications service providers. AT&T CTO Andre Fuetsch spoke about open source’s role in 5G, and China Mobile Chief Scientist Junlan Feng spoke about open source for network-based AI. This is by no means a criticism of organisers The Linux Foundation and its LF Networking group, but it is a reflection of how the broader telco community has yet to fully accept the strategic importance of open source. Yes, many CSPs are involved in various open source projects, and some are heavily invested and supportive, but as yet there has been a reluctance to step up and take more control over the direction and scope of these projects. Whether it is fear or ignorance that is holding them back, CSPs must do more. After all, the majority of these projects are specifically aimed at, or relevant for, telecoms networks – ONAP, OPNFV, Akraino, OpenDaylight, etc – with many others about to become essential, such as Kubernetes and the work of the CNCF. And there are many other open source foundations and groups focused on telecoms to consider.
The situation is showing signs of improving, and the news that a group of ten CSPs intend to create a common open source NFVi architecture is very positive (more on that tomorrow). We need more positive action like that, if the telecoms sector is to maximise the potential of the open source community and create a more sustainable economic infrastructure investment model. In no particular order, here are some of TelecomTV’s other key takeaways and observations from ONS:
The keynote that perhaps received the most attention last week came from the new Lean NFV group, a non-profit organisation led by Scott Shenker and Sylvia Ratnasamy of UC Berkeley – both also of the private start-up Nefeli Networks. Shenker had previously sold his SDN company Nicira to VMware for $1.3bn. Interestingly, the third and final Lean NFV supporter to present on stage was Constantine Polychronopoulos, Telco/NFV CTO at VMware…
Technical details of the new approach suggested are a little thin (according to NFV experts we have reached out to for their opinions), although there is a white paper here. Of the other seven endorsers of the document, none are currently employed by telcos. This list of “industry veterans”, as Shenker puts it, is missing a number of key early NFV creators – and some of those we reached out to had never even heard of Lean NFV. Yes, once again we have a vendor-led approach. We honestly can’t see this going down well with telcos.
Shenker claimed in his presentation that the industry has “made two fundamental errors” in its implementation of NFV – embedding management in the compute infrastructure and requiring that new features need associated changes in the APIs. His new approach, he claims, “is the path to increasing adoption and innovation”. TelecomTV has already covered the details, which you can read here. However, there are those that think this is just another example of rearranging the deckchairs on the Titanic, and that the real answer lies with containers, Kubernetes and a cloud-native approach.
How do telcos accelerate the process of getting verified VNFs up and working on their NFV infrastructures?
“Right now, we have really no single defined reference model,” explained Amy Wheelus, VP Network Cloud at AT&T. “We have several different models and architectures that are out there for VNF certifications. There's not a single place for VNFs to come get certified, and there's not a process in place to influence and create – from a service provider perspective – a lifecycle for NFV and VNF creation.”
The industry is currently trying to support around 40 flavours of NFVi, with this number certain to increase with time. The big question is how on earth do you test and verify the rapidly increasing number of VNFs against these? The matrix of tests required becomes an increasingly complex and time-consuming process. The solution, therefore, could be to reduce the number of NFVi models to single digits (three is being touted, but not yet confirmed).
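The scale of that test matrix is simple arithmetic: each VNF must be certified against each NFVi flavour, so the workload grows multiplicatively. A minimal sketch, where the VNF catalogue size is a hypothetical figure of our own (only the ~40 flavours and the single-digit target come from the ONS discussion):

```python
# Illustrative arithmetic only: the VNF count below is a hypothetical assumption.
nfvi_flavours_today = 40   # roughly 40 NFVi flavours the industry supports today
nfvi_flavours_target = 3   # the single-digit figure being touted (not confirmed)
vnf_catalogue = 200        # hypothetical number of VNFs awaiting certification

# Every VNF must be verified against every NFVi flavour it may run on.
tests_today = nfvi_flavours_today * nfvi_flavours_today and nfvi_flavours_today * vnf_catalogue
tests_today = nfvi_flavours_today * vnf_catalogue
tests_target = nfvi_flavours_target * vnf_catalogue

print(tests_today)   # certification combinations at ~40 flavours
print(tests_target)  # combinations if reduced to three reference models
```

Even with these made-up catalogue numbers, cutting the flavour count from 40 to three shrinks the certification burden by more than an order of magnitude, which is the whole economic argument behind a common reference model.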
Hence the task force, which is building on the work done recently between Orange, Vodafone and the GSMA. The loosely affiliated group comprises AT&T, Bell, China Mobile, Deutsche Telekom, Jio, Orange, SK Telecom, Telstra, Verizon and Vodafone. The group, which is open to other CSPs to join, wants to bring together the work going on in OPNFV and the new OPNFV Verification Platform (OVP) with the initial work via the GSMA – although there is considerable behind-the-scenes reluctance to having the GSMA play a central role. They want to create a new framework for the ecosystem that will lead to reduced time and costs not only for the service providers but for the vendors as well, and one which will be flexible enough to enable the addition of cloud-native NFs at a later date.
“We are all struggling with the same problem,” said Beth Cohen, SDN Product Strategist, Verizon. “We're all working with the same vendors and we're giving them different standards. So this saves a lot of time for us and it saves a whole lot of time for vendors.”
This is, by definition, a telco-led task force; there is no seat at the table for vendors. At least not yet.
“It’s kind of like, you want your parents to have their discussion privately before they go in to talk to the kids; there's gonna be some parent discussion here,” quipped Mark Cottrell, AVP at AT&T. “So I think we've got to get those ducks in a row, before we have that conversation with a broader community.”
Lots of buzz and excitement around the Open RAN work, with AT&T (of course) leading the discussions at ONS. Hank Kafka, VP Access Architecture and Standards at AT&T, set the scene by explaining the work of 3GPP (which is not that well understood by the open source coding community) and how, by design, it stops short of actually drilling down inside the hardware and components. However, this is going to be essential if CSPs are going to crack open the proprietary hold that the traditional (and very small number of) vendors have on the RAN, open the market to new ecosystem players and improve innovation whilst increasing performance and lowering costs. Given that 5G is going to require significant new densification of the RAN, these are very important considerations.
Controlling such networks is going to require (certainly in AT&T’s case) a top-level ONAP control loop, underpinned by a near-real-time RAN intelligent controller (RIC). AT&T and Nokia have already developed the initial seed code for the RIC and have handed it over to the Linux Foundation to be released as open source.
The use of open source is fascinating here. It allows the O-RAN supporters to create an overall open source reference architecture that allows future proprietary software solutions (if they are better, more efficient and more optimised for specific components) to be dropped in where required. The O-RAN Alliance will sponsor an O-RAN open source community project to develop all the necessary code, under the Linux Foundation Networking umbrella group.
During his keynote at ONS, Arpit Joshipura, General Manager, Networking, at the Linux Foundation, referred to the “myth” that “standards and consortiums and open source will always compete.”
“I hate to break it to the press, right, who love writing about it, but that myth is definitely debunked,” said Joshipura. Well, we’re not sure what he’s reading, but we do object to the collective assumption that we (the press) love writing about this. He’s right in that the two are no longer seen by most as competitive, but he’s wrong to dismiss talk of any remaining questions.
Many questions around the relationship between the open source community and the telecoms standards development organisations remain. Not just from us, but from many in the industry with whom we engage on a regular basis. These are no longer negative questions, but genuine questions about how best to tweak and refine current processes to enable more agile and relevant development of standards and implementations.
In fact, Joshipura introduced a “star-studded” keynote panel on this very subject on the final day of ONS to highlight collaboration areas. However, it would have benefited from stronger involvement from ETSI, 3GPP and even the GSMA (needless to say ITU was nowhere in sight).
This is where it gets complicated. There are now an estimated 32 or so open source projects that are focused to some extent on edge networking and edge compute. Overlap, therefore, is a given. Several sessions at ONS covered edge project work and some also took a more holistic approach. It remains frustratingly fragmented and confusing.
Closing the ONS event, Arpit Joshipura called for a unified edge, one that avoids the fragmentation of the different IoT, telecoms, enterprise and cloud approaches to the edge. Naturally, he is furthering the role of the LF Edge group here, which already houses Akraino and EdgeX Foundry, but which wants to pull in relevant work from O-RAN and others, such as ETSI MEC. But a unified approach would certainly be a positive step. If we still struggle to define the edge, then how are we able as an industry to successfully capitalise on the opportunities?
It doesn’t help matters that the open source community supports so many edge projects, many of which are looking to cover every aspect rather than focus on a specific problem. This leads to the sharp bell curve of elation and despondency, where an initial period of excitement attracts plenty of coders, only to then hit problems, which results in many deserting the project in favour of shiny new projects elsewhere (of which there are numerous options).
“Maybe the problem is that we're actually biting off more than we can chew with these projects,” said Ian Wells, Distinguished Engineer at Cisco. “We need to do things that we know we're going to need that are not very big, rather than things that solve the whole problem as we perceive it but are enormous. We probably don't know the problem until we actually give it a go for a couple of times, so building something small and seeing if we can point it in the right direction is actually kind of useful.”
What also may help is the umbrella approach adopted by The Linux Foundation, where individual projects track through a series of key funding and resource stages and are ultimately encouraged to converge at some point. That may well shake out some of the overlap.
“Over the last several years we’ve been trying to bring projects together, like Open-O and ECOMP as a great example of two significant code bases that would have diverged the whole orchestration space for telco,” explained Phil Robb, VP Operations, Networking & Orchestration, The Linux Foundation, referring to the creation of ONAP from two projects started by China Mobile and AT&T respectively.