Hello, you're watching the Cloud-Native Telco Summit, part of our year-round DSP Leaders coverage. I'm Guy Daniels, and today's discussion looks at roadmaps for cloud-native transformation. How do telcos translate high-level business imperatives into achievable migration strategies, and how do they balance legacy assets with cloud-native deployments? Well, I'm delighted to say that we have a full panel for you today, so let me introduce them in alphabetical order: Warren Bayek, Vice President, Technology Office at Wind River; Sean Cohen, Director of Product Management, Hybrid Platforms at Red Hat; Patrick Lopez, Global Telecom CTO at Pure Storage; Francisco-Javier Ramón Salguero, Multi-cloud Tools Manager at Telefonica and Chair of ETSI Open Source MANO; Carlos Torrentí, Presales Solution Architect, Cloud at Rakuten Symphony; and Joan Triay, Deputy Director and Network Architect at DOCOMO Euro-Labs and also Rapporteur for the ETSI ISG NFV. Hello everyone, good to see you all, and thanks so much for taking part today. Well, let me get straight to our first question, which concerns executive-level buy-in: which business drivers have proven most critical in securing board-level commitment to a cloud-native journey? And Sean, can I ask for your views first?
Sean Cohen, Red Hat (02:06):
Yes, of course. Thank you, Guy. So I think it boils down to two drivers: either making money or saving money, and let me unpack that. The first one, on the making-money side, is speed to market, the need for more service agility, which is critical. Obviously many of the service providers are embarking on the 5G opportunities, so enabling quick innovation and customer solutions is key; time to market, basically. On the saving-money side, cost optimization is another major driver. This actually addresses both reducing CapEx, by moving from proprietary hardware to more open infrastructure if you will, and then obviously OpEx, through automation and efficiency. Another trend we're seeing in drivers is actually around the unification of infrastructure, both network and IT, specifically due to pressures such as the Broadcom price increases, right? So we see a lot of service providers looking basically not just to move away from old virtualization platforms but at the same time to unify their networks, if you will. When it comes to new revenue streams, for example, obviously you cannot have a telco discussion these days without talking about monetization of the network APIs.
(03:31):
We at Red Hat addressed the API opportunity to basically introduce new services like AI-driven applications. We contribute to, and are a key part of, the CAMARA APIs, an open source initiative under the Linux Foundation whose whole goal is to make network functions more consumable for application developers, to facilitate new integrations across various networks and obviously drive new revenue streams. Additionally, there are the regular drivers of scalability and resilience, obviously through open, disaggregated networks, which is also a key consideration. But ultimately it's the funding levels: what we've seen across the board is that they pretty much remain flat, they're not growing. And so you look at how you can now put that legacy technology, if you will, on the back burner while doubling down on new funds for areas like the cloud-native and AI journey. And speaking about AI, the key takeaway here for me is that for many boards cloud-native is the biggest one, but the thing to keep in mind is that cloud-native is the foundation, the foundation for becoming an AI-native operator. The foundational element is becoming agile: you need that agile infrastructure underneath, and the agile network, basically, to fund it. So all of these trends, and obviously I would put the AI-native operator more in the make-money bucket, all of them basically rely on the cloud-native journey.
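To make the CAMARA point above concrete, here is a minimal Python sketch that assembles a Quality-on-Demand style session request, the kind of network-API call an application developer would make to an operator. The field names follow the general shape of the CAMARA QoD API, but the profile name and payload structure here should be treated as illustrative assumptions, not the exact published schema.

```python
import json


def build_qod_session_request(phone_number: str, app_server_ip: str,
                              qos_profile: str, duration_s: int) -> str:
    """Assemble a CAMARA-style Quality-on-Demand session request body.

    Field names mirror the general shape of the CAMARA QoD API; check
    the published OpenAPI specification before relying on them.
    """
    payload = {
        "device": {"phoneNumber": phone_number},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": qos_profile,   # e.g. a low-latency profile name
        "duration": duration_s,      # requested session length, in seconds
    }
    return json.dumps(payload)


# A developer-facing SDK would POST this body to the operator's QoD
# endpoint; here we only build and inspect it locally.
request_body = build_qod_session_request("+12065550100", "198.51.100.7",
                                         "QOS_E", 3600)
```

The point of the sketch is the abstraction level: the developer asks for a QoS profile for a device, and never touches the underlying network functions.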
Guy Daniels, TelecomTV (05:27):
Sean, thanks very much indeed. A big list of drivers there and there are many, many drivers that are important. Patrick, I'm going to come across to you as well. Are you seeing any different drivers? Are you seeing a different situation? What would you like to add?
Patrick Lopez, Pure Storage (05:41):
I think the drivers that were identified are certainly some of the most important. I think also that if we look outside of the telecom market, at the hyperscalers for instance and the public clouds, we've seen extraordinary progress in terms of the capacity to deploy services at scale, to automate at scale and to reduce the costs associated with the launch and management of services, and probably many network operators, noting that capacity, have been wanting to emulate it. And over the years we know that telecom networks are a little bit distinct from an architectural standpoint, so there's a lot of legacy that needs to be taken into account as well. But being able to bring those networks to a level where you can deploy services not in a matter of years but in a matter of weeks or days, and being able not only to identify any outage, any anomaly, but also to correct those in near real time, and even to predict how the network and how demand are going to behave over time.
(07:04):
All of those are drivers for the deployment and management of a cloud-native network. In addition to that, we spoke about how clouds and telecom networks are becoming closer and closer, and it is imperative for many network operators to be able not only to instantiate and manage but also to deploy workloads in the public cloud, in their private cloud on premises and at the edge, and also to manage the workloads and the data sets from the edge all the way to the cloud. And from that perspective, it's really important to be able to manage data mobility efficiently. And that's not possible if you do not have a cloud-native network, if you do not have a strong virtualization layer that is able, basically, to tag datasets and to deploy, manage and use them where it meets the purpose.
Guy Daniels, TelecomTV (08:11):
Absolutely. Thanks very much for those insights, Patrick. And let's go across to Francisco-Javier.
Francisco-Javier Ramón Salguero, Telefonica & ETSI (08:19):
Yeah, well, I tend to agree with many of the drivers already mentioned. I would like to describe more or less the itinerary of identifying some of those drivers, because it's not as immediate as identifying them all at the same time. So I think we could start by saying the cloud-native journey starts because, for the network, you want to see the same benefits that you saw in IT: when you adopted cloud-native in IT, you already saw a huge ecosystem and some benefits. So to some extent you want to extrapolate them to the network functions domain, and that is the promise, and that's why it makes it easier to get that buy-in on the journey, right? In the end, what you have found in IT, and it's something that we are seeing in the telco clouds as well, is much better interoperability and separation of concerns.
(09:31):
The applications have better expectations of being easily deployed on a common infrastructure, because there is also a clearer separation between them, and there are much clearer rules and restrictions. So that translates, in the end, into a much shorter integration time for a new application, and in turn for new network functions, and better expectations for portability of the workloads across different sites, different locations and even among different vendors, because we are going to an environment that is moving in the direction of supporting multiple sites, multiple vendors and different generations of the technology. So the benefit of the better separation of concerns is that it gives us much better flexibility than alternative virtualization approaches. Also, something that is quite pragmatic on this front is that many of the new network functions are coming packaged in that way: they are intended to run on Kubernetes, and some even rely on cloud-native services, even PaaS-type ones.
(10:51):
So in any organic renewal or tactical renewal of elements, you need to address that whether you want it or not. So it's better to be prepared and have a strategy, instead of going one by one through that renewal; it is better to adopt it. And something that we are perceiving right now is that once you have adopted these technologies, you have much better possibilities for automation, because the cloud-native environment paves the way towards declarative operations, GitOps and the like. So it gives you a much better controlled infrastructure, and you can be more flexible in the way you manage it, because you are moving away from a long choreography of steps, which is quite typical of telco, to something that is much flatter: you put in a configuration and a set of policies, and all of them are unambiguously injected into the system, so to say. So in terms of automation, things become much simpler and more natural from the perspective of the business logic. All that combined is giving us clear advantages that are being perceived per use case in operation.
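The move from a choreography of steps to declarative operations can be sketched in a few lines of Python: instead of scripting each action, an operator-style loop compares a declared desired state against what is observed and derives the actions needed. The component names and state shape below are invented for illustration; real systems like Kubernetes controllers follow the same pattern against API objects.

```python
def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions that drive observed state toward desired state.

    Both arguments map a component name to its configuration, here just
    a replica count; the names below are hypothetical examples.
    """
    actions = []
    for name, config in desired.items():
        if name not in observed:
            actions.append(("create", name, config))   # missing entirely
        elif observed[name] != config:
            actions.append(("update", name, config))   # drifted from spec
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))     # no longer declared
    return sorted(actions)


# Desired: what the declarative policy states; observed: what runs now.
desired = {"amf": {"replicas": 3}, "smf": {"replicas": 2}}
observed = {"amf": {"replicas": 2}, "old-mme": {"replicas": 1}}
plan = reconcile(desired, observed)
```

The business logic only ever edits the desired state; the loop, run continuously, removes the ambiguity of hand-ordered steps.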
Guy Daniels, TelecomTV (12:09):
Absolutely, thanks. And also thanks for pointing out that some of these motivations go back a long way; they are historic, deep-seated motivations. Joan, I'm going to come across to you next for your comments on this opening question about what is driving the cloud-native journey.
Dr. Joan Triay, DOCOMO Euro-Labs & ETSI (12:28):
Yeah, basically I agree with all the ideas that have already been expressed, and maybe my point is to emphasize a little what is also happening in our company: the imperatives that we receive that hint at a journey towards cloud-native are not imperatives saying that we want the network to be cloud-native. That is not the imperative we receive from our top management. The imperatives, or the objectives, are, as was already said, better TCO and more flexibility in our deployment of the networks, and it is a task of the engineering teams to translate that imperative into actual actions. And of course, with the availability of the technologies emerging in the cloud-native ecosystem and in virtualization, we realized that there are ways to achieve cost savings, for example in the infrastructure, and also to make the network a lot more flexible, because we are managing, let's say, pieces of software instead of monolithic boxes.
(13:45):
So in a sense, I would like to just emphasize that maybe we need to focus a lot on how to achieve the best networks, the most cost-effective networks, the most flexible networks, and then leave it up to the engineering teams to design them in a way that achieves those goals. And one aspect to also reflect on is that some of these goals and objectives are achievable not only by using the very latest technology in the cloud-native ecosystem. In our case, for example, we already started the cloud-native journey a long time ago, and by introducing VM-based technologies we already perceived and gained a lot of merit. So it's not just a matter of one specific technology; it is important for us to also understand the global ecosystem and all the available technologies, and make good use of them.
Guy Daniels, TelecomTV (14:49):
Yeah, absolutely. Thank you, Joan, thanks for that. And we'll come across to Carlos as well. Carlos, what are your thoughts?
Carlos Torrentí, Rakuten Symphony (14:55):
Well, I agree with what has been said so far by all the panelists. If I may put it in Rakuten terms, in terms of the experience of the operation launched in Japan, especially because it was really one of the pioneers in launching cloud-native operations in production: I think the key aspects there are, as Sean was mentioning at the beginning, the speed of innovation and the cost savings. The cost savings really come from a combination of hardware savings and the automation achieved in deployments and in the maintenance and supportability of the full production network. And one very interesting aspect, maybe touched on a little by the previous panelists, is that as operators transform into more of a digital services provider, there are going to be a lot of services that require the capability to be offered in a much faster way.
(16:04):
So some of the things we are doing at Rakuten are to try to create or leverage that ecosystem of companies, to make sure that customers of one of the Rakuten branches can go and receive better services if they are customers of the mobile operator. That really creates a lot of customer stickiness, and that is only possible if the speed of innovation and the speed of launching services, for example offering easy plans to frequent travelers, those kinds of things, are available through the use of cloud-native technologies: to speed up the development cycle, to have those analytic capabilities, to have those APIs that allow you to plug those services together. So I think that's a very important aspect of cloud-native operations as operators move to that sort of digital-provider mode.
Guy Daniels, TelecomTV (17:02):
Thanks very much Carlos and Warren, do you want to add comments to this question as well?
Warren Bayek, Wind River (17:08):
Yeah, sure. I'll touch on something that hasn't been mentioned yet, while I obviously agree that everything that's been said is really apropos. And of course, getting board-level buy-in is largely a TCO function, always, right? They're all about making money and saving money, as the first panelist mentioned. But something we're hearing more and more from board-level folks, in terms of whether it is worth it and why they would want to go on this cloud-native journey, is risk aversion, and we're hearing a lot more discussion about compliance and sovereignty as we go to a cloud-native structure in the telco space. How do we assure that our data is resident correctly and that we meet the regulatory obligations in each country, in the various places that we're going? So we're hearing more and more about how we make sure that happens. As an industry, we need to keep very close tabs on data sovereignty and assure the board members that as we go through this cloud-native journey we are protecting their data and meeting all the regulatory obligations that they're going to have to comply with going forward.
Guy Daniels, TelecomTV (18:15):
Yeah, thanks very much, Warren. And that's an increasingly important aspect. So look, we've talked about business drivers, we've talked about securing board-level commitment to cloud-native. How then do we structure a phased migration blueprint that reconciles legacy network functions with cloud-native adoption? And Joan, I'm going to come across to you, because I'd love to ask for your insights and views on this first.
Dr. Joan Triay, DOCOMO Euro-Labs & ETSI (18:48):
Thanks, Guy. Well, yeah, I'm happy to actually be on this panel with diverse operators, because we can also provide our different perspectives. But the reality is, of course, that many of the communication service providers worldwide cannot introduce changes drastically: mobile network operators, for example, are on lifecycles of 10 years per generation, and it is unavoidable that we have to be able to keep service and reliability for our customers over the long term. So we cannot just switch from one day to another to a different implementation form of our network. Which, by the way, I believe the use of cloud-native technologies and virtualization may actually change this pattern that has held for the last 30 or 40 years, but maybe that is a topic for another panel. In our case, we don't have a very dedicated blueprint for one specific project, but for migration we have basically played along three different perspectives.
(20:16):
One of the perspectives is making the migration, let's say, in a vertical-stack manner, starting from the bottom, from the infrastructure. The reason for doing this was, for example, when we introduced the early deployments of the VM-based mobile network core. In that case, we wanted to make sure that there was no impact on our operations, so we wanted to keep the same operational model of the network but start making changes from the infrastructure perspective; that means the form of implementation of the network functions, using virtualization and containers. And even though that of course cannot bring the full benefits, it already brought us quite a few benefits in terms of, for example, cost savings because of the change of the infrastructure. And at the same time, we could already start leveraging certain features, like scaling and reliability, that were not available in previous generations.
(21:33):
The second perspective we also used is the application of the technology in different domains. And this is probably very well known: what many of the major operators have done is first introduce virtualization and cloud-native into the core network and then further expand it to the access network. This was driven by two factors. One factor is technology readiness: it happened that, from some time ago, certain technologies were more suitable for the virtualization and cloudification of certain network functions, so there was no reason not to assume we could make the implementation using those technologies while waiting to see whether they could be applied to other network domains. And that brings one important aspect: the learnings we get from one domain, we can also apply to the next domain. That also depends on the scale of the network and the domain that needs to be tackled; so maybe start slow, start small, and then grow from there. The last perspective, or case, that we have used in our migration is handling the end of life of our products, of course, and also trying to increase capacity in the network. So overall, I would say that none of the specific migrations I have mentioned can bring the whole benefit on its own, but when we play with all of them together, that gives the full merit of virtualization and cloud-native to the network operators.
Guy Daniels, TelecomTV (23:32):
Sure, thank you for talking through those strategies that you've implemented and seen success with. Well, Sean, let me come across to you. What are your thoughts or comments on how to enable a successful migration program?
Sean Cohen, Red Hat (23:47):
Yeah, so first of all, I want to echo what we heard, right? This is an evolution and not a revolution. You heard about the long life cycles we have to maintain. We have all the legacy, we have all the VNFs in place, which is still the legacy we have to power 4G networks and so forth. At the same time, we need the ability to innovate and provide the new services. So there are different strategies we've seen work, right? One of them is what we call lift and shift: you can select specific VNFs that can be containerized, that's why we call it lift and shift, and you package them in containers onto a Kubernetes platform. The benefit is that, yes, obviously it's not completely cloud-native, but it offers immediate benefits such as orchestration and management without potentially the full refactoring cost. So we see that pattern as well.
(24:44):
I think it's also about coexistence. I can tell you what we've done at Red Hat specifically: one of the offerings we have is the OpenStack platform, and in our latest versions we actually modernized the platform itself to help service providers go through that evolution, as I said earlier, by introducing a modernized control plane already based on Kubernetes and OpenShift. By doing that, we basically already get all the benefits. It's not a lift and shift, but it's the same concept if you think about it: you already have the benefits of a cloud-native deployment, because all the OpenStack services that handle the VNFs are already deployed and operated cloud-natively. So the day-two operations are much faster, deployments, upgrades and so forth, while the compute nodes are still RHEL-based, with long life cycles and all the APIs we know.
(25:38):
So that's another example of how we allow a gradual move. But going back to what I said earlier about the foundation, we believe the way to solve it is, it's like building a house, right? Or, more importantly, renovating your house: you don't just leave the house. Yes, you can completely bring down the house and rebuild it, but you can also just add a kitchen while you still live there. This is the same concept. So with our hybrid cloud foundation, which includes OpenShift as that open-standard cloud platform, you have a foundation that allows a gradual transition. Meaning, you can run VNFs on OpenStack APIs but still maintained cloud-natively; you can have VNFs running natively on Kubernetes with our OpenShift Virtualization; and you can have CNFs that are already cloud-native. But the thing is, they are all managed the same.
(26:40):
So you basically have a foundation that can span on-premises, edge and potentially the public cloud, if you need to burst into the public cloud, but it's all managed the same, as a singular modernized network fabric. These are the practices we're seeing, and again, at the end of the day, something you cannot avoid is doing detailed assessments of the workloads, right? As Joan mentioned earlier, not all applications are ready, and you really have to go down and see which ones are going to be easier to modernize, which ones are better suited to the lift-and-shift concept, and so forth, so you can actually move forward. And you also have to consider the dependencies on APIs and on the different ecosystem solutions you have. So obviously it's not a complete revolution, but those are the strategies we've seen working with service providers in the field.
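A workload assessment like the one Sean describes can be sketched as a simple scoring pass over an inventory of network functions. The traits, weights and function names below are invented for illustration; a real assessment would weigh many more factors, such as state handling, licensing, API dependencies and performance constraints.

```python
def assess_workloads(workloads: dict) -> dict:
    """Bucket network functions by a naive containerization-readiness score.

    `workloads` maps a function name to hypothetical boolean traits;
    the scoring weights are illustrative, not an industry rubric.
    """
    plans = {}
    for name, traits in workloads.items():
        score = 0
        if traits.get("stateless"):            # easy to reschedule pods
            score += 2
        if traits.get("horizontal_scaling"):   # fits replica-based scaling
            score += 1
        if traits.get("kernel_dependencies"):  # ties the function to a VM/host
            score -= 2
        if score >= 2:
            plans[name] = "refactor to cloud-native"
        elif score >= 0:
            plans[name] = "lift and shift"
        else:
            plans[name] = "keep as VNF for now"
    return plans


# Hypothetical inventory: one hard case, one easy case, one in between.
inventory = {
    "upf": {"stateless": False, "horizontal_scaling": False,
            "kernel_dependencies": True},
    "nrf": {"stateless": True, "horizontal_scaling": True,
            "kernel_dependencies": False},
    "legacy-hss": {"stateless": False, "horizontal_scaling": True,
                   "kernel_dependencies": False},
}
plan = assess_workloads(inventory)
```

The output is a per-function migration plan rather than a single verdict, which matches the "toolbox, not one hammer" point made elsewhere in the panel.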
Guy Daniels, TelecomTV (27:35):
Fantastic. Sean, thank you very much. We will go in a moment to Patrick, but first of all I want to go across to Francisco-Javier. Francisco-Javier, your thoughts on migration blueprints or strategies?
Francisco-Javier Ramón Salguero, Telefonica & ETSI (27:45):
Yeah, I agree with most of the points that have been made. I would just like to add a couple of aspects that I believe are important boundary conditions for this migration. One is that the migration towards cloud-native is no longer a risky decision. Moving from one type of technology to another has traditionally been risky, but it's no longer the case, because for many of the network functions we know, the market is taking you to the point where, for the next version of component X, if you want to move to the newest functionality, you are forced to go to Kubernetes. And that is an excellent driver for modernizing many of the applications you have, just through organic renewal. And the point is to be ready: to have an infrastructure platform that you are familiar with, that you already have in place, and that is ready to scale and grow, because in the end those opportunities will come.
(29:04):
The other point I think is relevant is that when you enter into these new cloud-native technologies, it's a continuous journey. You can never declare that you have finished the migration, because it puts you in a mode of continuous renewal of the infrastructure and the applications, due to the pace of this technology. So you need to be capable of, and develop the skills and the automation capabilities for, continuously upgrading both the software and the platforms below, as opposed to the more static approaches we could follow in the past under what I would call the pragmatic and conservative approach. Here, the pragmatic and conservative approach is also about moving continuously, and you need to embrace that as part of the new thing that is coming. And the third point I wanted to mention is that, besides all these new technologies, as some of the panelists have said, there are still some workloads that work based on virtual machines.
(30:24):
And that's not only technically possible but sometimes technically convenient, because there are many use cases where deployments based on VMs, or remaining on bare metal, or even on appliances, are the most convenient way to go. So you need to analyze that: it's not just because there is a new technology that you need to embrace it for everything. You need to consider carefully, case by case, and adopt the mindset of having a toolbox instead of just one hammer, right? I think that is the difficult activity we have in all these migration plans, because in the end we need to be conscious of the decisions we take, and pragmatism is sometimes what drives you to that decision. So, what interesting times we have ahead.
Guy Daniels, TelecomTV (31:20):
Absolutely. Thanks very much for that, Francisco-Javier, and we're going to go to Carlos soon, but Patrick, I promised to come to you. So Patrick, what are your thoughts on migration?
Patrick Lopez, Pure Storage (31:29):
Thanks, Guy. I just want to bring a different angle to the migration to cloud-native: the angle of data. Very specifically, at Pure Storage we do migration day in, day out, and we look at how networks evolve from appliances to disaggregated, virtualized bare metal, to cloud-native, and soon to AI-native. And one of the important elements of that migration is the data. I think what we haven't discussed today is the fact that moving to cloud-native in many cases means that applications and functions that had stateful data become stateless. So it is very important to have an architecture that allows the data sets that are going to be collected, from the edge, from the transport, from the core, from the OSS, to be processed end to end, holistically, throughout the network. And that is only possible, with at-scale, real-time kinds of capabilities, if you have a data infrastructure that supports it: a data infrastructure that has migrated from legacy disk storage to all-flash storage that provides the low latency and high performance that are necessary, particularly for those workloads that are going to be supporting artificial intelligence and machine learning.
(33:06):
So from that perspective, when you look at the migration, or the introduction of cloud-native network functions into a telecom network, it's really important to look at the end-to-end infrastructure and the end-to-end architecture for the data pipeline. From that perspective, having a holistic approach, having a data storage management infrastructure that is based on Kubernetes, such as Portworx by Pure Storage for instance, allows you to have visibility and control of the end-to-end storage from the edge to the cloud.
Guy Daniels, TelecomTV (33:52):
That's great, Patrick, another angle for us all to consider there. Thank you. And Carlos, let's come across to you for your thoughts on migration.
Carlos Torrentí, Rakuten Symphony (34:00):
Yeah, I was going to mention the data aspect as well, so Patrick did a little bit of a spoiler for me. But what I would add is that, as part of the assessment of the different types of workloads that have been mentioned here, an important bit of that migration is that we as vendors try to follow, and help, the operators in doing it; and that's a process, as the previous panelists said. So I think it's important on one side to have the ability to advise on those migration capabilities, as well as having the tooling to be able to do the migration. Kubernetes is quickly evolving, especially to support virtual machines with virtualization technologies, and the community has grown quite a lot in recent years to support those workloads. So I think there are enough capabilities in Kubernetes; for example, at Rakuten Symphony we have a layer that can support both virtual machines and containerized functions on the same runtime layer.
(35:17):
And that brings some operational benefits to the migration as well, because you don't need to have different tools; you can just use a single tool to manage everything. I think that also links to the fact that in those migration periods, those extended migration periods, there are going to be VNFs and CNFs, and there have to be specific tooling, specific management tooling, that allows those functions to coexist. And that means, on the OSS layer, it's key to have the right automation processes, it's key to have the right CI/CD processes that can talk to the different APIs of the VNFs and the CNFs to manage them, because, as the previous panelists said, this is a long process, so you need the ability to run it for a long time, right? So it's important.
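The "single tool for VNFs and CNFs" idea Carlos describes can be sketched as a dispatcher that presents one lifecycle interface while routing to workload-specific drivers underneath. All class names and the returned command strings are invented for illustration; real drivers would call a VIM or a Kubernetes API rather than return strings.

```python
class VnfDriver:
    """Drives a VM orchestrator (hypothetical API, returns a dry-run plan)."""
    def deploy(self, name: str) -> str:
        return f"vim: instantiate VM image for {name}"


class CnfDriver:
    """Drives a Kubernetes cluster (hypothetical API, returns a dry-run plan)."""
    def deploy(self, name: str) -> str:
        return f"k8s: apply chart for {name}"


class UnifiedLifecycleManager:
    """Route lifecycle calls to the right driver, so one CI/CD pipeline
    can manage VNFs and CNFs side by side during a long migration."""

    def __init__(self):
        self._drivers = {"vnf": VnfDriver(), "cnf": CnfDriver()}

    def deploy(self, name: str, kind: str) -> str:
        # The caller never sees which backend handles the function.
        return self._drivers[kind].deploy(name)


mgr = UnifiedLifecycleManager()
```

The operational point is the single entry point: the OSS layer scripts against `mgr.deploy(...)` regardless of whether the function is still a VM or already a container.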
Guy Daniels, TelecomTV (36:14):
Great. Absolutely, Carlos, thank you. And this leads me to our next question, and Carlos, I'd like to stay with you for the first answer. I want to ask about reference architectures: which ones are most used by telcos for their core, edge and RAN workloads? What are you seeing, Carlos?
Carlos Torrentí, Rakuten Symphony (36:33):
Well, I would start with the example of Rakuten Mobile there, and I think that example is relevant, being, as I said previously, one of the first completely disaggregated production networks in the world. I think the key aspect is that the functions are going to be deployed in many different disaggregated sites, across the far edge, across the near edge and across central data centers, and the types of workloads are going to be really, really different. So we have workloads that have been containerized from the start, things like the 5G core and things like the RAN CU and DU. Those are typically functions that have been containerized right from the beginning, so those workloads are very well suited to cloud-native architectures, because they can be deployed in containers and their lifecycle can be managed through Kubernetes. So these are really workloads that have been tuned for that sort of deployment.
(37:41):
And there are also virtual functions as well, because one of the experiences we had is that not all the functions were containerized at the beginning. So having that sort of unified layer that can run both types of workloads is really key. I think with time organizations are quickly adopting Kubernetes as the de facto standard for running workloads in these disaggregated architectures; CNCF reports vary, but they cite 60 to 80% of workloads really being deployed in production. And I think we are seeing a wave of new functions that are coming to be containerized and that are relevant to deploy at far-edge locations, for example inferencing for AI, or, let's say, pre-bundling data to send it on for later analysis. All those kinds of things are really new workloads that telcos are bringing to this disaggregated architecture, right?
(38:56):
So I think a key aspect there is the ability to manage all of that disaggregated network with a single pane of glass. And that involves a lot of layers in the telco network. It involves observability, of course, because you have to oversee what is happening in your network and you have to react. It involves a very strong orchestration layer as well, in order to be able to deploy those new workloads, manage the lifecycle of the software, and manage the lifecycle of the hardware that gets deployed in those remote sites. All of those are fundamental components of the cloud-native infrastructure that needs to support those workloads. And as we move from day one into day two, there come all sorts of additional requirements for those workloads: making sure that your workloads stay secure, making sure that you back up your data so that you can restore if anything goes wrong, and being able to react to any security problem or any sort of incident in your network in the most rapid way through automation. Those are really key factors that will help you on that cloud-native journey. So I think those are very important factors.
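Carlos's point about very different workloads landing at far-edge, near-edge and central sites can be sketched as a simple latency-budget placement rule. This is an illustrative sketch only: the tier names, latency thresholds and workload examples below are hypothetical, not Rakuten's actual values.

```python
from dataclasses import dataclass

# Site tiers in a disaggregated network, ordered from closest to the radio
# to most centralized. Latency figures are illustrative placeholders.
TIERS = [
    ("far-edge", 2),      # max round-trip latency served from this tier, ms
    ("near-edge", 10),
    ("central-dc", 50),
]

@dataclass
class Workload:
    name: str
    kind: str              # "container" or "vm"
    max_latency_ms: float  # the workload's latency budget

def place(workload: Workload) -> str:
    """Pick the most centralized tier that still meets the latency budget."""
    chosen = TIERS[0][0]
    for tier, latency in TIERS:
        if latency <= workload.max_latency_ms:
            chosen = tier   # keep consolidating toward the core while we can
    return chosen

# A containerized DU is latency-critical; a core-network function far less so.
du = Workload("vdu", "container", max_latency_ms=2)
core = Workload("smf", "container", max_latency_ms=40)
print(place(du))    # far-edge
print(place(core))  # near-edge
```

The design choice mirrored here is that centralization is preferred for operational simplicity, and a workload is only pushed out toward the far edge when its latency budget forces it there.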
Guy Daniels, TelecomTV (40:26):
Carlos, that's great. Thank you very much for those insights. And I'm going to come across to Warren because I really want to hear what you've been seeing here with workloads, Warren.
Warren Bayek, Wind River (40:35):
Yeah, I think it was mentioned that this is really a layered approach. It kind of has to be for the full telco architecture. At the core, as was mentioned, as more and more of the workloads become containerized, Kubernetes is becoming the dominant foundation, but there will always be VMs, and we do have to support these legacy applications for a long, long time. As mentioned, 4G; heck, we see 3G and even 2G in places. So having things like OpenStack, where VM density is critical, for all the complex network functions, as well as KubeVirt for lighter-weight VM handling, covers the core. As you move further out to the edge, you see requirements that are very different from generic IT stacks. In the case of Wind River with Samsung, we provided a fully virtualized, containerized 5G vRAN for operators like Verizon and Telus and others, where you have to support deterministic, super-low latency out at the far, far edge, as well as, obviously, the five- and six-nines high availability that you don't necessarily need at the core.
(41:45):
Though availability and the ability of services to stay up is important at every layer, at the far, far edge it's a much higher bar that has to be passed. So having a layered approach like that is critical in terms of the architecture. The part that pulls it all together, though, has been mentioned a few times: the ability to observe and feed back what is happening at each of the layers, the CI/CD, the zero-touch lifecycle management for day one and day two. That is something very important to build into these architectures, to be able to support not only the old legacy Gs, but also, as we move to fully cloud-native versions of 5G and eventually to 6G, that ability to observe and operate and run a CI/CD and zero-touch lifecycle management becomes more and more important. So that layered architecture, with the overarching orchestration capabilities, is what we're seeing happen in the successful network deployments.
Guy Daniels, TelecomTV (42:52):
Great. Thank you very much for those insights, Warren, and let's go across to Joan for your comments as well.
Dr. Joan Triay, DOCOMO Euro-Labs & ETSI (42:59):
Yeah, thank you. I very much second what Carlos said. I mean, it's important to take as a reference an architecture that allows for the variability of managing different kinds of workloads, either VM-based or container-based, from a more logical point of view. I'm not going to go into the details of Kubernetes or OpenStack, but in our case, for example, what has always proven very valuable is actually making use of the reference architecture from ETSI NFV, precisely because it has enabled us not just to determine the possible points at which we could integrate different solutions from different solution providers, but also because of the consideration it has taken to embrace different kinds of deployment technologies, like VM-based and containers. So it has become quite useful, because it was very easy to map which software assets being developed in the open source and cloud-native ecosystem could be used, and how to actually integrate them with the rest of the full stack, which is, as also mentioned, very important.
(44:18):
It's not only about looking at the bottom layer of the infrastructure, but also how you are going to connect it to the rest of the OSS and BSS systems. And for that matter, we have always come to the conclusion that the solutions from ETSI NFV were very useful for us. And since those solutions were perhaps targeted first at the core network domain, the fact is that we have also complemented them with additional frameworks or reference architectures, like O-RAN in the case of the vRAN. In those cases, basically what we have done is complement our baseline with the functionalities and the nice additions considered in those other reference architectures, to cover the specific requirements we need to address for the deployment of a virtualized RAN, like hardware acceleration, the RAN intelligent controllers, and the actual Open RAN disaggregation itself.
(45:28):
So by inserting those additional functionalities on top of our baseline of the ETSI NFV reference architecture, we have actually successfully commercialized the vRAN this year. And one important aspect I would like to reflect on about making use of the ETSI NFV reference architecture is that, as I mentioned, it is very easy to map which of the available open source solutions out there we can integrate. And not only that: it has also become quite useful for integrating solutions provided by hyperscalers. A recent example is the use of EKS Anywhere for the deployment of the vRAN; it was quite straightforward to map the logical functionalities that we envision in the reference architecture onto the actual offerings we were getting from the hyperscalers.
Guy Daniels, TelecomTV (46:31):
Joan, thank you very much for those comments. Sean, do you want to add comments on architectures or references for workloads?
Sean Cohen, Red Hat (46:40):
Yes, thank you Guy. So, as everybody knows, the Red Hat website actually holds a great set of reference architectures that we have already published. The most used ones are for core, RAN and edge workloads. We have a large ecosystem engineering organization, and this is what they do: we validate and maintain these reference architectures to allow customers to build the network as they adopt cloud-native platforms. There is a very large range of topics where we go deeper, but to give examples: for the 5G core, we have blueprints for deploying containerized 5G core network functions on premises; and at the same time, as mentioned, we have the edge workloads as well, and that's a critical component of the architecture. So, looking at how you deploy OpenShift in compact, remote locations to run applications closer to the end users, enabling low-latency services, facilitating the management and automation of thousands of distributed edge sites, and so forth. These are validated, tested environments. In essence, they provide a robust open source framework to build the cloud infrastructure for the next generation, with basically a good starting point. Think of it almost like a ready Lego brick that you can integrate into your data center, one that has been tested, proven, and automated. Right.
Guy Daniels, TelecomTV (48:18):
Sean, thank you very much for those additional comments. And Francisco-Javier, let's come across to you as well.
Francisco-Javier Ramón Salguero, Telefonica & ETSI (48:25):
Yeah, I think many things have been mentioned already, but I want to add one comment on these architectures and their evolution. I mean, in these new architectures we are realizing that there are new challenges, and we are already identifying new components for the overall architecture that initially didn't seem to be needed. As was mentioned, for instance, CI/CD: now that you have the possibility of managing a lot of software, perhaps you should put a CI/CD pipeline in there, so it integrates properly with all your internal workflows and ticketing. But also consider what happens if you go further in the refinement of your deployments and adopt declarative deployments, going the GitOps way. I mean, for small deployments and centrally located deployments this could be relatively straightforward. However, when you are managing a large fleet of sites, with a number of clusters with different technologies, things become more complicated.
(49:37):
So now you are evolving from that traditional management and orchestration, related to more imperative ways of managing the environment, to something that instead needs to help you and support you in organizing those sets of policies so that they make sense together and can be as reusable as possible. That is the type of component we are seeing: a movement from choreographies, and those layered approaches to management, to something that is more self-contained and has its own behavior, fed by a collection of policies. In the end, the complexity of such policies is high, and they need to be tested and pre-validated beforehand, rather than as part of a large choreography of steps. So that is one of the requirements we are identifying in order to get the most from these technologies: being able to pre-test those policies, and also to manage a concurrent set of intents, which was something we didn't have to take into account, at least in cloud deployments, in the past.
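The shift Francisco-Javier describes, from imperative scripts to declarative, GitOps-style management, centers on a reconcile loop: a controller compares the desired state committed to Git with the observed state and derives the actions needed. The sketch below is a minimal illustration of that idea; the function names and data shapes are hypothetical, not any real GitOps tool's API.

```python
# Minimal sketch of a declarative reconcile loop: desired state (as it would
# sit in a Git repository) versus observed cluster state. All structures are
# illustrative placeholders.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions that bring `observed` in line with `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"deploy {name} v{spec['version']}")
        elif observed[name]["version"] != spec["version"]:
            actions.append(f"upgrade {name} to v{spec['version']}")
    for name in observed:
        if name not in desired:
            actions.append(f"remove {name}")  # drift: not declared anymore
    return actions

desired = {"upf": {"version": "2.1"}, "amf": {"version": "1.4"}}
observed = {"upf": {"version": "2.0"}, "old-probe": {"version": "0.9"}}
print(reconcile(desired, observed))
# ['upgrade upf to v2.1', 'deploy amf v1.4', 'remove old-probe']
```

The point of the pattern is exactly what is raised above: the loop itself stays simple and reusable across a large fleet of clusters, while the complexity moves into the declared policies, which can be tested and pre-validated before they are ever applied.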
Guy Daniels, TelecomTV (51:07):
That's good to hear. Thanks for those insights, Francisco-Javier, and Patrick, let me come across to you for what is going to be our last response on this panel. We're going to have to wrap the panel up after this, but Patrick, what are your thoughts on reference architectures for workloads?
Patrick Lopez, Pure Storage (51:24):
Thanks Guy. Again, I'll go back to infrastructure and to data for the reference architecture. What we have observed in that field has been quite interesting over the last 24 months or so. We've seen most of the leading network operators changing their perspective on the data infrastructure and the data architecture. Until recently, whether we're talking about the access, radio or fixed, the transport, the core, or the OSS/BSS, operators traditionally bought a function or a service, and the vendor of that function or service came with their recommended infrastructure: part of it was the compute, the networking, and the storage, and it was all pretty much pre-integrated. What that meant is that, basically, in the RAN you had some storage for the function that would overflow into network-attached storage, eventually find its way into a data lake, and then a data warehouse or the cloud, so that it could be read by the OSS and provide some level of analytics.
(52:50):
And that was all perfectly fine if you wanted to look at the state of the network a day ago, or up to an hour ago if you will. But with the need for automation and the need to provide near-real-time capabilities, in terms of being able to react to network states and even anticipate network states, it has become apparent that this infrastructure, this architecture, is no longer capable of serving that need. So most network operators that have looked at that problem have been trying to reduce the time to insight, reducing the time to action between something happening at the edge of the network, for instance, and something needing to be done in terms of configuration or optimization of that network element. And the only way to do that has been, basically, to extract the data as fast as possible from that network function, whether it is an appliance or a cloud-native function, and put it as fast as possible into a state where it can be read and acted upon by the algorithms, by AI and ML.
(54:06):
And what that means is that, basically, those network operators have started looking at the data infrastructure, the data pipeline, end to end as a category, and have started putting together holistic data management that goes from the edge to the cloud, relying on one single logical data storage capability. So that's, from our perspective, what we've seen in terms of an overarching strategy for network operators: to accelerate their time to insight, and to accelerate their capacity to react to or anticipate network state, by implementing, across the access, the transport, the core and the OSS, a holistic data management infrastructure.
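Patrick's "shrink the time to insight" argument can be sketched as a store where telemetry is queryable the moment it lands, rather than trickling through NAS, data lake and warehouse batches. This is an illustrative sketch only; the class, method names and the `prb_utilization` metric are hypothetical.

```python
import time
from collections import deque

class TelemetryStore:
    """Single logical store: events are readable as soon as they are ingested."""
    def __init__(self, window: int = 100):
        self.events = deque(maxlen=window)  # bounded, most recent events only

    def ingest(self, site: str, metric: str, value: float):
        self.events.append({"site": site, "metric": metric,
                            "value": value, "ts": time.time()})

    def latest(self, site: str, metric: str):
        # Scan newest-first for the most recent matching sample.
        for event in reversed(self.events):
            if event["site"] == site and event["metric"] == metric:
                return event["value"]
        return None

def needs_action(store: TelemetryStore, site: str, metric: str,
                 threshold: float) -> bool:
    """Near-real-time check an ML or automation layer could run continuously."""
    value = store.latest(site, metric)
    return value is not None and value > threshold

store = TelemetryStore()
store.ingest("cell-042", "prb_utilization", 0.93)
print(needs_action(store, "cell-042", "prb_utilization", 0.85))  # True
```

The contrast with the batch pipeline described above is that the decision function reads the same store the network function writes to, so the time between an event at the edge and an automated action is bounded by ingestion, not by a chain of exports.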
Guy Daniels, TelecomTV (55:05):
Thank you, Patrick, and thank you everyone. Unfortunately, we must leave it there. We will continue this debate during our audience Q&A show later. I mean, there are so many areas still to discuss: security, for instance, we still want to talk about, and skills too. I'm not sure whether they will come up in our live Q&A show, but for now, thank you all so much for taking part in our discussion. And if you are watching this on day one of our Cloud-Native Telco Summit, then please do send us your questions and we will answer them in our live Q&A show, which starts at 4:00 PM UK time. The full schedule of programs and speakers can be found on the TelecomTV website, and that's where you'll also find the Q&A form and our poll question. For now, though, thank you for watching and goodbye.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Panel Discussion
This panel explores how to translate high-level business imperatives, such as cost optimisation, service agility and new revenue streams, into achievable migration strategies and reference architectures. It will look at end-to-end roadmaps that balance legacy asset utilisation with cloud-native deployments, examine operating models and share best practices for embedding security, observability and integrations into Kubernetes-based stacks. It will consider how to align business KPIs with technical milestones and build organisational capabilities to sustain continuous innovation.
Recorded September 2025
Participants
Carlos Torrentí
Presales Solution Architect, Cloud, Rakuten Symphony
Francisco-Javier Ramón Salguero
Multicloud Tools Manager, Telefónica, Chair of ETSI Open-Source MANO (OSM)
Dr. Joan Triay
Deputy Director & Network Architect, DOCOMO Communications Lab. Europe (DOCOMO Euro-Labs), Rapporteur, ETSI ISG NFV
Patrick Lopez
Global Telecom CTO, Pure Storage
Sean Cohen
Director of Product Management, Hybrid Platforms, Red Hat
Warren Bayek
Vice President, Technology Office, Wind River