Improving network optimisation through automation

Ray Le Maistre, TelecomTV (00:00:10):
So welcome back. This is session seven, improving network optimization through automation. Automation has been a mantra in telecom for about a decade already, and the drive for further efficiencies in the network, along with data traffic growth, is accelerating automation strategies. Advances in AI are enhancing the opportunities, but further developments are still needed, such as a unified approach to data collection and management, along with improvements in energy efficiency. At the same time, of course, the adoption of disaggregation and open networking risks adding complexity. So we're going to get onto all of that in just a few moments with our expert panelists. But first it's time for a quick one-to-one chat with one of our guests, and I'm delighted to be joined on stage by Vishal Mathur, who's global head of engagement at the Telecom Infra Project. Vish, thank you very much for joining us. Of course, the Telecom Infra Project is known mostly as TIP, isn't it? That's what people talk about. And Vish, TIP has been around for eight years now, and like the industry, the organization has evolved as well. Can you just give us a quick overview of its main areas of focus right now?

Vishal Mathur, Telecom Infra Project (00:01:39):
Yeah, absolutely. Thanks very much, Ray, for the question. For those who don't know who TIP is, very quickly: we've been around for eight years, and we're a trade association like the GSMA. We have around 150 to 200-plus member organizations, from telcos, startups, OEMs, system integrators, et cetera. The core focus and mission for TIP now is really trying to make sure that we can industrialize and scale the adoption of open, disaggregated network solutions, and build on top of that — so AI-based solutions — and showcase the way these reference implementations can be scaled out. So right now, in 2024, TIP is in a place where it's creating the structure to make sure we do that industrialization and further innovation. A few changes have happened just very recently: our board, which has consisted mostly of EMEA tier-one telcos plus Intel, Dell and Meta, just added Rob Soni from AT&T, which helps to generate a bit more of a global cross-section of understanding and strategy. And that governance flows down into our technical committee, looking at the future direction of these types of technologies, and then into our programs as well.

(00:03:06):
So to answer your question, the focus for TIP going forward is on five main areas. One, which continues to be one of the key areas, is Open RAN, and we'll talk a little bit about how that's going at the moment as a community; around 50% of our membership is very much Open RAN-focused. That spills over into another program called Telco AI. Everyone has got AI in their language, but when you look at what we heard today from the first panel — a use-case- or demand-generated idea of services creating requirements for networks to perform — TIP is basically a community looking at what those reference architectures can look like in the telco AI space. The third area is disaggregation in Wi-Fi, and that's become quite a strong development path for a community of ISPs, cloud controller providers, as well as access point developers and hardware manufacturers, on an open-source code base.

(00:04:14):
The other two areas are transport — so fixed and mobile: wireless, optical and IP transmission. There's quite an abundance of change in that space, where disaggregation between hardware and software is actually generating a lot of GA solutions. And then finally, not mentioned that much today, is neutral host and shared infrastructure. There's a strong cohort of towercos, infrastructure providers and SIs, and maybe the Garner example is one of those. But there is a really strong community and interest in trying to leverage passive infrastructure and start to develop value chains and business models for densifying urban areas, not just rural ones, as well as private networks and in-building requirements. So those are the five main focal programs, plus a governance change, et cetera.

Ray Le Maistre, TelecomTV (00:05:17):
Okay. So obviously in recent years we've seen disaggregation shift, in many instances, from the planning, test and trial stage into actual deployments, whether that's in the RAN or on the transport side. How has TIP's role evolved as things have shifted from PowerPoint plans and tests into actual commercial networks?

Vishal Mathur, Telecom Infra Project (00:05:47):
Yeah, absolutely. There are three main things. One: there's an abundance of examples out there, whether it's in the RAN, in transport, et cetera, where telcos or ISPs are really driving forward their own deployment plans. They're coming to TIP to surface those, and we're using them as reference points for the rest of the industry to see what is possible. A lot of this is about promoting and educating, using TIP as a channel to do that. Our member community showcases blueprints; they showcase how vendors have come together and shown that there's deployment, integration and lifecycle management happening in those deployments. The second is working out what is needed, and what the gap in the industry is, to build a bit more confidence. TIP's focus for a couple of years — we talked about something called Scope at MWC a couple of years back — has been system-level testing. For that to happen, you need a good understanding of what performance criteria or test criteria are required to ensure that a multi-vendor system can be deployed, operated and managed. It's not just about feature parity; it's about performance as well. So we tend to look at the whole system in that regard. And beyond test plans and test criteria, there needs to be capacity in the market for where testing can occur. A lot of that can be in operator labs or vendor labs, but what we're trying to do is offer a service to those demand-side players: rather than putting all the investment into their own labs to have capable infrastructure and test capability, let's look at some neutral test environments.
So TIP is now focusing on developing a global federation of capable, high-capacity test labs; these are members of TIP who offer that as a service. And then finally, it's about catalyzing more deployment. Some governments out there are interested not just in investing in labs but also in seeing actual deployment, so we're working with some of the influential governments to release catalytic funding in certain key strategic markets to drive forward more work and deployment.

Ray Le Maistre, TelecomTV (00:08:30):
Okay. And can you just talk briefly about the importance of a harmonized approach to data structures in digital service provider networks, and how TIP is working with its members here? That's pertinent to what we're about to talk about, but this is something that you've been looking at, isn't it?

Vishal Mathur, Telecom Infra Project (00:08:49):
Yeah, absolutely. There are two domains we've been trying to look at in terms of data and what sort of data structures are required. One, which is a little bit more mature, is around the RIC — xApps and rApps — and automation and optimization of workloads. Of course, the big exam question is: how do you get access to operator data? So through trials with operators, we are focusing on particular use cases and applications, and showing not only the idea of interoperable RIC platforms and applications, but also testing those in real-world scenarios and ensuring that when applications are put into the network, they can be leveraged and the data makes sense. The new aspect is the Telco AI project group, which has just been started by Deutsche Telekom, NVIDIA, Intel and a few other operators. And sorry — the other one: we are looking at digital twins as well.

(00:10:04):
So we're wondering whether there's a need for digital twins coupled with test labs. The Telco AI project group has just started up, so time will tell what that will look like: what are the use cases that will generate a need for reference AI architectures or distributed compute architectures, and what are the data requirements around that? Long story short, we're an industry body, so we are looking for harmonization of approaches — guidelines on how data should be leveraged, especially within the regulatory landscape — but also how you actually bring enough data together to start to develop a marketplace of applications, et cetera. We're still at the early stages on that.

Ray Le Maistre, TelecomTV (00:10:52):
But we've heard so much over the last two days about the need for real industry collaboration — the only way a lot of these problems are going to get solved — and TIP is one of those bodies that's helping that to happen. Yeah?

Vishal Mathur, Telecom Infra Project (00:11:10):
We're part of the equation. Where the GSMA sits very much in the telecom operator space, looking at demand-side requirements, TIP can focus not on standards but on referenceable architectures that can be leveraged in the network to expose capabilities. The world is moving towards this service-oriented environment, and we all want a piece of the action, a piece of the pie, but we need the ecosystem to come together around a harmonized set of requirements.

Ray Le Maistre, TelecomTV (00:11:45):
Okay, excellent. Thanks very much, Vish. Let's give Vish a round of applause, please, for bringing us up to speed with TIP. Now, don't go anywhere, Vish, because you are sticking around for our panel session. As ever, we need to bring more chairs onto the stage, so we're going to have our final video montage while we rearrange the furniture and invite our next panelists onto the stage. This video is hot off the press, from last month's Open RAN Summit. So roll VT, as they say.

Sushil Rawat, TELUS (00:12:23):
We have started deploying Open RAN in ten areas, pretty much urban environments where we have highly loaded sites, which gives us a real picture of how Open RAN is performing in terms of KPIs, customer experience and so on. And we've been pleasantly surprised to see that Open RAN is at par with, and in some aspects even better than, traditional RAN technologies.

Richard MacKenzie, BT (00:12:52):
When we start using the power of AI and machine learning, the RIC is in a perfect position to exploit that power. And as time goes on, as we start launching better services and optimizing the network, that's where the RIC becomes essential. So right now, no, we don't need it, but in the future, yes, we do.

Atoosa Hatefi, Orange (00:13:12):
Deployment of Open RAN and its introduction into our networks will be gradual. As a brownfield operator, we have very demanding requirements, and the introduction of Open RAN comes with some impacts in terms of operations, which also requires a lot of upskilling on our side.

Mallik Rao, Telefonica (00:13:31):
We need to disaggregate, we need to decompose the RAN software, because if you want to run AI automation, if you want to run the workloads in terms of CI/CD and DevOps, you cannot do it with a monolithic kind of architecture.

Petr Lédl, Deutsche Telekom (00:13:48):
We will see the deployments scaling in the brownfield environment as the renewal cycles come around, and of course by then it is important that we are able to move to the horizontal model of architecture and break the silos.

Ray Le Maistre, TelecomTV (00:14:18):
Okay, excellent. So time to get into our panel. But as ever, before we talk about improving network optimization through automation, we need to remind everybody that this session has a poll: one question, three answers, and you can only pick one. Here is the poll question: what will have the greatest impact on improving network operational efficiency during the next five years? As you can see, there are three options and you can only choose one, so please go ahead and vote. You'll find all of the polls in the DSP Leaders World Forum section on the agenda page, and as ever we will take a look at the results at the end of this session. So with that out of the way, let's welcome our co-host for this session: Diego Lopez, senior technology expert at Telefónica and an ETSI Fellow. Diego, you are a veteran not only of the industry but of our events here as well. You've been to a few, haven't you? And I mean that in the best possible way — of course you're experienced — but I think this is your first co-hosting session, isn't it?

Diego R Lopez, Telefónica (00:15:24):
Yeah, I think so. Well, let's see. I hope I'll live up to expectations.

Ray Le Maistre, TelecomTV (00:15:29):
Of course, absolutely, a hundred percent. And we have a great panel as ever to go with our co-host. So, as ever, we're going to get them to introduce themselves, starting at the far end with Anita.

Anita Döhler, NGMN Alliance (00:15:41):
Anita Döhler, CEO of the NGMN Alliance

Faiq Khan, Rakuten Symphony (00:15:47):
Faiq Khan, I'm part of Rakuten Symphony, leading OSS global sales.

Vishal Mathur, Telecom Infra Project (00:15:51):
Vishal Mathur, Global head of engagement at TIP

Darrell Jordan-Smith, Wind River (00:15:55):
Darrell Jordan-Smith, CRO at Wind River.

Sadayuki Abeta, NTT DOCOMO (00:16:03):
Sadayuki Abeta, head of Open RAN, NTT DOCOMO.

Ray Le Maistre, TelecomTV (00:16:04):
Okay, excellent. Welcome, everybody. So Diego, if you'd like to make your way to the lectern for the DSP Leaders address, please.

Diego R Lopez, Telefónica (00:16:11):
Okay.

(00:16:17):
So, well, regarding this role of co-host, thank you to those who decided to bring me here. I was thinking about the idea of automation, and why we are still talking about automation when, if you think about it, the ideas around automation — automatic closed loops, et cetera — have been around basically for centuries: they were considered by Watt at the end of the 18th century, and from that moment on we have had transfer functions, closed loops, et cetera. What I think is the real difference, and what is keeping us busy — and when I say us, I hope the whole industry, but at least my team in Telefónica — is our data: how we guarantee sustained and trustworthy data flows that can be properly interpreted by the control elements, and that can properly reach the controlled elements as well.

(00:17:20):
So the idea is that, if you think about it, the important thing we require to have proper automation — to take advantage of automation while we deploy and operate networks — is guaranteeing that those data flows are as they should be. Everything else, from the point of view of the operator, should be exchangeable: whether we are using algorithmic control, whether we are using AI, or whatever other technique comes in the future. What we own, what we have and can generate, and what we can control, is the data we are feeding it. And let me say this about the data: control actions are no more than data — the data we extract from the processing of the input data. What is very important when you think about these data flows is to consider their bidirectional nature. A control action is data coming out that has to reach the plant — what is being controlled — the same way that the data the plant produces as a product of the measurements is aggregated, combined, et cetera, to be consumed by the element that is controlling the closed loop. In the same way, we have to translate the outcome into something that makes sense.
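The bidirectional picture Diego describes — measurements flowing out of the controlled element, control actions flowing back in, both as data — can be illustrated with a toy proportional control loop. All names and numbers below are invented for illustration; this is a sketch of the pattern, not any operator's implementation:

```python
class Plant:
    """The controlled element: emits telemetry, consumes control actions."""

    def __init__(self, load: float = 0.9):
        self.load = load  # e.g. link utilisation, 0..1

    def telemetry(self) -> dict:
        # Measurement data flowing *out* of the plant
        return {"utilisation": self.load}

    def apply(self, action: dict) -> None:
        # A control action is just data flowing back *in*
        self.load -= 0.5 * action.get("capacity_delta", 0.0)


def controller(sample: dict, target: float = 0.7) -> dict:
    """Consume the measurement, emit a control action (also data)."""
    error = sample["utilisation"] - target
    return {"capacity_delta": error}  # proportional response


plant = Plant(load=0.9)
for _ in range(20):  # the closed loop itself: measure -> decide -> actuate
    plant.apply(controller(plant.telemetry()))
# plant.load converges toward the 0.7 target
```

The point of the sketch is that both arrows in the loop carry structured data, so the controller (algorithmic here) could be swapped for an AI-based one without changing the data flows.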

(00:18:47):
If you think about it, the flows are the same in different directions, and the processes are almost the opposite: in one case we are aggregating, in the other case we are somehow disaggregating, but it's important that everything is consistent and that we are working on making sense of those data in both directions. These data flows define everything that has to do with a control loop, apart from the loop itself. The loop itself has a definition that, as I said, is more than two centuries old, and it defines the loops, what they are connecting, and the lifecycle: when you activate a loop, when you put it into action, when you connect the loop with the appropriate data flows. And whatever mechanism for composition — which is essential in any system of reasonable complexity, and networks are extremely complex and heterogeneous — whatever pattern for composition of the control loops, whether you want to build a hierarchy, a federation, or any other kind of cooperation mechanism, requires again flows of data of different natures.

(00:20:08):
It's not just raw telemetry data: we can talk about exchange of knowledge, exchange of the models themselves, of parameters, whatever, but we are talking again about flows of data. And finally, there is something that I believe — particularly with the current trend towards smart controllers, AI controllers, generative AI, whatever — is more and more important, which is how you control essential properties: being able to authenticate the data themselves and the sources of the data; authorizing who can use the data and for what; and providing something equally essential, which is accounting, so you can know who used the data and what the data was used for. That sustains goals related to explainability: being able to analyze why something happened, why some decision was taken, tracing it back to the reasons and to how the data was provided during execution or during training — it is the same.

(00:21:24):
The idea is to trace the paths the data traverse and the sources of those data, and to be able to trace them back. Equally important: whatever we need for debugging, validation or evaluation of any model, what we need, again, is data — data we can rely on, data we can manipulate, and that is available at the right time and at the right place. With this in mind, some time ago we started a few projects on this. We have been working on these things since we started to deal with automation and with the goals of the different network generations; we started with our original 5G projects. Right now we are running two projects that are very much focused on these guarantees of provenance, access control, et cetera. One is a project called ROBUST, and in another project, called HORSE, what we are defining is how we can build what-if scenarios based on evidence, based on data, so we can evaluate in advance what the impact of a particular automated decision could be on network performance and characteristics. And with this I will close; I hope I have laid some ground for further discussion. Just let me say that when making my notes for this, I was reminded of a comment I read some time ago from an adviser to a politician about the economy. Translating it: it's the data, stupid. That's all. Thank you.
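The provenance properties Diego lists — authenticating data and its source, authorizing use, and accounting for who used it and for what — can be sketched as metadata carried alongside each telemetry record. This is a toy model: the field names, the salt-style keying, and the audit structure are all invented for illustration, not taken from ROBUST, HORSE, or any real system:

```python
import hashlib
import json
import time

def sign(record: dict, source_id: str) -> dict:
    """Wrap a telemetry record with source identity and an integrity digest."""
    body = json.dumps(record, sort_keys=True)
    return {
        "payload": record,
        "source": source_id,  # who produced the data (authentication claim)
        "digest": hashlib.sha256((source_id + body).encode()).hexdigest(),
    }

def verify(envelope: dict) -> bool:
    """Check that payload and claimed source still match the digest."""
    body = json.dumps(envelope["payload"], sort_keys=True)
    expected = hashlib.sha256((envelope["source"] + body).encode()).hexdigest()
    return envelope["digest"] == expected

audit_log = []  # accounting: who used which data, and for what purpose

def consume(envelope: dict, consumer: str, purpose: str) -> dict:
    """Release the payload only to verified requests, and record the use."""
    if not verify(envelope):
        raise ValueError("unauthenticated data rejected")
    audit_log.append({"digest": envelope["digest"], "consumer": consumer,
                      "purpose": purpose, "at": time.time()})
    return envelope["payload"]

env = sign({"cell": "A1", "prb_util": 0.83}, source_id="gNB-017")
sample = consume(env, consumer="energy-rApp", purpose="training")
```

A real deployment would use asymmetric signatures and a policy engine rather than a shared digest, but the shape — envelope, verification, audit trail — is what makes decisions traceable back to their input data.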

Ray Le Maistre, TelecomTV (00:23:27):
Thank you, Diego. Interesting projects there, looking at the implications of what happens when you try to automate something — which I think people must have been delving into for quite some time and are still trying to figure out. But I want to come to our panelists now and find out if anybody has a question for Diego following those opening remarks. Faiq.

Faiq Khan, Rakuten Symphony (00:23:53):
So thank you, Diego, this was very insightful. I'm going to try not to be controversial. Maybe let's take a step back: nobody is going to walk away from the importance of the data, and you mentioned getting it in the right way, but why automation? Why go on this whole journey? Other sessions are also covering this point, but it's not really coming out, so I wanted to ask for your view: what are the KPIs? What are we doing this for? Is it for the sake of automation, or is it really for the sake of measuring true KPIs around how this closed loop functions, whether automation is really changing something? Is it really about resource optimization in a meaningful KPI? Is the real end subscriber achieving the true result of this automation? I believe we all want to achieve that, there's no doubt about it. But are there really measurable KPIs that our organizations and operator environments are measured on, or does it go back to the traditional KPIs we have always been measured on?

Diego R Lopez, Telefónica (00:25:13):
Well, frankly, if you think about it, in the end what we're talking about is another kind of data. We were talking about the telemetry data that you input and the control data that you output, and then there is some kind of effectiveness data that we have to produce by comparison — another data flow. So if you allow me, I will talk about three data flows rather than two. And that's very important: measuring the impact we are achieving with automation, taking into account that deploying automation has costs in several dimensions. During the lunch break I was talking with David — I don't know where he is — about what the TM Forum is starting to do on something they call KEIs, key effectiveness indicators I believe, though I have forgotten the exact term. It is about valuing not only the performance indicator but the effectiveness — the effect you actually have. And again, it's not exactly metadata, because metadata is about something else, but it is meta-information. That's a very good point, worth considering. Yes.
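Diego's third data flow — effectiveness measured by comparison, net of what the automation itself costs — can be sketched with a trivial worked example. The numbers are invented, and the "loop overhead" term simply stands in for whatever compute and operational cost the closed loop adds:

```python
# Daily site energy (kWh), before and after an automation loop is active.
baseline = [120.0, 118.0, 121.0]   # telemetry flow 1: before
automated = [101.0, 99.0, 103.0]   # telemetry flow 1: after
loop_overhead = 4.0                # cost of running the loop itself, kWh/day

# The "effectiveness" flow is derived by comparison, not measured directly:
saving = sum(baseline) / len(baseline) - sum(automated) / len(automated)
net_effectiveness = saving - loop_overhead
# Positive net effectiveness: the automation pays for itself.
```

The distinction the sketch makes is exactly the KPI-vs-effectiveness point in the question: the raw KPI (energy) improves by the gross saving, but the honest figure of merit subtracts what automation consumed to achieve it.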

Ray Le Maistre, TelecomTV (00:26:41):
Okay, I'll hear more about that in a couple of weeks in Copenhagen at the TM Forum event; that sounds pretty interesting. Any other questions for Diego at this point?

Darrell Jordan-Smith, Wind River (00:26:53):
I'd be interested to get your perspective: you have all this data, so how do you decide what data to focus on and what not to? We talked a lot about AI — are there some natural AI tools for that, or are we not there yet with the modeling, and we have to have people look at that data?

Diego R Lopez, Telefónica (00:27:15):
No. What we are doing right now — and I'm not going to pretend that we have AI and closed loops fully autonomous everywhere — is analyzing which are the right features, more than the data. In the end it is the data flow, but what is important is that you can derive features that make sense. We have run several experiments on which features, for example, make sense for guaranteeing that there is no congestion, or for identifying congestion before it happens. Right now we are running quite interesting experiments on whether we can get early indications of peaking energy consumption derived from the traffic — whether that's influenced by the shared bandwidth you're using, the size of packets, the number of packets, et cetera. The results for the moment are still, how to say it, not very definite, but identifying this is very interesting.

(00:28:25):
Anyway, what I think is important when you think about a data infrastructure is the capacity to make it flexible enough that if, for whatever reason, you need to change the feature you're focusing on, that should be doable with a reasonable amount of effort. The data is there, and it's true — this morning somebody mentioned that the amount of data we can collect blows your mind — so making this selection is essential, and having the flexibility to make this selection dynamically is essential.
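The feature-selection idea Diego describes — ranking candidate features (packet counts, packet sizes, shared bandwidth) by how well they anticipate congestion, while keeping the candidate set swappable — can be sketched with a simple correlation ranking over synthetic telemetry. The data and feature names are entirely made up for illustration; a real pipeline would use richer measures (mutual information, model-based importance) over far more samples:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic measurement windows with a congestion label.
samples = [
    {"pkts": 900,  "mean_pkt_size": 400,  "shared_bw": 0.50, "congested": 0},
    {"pkts": 1500, "mean_pkt_size": 380,  "shared_bw": 0.72, "congested": 0},
    {"pkts": 4200, "mean_pkt_size": 1300, "shared_bw": 0.91, "congested": 1},
    {"pkts": 3900, "mean_pkt_size": 1250, "shared_bw": 0.95, "congested": 1},
    {"pkts": 1100, "mean_pkt_size": 420,  "shared_bw": 0.55, "congested": 0},
    {"pkts": 4600, "mean_pkt_size": 1400, "shared_bw": 0.97, "congested": 1},
]

features = ["pkts", "mean_pkt_size", "shared_bw"]  # candidate set is swappable
label = [s["congested"] for s in samples]
ranking = sorted(
    features,
    key=lambda f: abs(pearson([s[f] for s in samples], label)),
    reverse=True,
)
```

Because the candidate list is just data, changing the features under study is one line, which is the flexibility requirement the answer stresses.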

Ray Le Maistre, TelecomTV (00:29:00):
Anita?

Anita Döhler, NGMN Alliance (00:29:01):
Yeah. Diego, Telefónica being an international company, what is your experience when you run those projects with regard to scalability and applicability to all the different markets and operations?

Diego R Lopez, Telefónica (00:29:17):
Good question. In general, I would say that when it comes to, let's say, basic network behavior — if you're talking about preventing congestion or protecting the network against attacks and things like that — it is quite regular, and you can apply it almost anywhere. Then you enter delicate situations when we're talking about data policies, about rules on what is considered personal information or not, what can be computed or not, and that can become tricky. What we are trying to achieve — we refer to it as a data fabric — is to apply techniques for semantic attribution of data, so we can process it accordingly. It's tricky; it can be really hard. Even inside the European Union there are different rules, and if you go to the Americas it can be really complicated. So we are trying to build something that is flexible enough. I'm not saying it will automatically work everywhere, but it would allow people to adapt it.

Anita Döhler, NGMN Alliance (00:30:38):
I imagine that maybe the database is different too — and I don't want to ask about anything that might be confidential company information.

Diego R Lopez, Telefónica (00:30:48):
Yeah, no, for sure — there are things we are not going to share. For example, something that is very important for us: when some vendor tells you, "I have a marvelous model, give me your data and I will train it," even with all the NDAs signed, et cetera, that is something that sounds a little bit risky. That's why, for example, we are working heavily on synthetic environments. People tend to talk now about digital twins; I prefer to talk about synthetic environments, because it's more than a twin — or we try to make it more than a twin, a twin on steroids. We are trying to generate data sets that make sense, that can be useful for many purposes, and that are not necessarily realistic, because they go beyond real situations. That's what we're trying to do.

Sadayuki Abeta, NTT DOCOMO (00:31:45):
It might not be directly about optimization, but one of the motivations to introduce automation features is how to recover the network in a short time. So error detection and automatic recovery is, I think, one of the most important topics. But hopefully such errors don't happen very often, so gathering the information is also difficult in that case. For that case, what are you planning in order to improve things?

Diego R Lopez, Telefónica (00:32:25):
What we're trying to do is identify mechanisms for early detection — identifying patterns that do not necessarily mean anything to us yet. When the incident is already happening, when you have already noticed it and the impact has already occurred, everybody knows it, and recovering always takes time. If you are able to detect it ahead of the real impact, ahead even of the human capacity to notice — and we have the impression this makes sense — it's like having trained dogs that can smell something that is not detectable by humans but is detectable by them. We are trying to do that because we believe that being fast in detection, the first step, is at least as important as being fast in reaction. And that's what we are trying to do with all this synthetic stuff and these semantics.
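Early detection ahead of visible impact — the "trained dogs" idea — is often implemented as drift detection on smoothed telemetry. Below is a minimal sketch using an exponentially weighted moving average with a variance threshold; the parameters and the synthetic series are illustrative, not from any operator's detector:

```python
def ewma_detector(series, alpha=0.3, k=3.0, warmup=10):
    """Return the first index where a sample drifts more than k sigma from
    the smoothed baseline, ideally before the raw value looks alarming."""
    mean = series[0]
    var = 0.0
    for i, x in enumerate(series):
        if i >= warmup and var > 0 and abs(x - mean) > k * var ** 0.5:
            return i  # early-warning signal
        diff = x - mean
        mean += alpha * diff                        # smoothed baseline
        var = (1 - alpha) * (var + alpha * diff * diff)  # EWMA variance
    return None

# Flat telemetry with a slow upward creep beginning at index 20.
series = [100.0] * 20 + [100.0 + 2.5 * t for t in range(1, 15)]
t_alarm = ewma_detector(series)
# The detector fires just two samples into the creep, when the raw value
# is only 5% above normal -- well before an obvious outage threshold.
```

The design choice worth noting: the alarm compares each sample against learned statistics rather than a fixed threshold, which is what lets it fire on a subtle trend a human dashboard would not flag yet.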

Ray Le Maistre, TelecomTV (00:33:39):
So it's obviously very clear that all operators want to optimize their networks, and automation is going to help with that, but you can't really automate without the right data sets and the ability to manage them properly. To come back to Darrell's point: what sort of data lake, for want of a better term, do telcos need? How do they decide what they're going to work on? Maybe I can put that question back to you, Darrell, in terms of what you have seen with the operators you are working with.

Darrell Jordan-Smith, Wind River (00:34:21):
Sure. From a Wind River perspective, we're working with a lot of operators as they deploy RAN at the far edge of the network, and that's a disaggregated infrastructure, so there's a lot of data that comes back on deployment. But more important is the data from operations: the dynamic data coming back that allows us to automate certain functions around power utilization, which we were talking about yesterday. What we found is that you can play with the P- and C-states of a microprocessor to save power, but it's the entire system that makes the difference — you've got to control the software that's controlling the radio. In automotive terms, if you've got your foot on the petrol or gas pedal all the time, you're chewing up a lot of power. So we're looking at ways of leveraging that dynamic data coming in to provide insights and observability, to drive better network functionality, and to manage real cost benefits in what is, by definition, a more complex disaggregated environment. We're seeing that become more interesting to a lot of operators, many of whom are trying to figure out how they can scale it into other areas of the network.
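Darrell's point — CPU P/C-state tuning alone saves little compared with controlling the whole system, radio included — can be made concrete with a toy power model. All wattages here are invented round numbers for illustration, not vendor figures:

```python
def site_power(cpu_util, radio_on_fraction,
               cpu_idle_w=90.0, cpu_max_w=180.0,
               radio_w=800.0, overhead_w=250.0):
    """Toy whole-site power model for a far-edge RAN site (illustrative)."""
    cpu = cpu_idle_w + (cpu_max_w - cpu_idle_w) * cpu_util  # P/C-state effect
    radio = radio_w * radio_on_fraction                     # PA duty cycle
    return cpu + radio + overhead_w                         # watts

busy = site_power(cpu_util=0.8, radio_on_fraction=1.0)
cpu_only = site_power(cpu_util=0.2, radio_on_fraction=1.0)  # tune CPU only
system = site_power(cpu_util=0.2, radio_on_fraction=0.6)    # plus radio sleep
# cpu_only saves tens of watts; the system-level action saves hundreds,
# because the radio dominates the site's power budget.
```

Under these assumed numbers, dropping CPU utilization saves 54 W while the combined system-level action saves 374 W — which is the "entire system makes the difference" argument in miniature.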

Ray Le Maistre, TelecomTV (00:35:45):
And Vish, just to come back and clarify a point we talked about before, in terms of data structures: to what extent are you getting input from TIP members into what they're doing, which then allows you to offer something back to the community?

Vishal Mathur, Telecom Infra Project (00:36:02):
Yeah, absolutely. Just following on from what Darrell was saying: RAN workloads are one of the areas the TIP community is working on, and the best results we're seeing come from very specific xApp and rApp use cases coming out of the community — we have 17-odd prioritized ones in TIP — looking at energy saving, QoS optimization, slicing, massive MIMO beams. There are a number of applications being developed on an O-RAN architecture, and from that you're actually starting to see what RF signaling and protocols come through the stack, and how the application is onboarded and starts to have an effect on RAN performance. The only way operators feel comfortable testing that is in a controlled test environment, with test tools from the likes of VIAVI or Keysight, et cetera, that generate the use cases in real-world scenarios so you can see the dynamic results.

(00:37:11):
That's what we're sharing in the community right now. But, and I might be naive here, connecting the dots from what I heard this morning through to now: if there is a model where telcos want to be part of a value chain in which they're capitalizing on their network assets and generating service revenues, and it might be B2B as Lauren was saying this morning, then there have to be very specific use cases that generate requirements for specific compute power at the edge, and therefore data requirements as part of that. Yes, fair enough, there is data available, but for the industry to scale at least 60-70% of those requirements, so that you can actually generate some real returns, we've got to find ways to centralize that data, aggregate it, anonymize it, create structures for exposing it, and deal with market regulation and policies, et cetera. From a TIP perspective, that's where I see our member community wanting to go, but time will tell whether we actually start to see platforms manifest around that.
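For readers, the "aggregate, anonymize, expose" pipeline mentioned here can be sketched very simply. This is an invented illustration of the idea, not a TIP specification: the salt, the per-cell aggregation, and the minimum-count suppression rule are all assumptions.

```python
import hashlib

# Sketch of an "aggregate, anonymize, expose" step: roll per-subscriber usage
# up to per-cell totals, hash subscriber IDs so raw identities never leave
# the operator, and suppress cells with too few subscribers to share safely.

def anonymize_and_aggregate(records: list, salt: str = "demo-salt",
                            min_count: int = 3) -> dict:
    cells = {}
    for rec in records:
        # One-way salted hash: the raw subscriber ID is never exposed.
        sub = hashlib.sha256((salt + rec["subscriber"]).encode()).hexdigest()[:8]
        cell = cells.setdefault(rec["cell"], {"subs": set(), "mb": 0.0})
        cell["subs"].add(sub)
        cell["mb"] += rec["mb"]
    # Publish only cells with enough distinct subscribers (crude k-anonymity).
    return {c: {"subscribers": len(v["subs"]), "mb": v["mb"]}
            for c, v in cells.items() if len(v["subs"]) >= min_count}

if __name__ == "__main__":
    recs = [{"subscriber": s, "cell": "cell-1", "mb": 10.0} for s in "abcd"]
    recs.append({"subscriber": "e", "cell": "cell-2", "mb": 5.0})
    print(anonymize_and_aggregate(recs))
```

Real structures for data exposure would also have to handle the regulatory and policy questions raised in the discussion; this only shows the mechanical core.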

Ray Le Maistre, TelecomTV (00:38:27):
Okay. And Faiq, from the Rakuten perspective?

Faiq Khan, Rakuten Symphony (00:38:30):
Can I build on that before I lose my train of thought? There's something I want to add on top of what Darrell said. Are we saying that for telcos without Open RAN, there is a real challenge in achieving the openness needed for true automation? I ask this because, of course, I'm a little biased and have the experience of seeing the benefits of this at Rakuten in Japan. Look at the energy saving use case. The most common industry practice is that I can get it at the server level in a data center, or at the COTS level. But what I truly learned in the Rakuten Japan Open RAN environment is that it is about the per-user level and application usage, and that was only possible because we had the data in a properly cloud-native, distributed environment, and because of the openness. I believe this is a fundamental challenge: we cannot achieve the true automation use cases unless we have that openness. So while I share this example, I want to ask Diego and the others: how are we intending to solve this, given the slow industry adoption of that openness? I know it's a big topic, we want to go there, we don't want to go there, it has its pros and cons, and I'm not getting into that debate, but the openness remains a challenge to getting the true benefit of these use cases. I don't know if Diego or anybody else has any thoughts.

Diego R Lopez, Telefónica (00:40:15):
If I may: in the same way that open source doesn't mean free software, I would say that talking about openness in data exchange, or even knowledge exchange, doesn't mean you provide all the data you can generate or all the data you are aware of. It is true that we need models or ways of sharing data. I like very much the original idea of the internet, not what we call the internet right now, but what it was conceived as: cooperating autonomous systems. The principles are built around this good old protocol called BGP: I tell you what you can find behind me if you connect to me, and the cost, in whatever terms, hops, bandwidth, whatever, of reaching there.

(00:41:15):
It is the same here. I don't tell you what I'm consuming or how congested I am; I'm not exposing my data, but I'm telling you what would allow us to collaborate. I think that model should be applicable. We have made some experiments, for different reasons, more to do with security and how much we were exposing. There is a protocol that has been standardized in the IETF called ALTO, which deals with exposing data about your topology and some measurements regarding your topology. I think doing it that way is probably the way to go. We have to define it, and we have to guarantee provenance and integrity and things like that, but I think we have the bricks to build it.
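To make Diego's point concrete for readers: ALTO (RFC 7285) lets a network publish abstract costs between regions without exposing raw telemetry. The sketch below loosely imitates the shape of an ALTO cost-map response; the PID names and the cost scaling are invented for illustration and are not taken from the standard.

```python
import json

# Sketch of the "share costs, not data" idea: turn private hop counts between
# network regions ("PIDs") into a shareable abstract cost map, loosely shaped
# like an ALTO (RFC 7285) cost-map response. PID names and scaling are invented.

def build_cost_map(internal_hops: dict) -> str:
    """Publish coarse routing costs derived from private measurements."""
    cost_map = {}
    for (src, dst), hops in internal_hops.items():
        # Expose only a capped, coarse-grained cost, never the raw metric.
        cost_map.setdefault(src, {})[dst] = min(hops * 10, 100)
    response = {
        "meta": {"cost-type": {"cost-mode": "numerical",
                               "cost-metric": "routingcost"}},
        "cost-map": cost_map,
    }
    return json.dumps(response)

if __name__ == "__main__":
    private = {("pid-madrid", "pid-berlin"): 4, ("pid-madrid", "pid-lisbon"): 2}
    print(build_cost_map(private))
```

A peer receiving this JSON can choose good paths without ever seeing the operator's congestion or consumption data, which is exactly the collaboration model described.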

Ray Le Maistre, TelecomTV (00:42:10):
We're going to come to energy efficiency specifically a little later, Anita, and I know that's something the NGMN looks at a lot. But the NGMN also does a lot of work around cloud native, and obviously that comes back to automation and data use. Is this something that crops up within the NGMN? Are members asking the NGMN to look at models around which data to collect, from where, and how to use it? Or is that not really within its domain, as it were?

Anita Döhler, NGMN Alliance (00:42:50):
Yeah, good question. We're not really looking into the usage of data. We do have a network automation project up and running, focused on influencing upcoming 3GPP releases. Where we currently continue our work is on achieving cloud nativeness, and there is a strong belief that without being truly cloud native it will be hard to automate, probably hard to impossible. So what we are currently discussing is how we can cooperate with other organizations, like TIP or TM Forum, on providing requirements for validation centers and for assessment methodologies, for instance for autonomous networks, with regards to the cloud nativeness of platforms. That's something we are really keen to continue to drive in the industry, to make it a reality.

Ray Le Maistre, TelecomTV (00:43:53):
What we're really seeing over these two days is that nearly everything we talk about interlinks with everything else; you can't really do one without the other, and data is right at the heart of this, as is security. I want to come to you as well to find out what NTT DOCOMO is doing, but in the context of a slightly broader talking point around the impact of network disaggregation, and whether it actually makes optimization harder. We hear continuously from various parts of the industry that disaggregation is complex, that it isn't improving energy efficiency, that there aren't a lot of gains; from its supporters, of course, we hear that all of these things are happening. So from DOCOMO's experience in its radio access network, can optimization be improved through disaggregation? And I also want your thoughts on the data lake aspect: does DOCOMO have a clear approach to what data to collect and how to manage it?

Sadayuki Abeta, NTT DOCOMO (00:45:10):
Okay, so first, back to 4G, because 4G already introduced some of this functionality; I think most operators have already introduced some of these features. From 4G we have actually deployed a multi-vendor network, meaning we use different vendors' equipment in the same geographic area, gather information from each eNodeB, and do the optimization. So even though it is a multi-vendor network, we achieve optimization such as load balancing, and those features are defined in 3GPP. With standardized interfaces and standardized data, we can optimize the network. As for 5G, and especially Open RAN, 3GPP defines the features, and the O-RAN Alliance defines the use cases and also tries to define what data is needed to support them. With that data, even in a disaggregated network using different vendors' equipment, we can achieve optimization. But especially for energy saving, we may need more information about the radio unit itself. That part is sometimes not defined: we can control the policy, but we don't know the details of the sleep mode for each RU. If those details were provided, we could optimize further. If a vendor provides just the standardized interface, we can't get one hundred percent of the energy optimization, but we try to make it work as an operator, even with the different vendors' RUs we use.
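The multi-vendor load balancing described here relies only on standardized, vendor-neutral KPIs. As an illustration for readers, a toy decision rule might look like the sketch below; the 20-point trigger threshold and the cell names are invented, not DOCOMO parameters.

```python
# Toy sketch of multi-vendor load balancing: using only a standardized
# per-cell load KPI (no vendor-proprietary data), recommend shifting traffic
# from the most loaded cell to the least loaded one when the imbalance
# exceeds a trigger. The 20-point threshold is an invented illustration.

def rebalance(cell_load: dict, trigger: float = 20.0) -> dict:
    """Return a handover recommendation if load imbalance exceeds `trigger`."""
    busiest = max(cell_load, key=cell_load.get)
    quietest = min(cell_load, key=cell_load.get)
    if cell_load[busiest] - cell_load[quietest] > trigger:
        return {"from": busiest, "to": quietest}
    return {}

if __name__ == "__main__":
    loads = {"vendorA-cell1": 85.0, "vendorB-cell2": 40.0, "vendorA-cell3": 55.0}
    print(rebalance(loads))
```

Because the input is a standardized KPI, the same rule works regardless of which vendor built each cell, which is the point being made about standardized interfaces.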

Ray Le Maistre, TelecomTV (00:47:20):
Okay. Shame you weren't here first thing yesterday morning, because Yago Tenorio was talking about some of the data they're getting back from their RFQ. Unfortunately he can't tell us the details, because it's an RFQ, but he was hinting strongly at some interesting data coming back from that process. Faiq, in terms of what's going on with Rakuten Mobile, what's the experience there around optimization and automation? What lessons or insights can Rakuten offer other operators?

Faiq Khan, Rakuten Symphony (00:48:03):
I think the Rakuten story in itself has been revolutionary, in markets like Japan and Europe, with Rakuten Mobile in Japan and with 1&1. I would say the biggest message is to take the risk and accept the possible consequences that come with it. I believe that in today's operator environment, the CSP and DSP environment, there is a very risk-averse mindset, which is the biggest limitation on the industry moving as fast as we would all like it to move. The cornerstone has been that Rakuten was brave enough to share those experiences and bring that technology change. Again, we were in the privileged situation of being able to build a fully cloud-native environment, which helped in the journey, along with the openness we had. That has been the cornerstone of bringing the true automation and AI use cases that some environments struggle to achieve.

(00:49:21):
I'll build on the same point about energy saving. What was very interesting was the percentage energy savings we were able to achieve because of the infrastructure we built and the openness of the data, by going down to the subscriber level and the application level to reduce the energy on pods, rather than the traditional claims of other environments. I appreciate what Diego said about the new protocols and standards being worked on, but I do find it challenging that with some of the proprietary infrastructure vendors we will always have this struggle, because the openness is going to remain a challenge, and the automation results are not going to be what is truly beneficial to the true KPI needs of the industry.

Diego R Lopez, Telefónica (00:50:21):
Note that I was talking about cooperation among operators, because we are much more used to cooperating with our competitors than with our vendors.

Faiq Khan, Rakuten Symphony (00:50:33):
Fair enough, fair enough. I think that's the biggest message: there's a lack of acceptance of change, and of the possible learnings or consequences around it, and I think the industry needs to be braver about it.

Vishal Mathur, Telecom Infra Project (00:50:48):
Can I just try to build on this? It's a multifaceted issue, but look: think about the challenge that the CTO of a national operator or a tier-two operator is facing. That CTO is sitting there being told by group, or by the CEO: one, you've got to think about disaggregation and create some choice in your RAN network. Two, I want you to drive forward some level of automation, to get some real value out of your network and make sure it's optimized for the future. Three, everyone's talking about AI; you need to be user-specific, and start to generate from and monetize your network. Three different challenges, which are massive for the industry to scale out. So if anything, let's take the positives: there is some choice in the market.

(00:51:44):
You've got Rakuten Symphony, you've got Wind River, Dell, Samsung's stack, you've got Mavenir. They are Open RAN friendly, and they're driving forward cloud-native, automated scenarios. Should that CTO go and swap out a legacy network provider and try this out? Maybe not on my public or major network, but I'll give it a discreet look in a private network play, or something that's off the major network; let me learn about it. Second: can I actually start to drive forward some level of AI and automation? Actually yes, there's choice now in the market. There are some open RIC players, and there are some AI xApps out there that are probably more user-centric than rApps. There are still a few more standards to develop, but at least I can start to build a bit of knowledge: understand what data, protocols, and signals are coming through, and understand the stack and how it integrates.

(00:52:45):
But I'll do that in my lab, in a controlled environment. Actually, my incumbent vendor might be creating these itself; it's not going to share its data with other players, but we might create a closed ecosystem there. So you're going to start to see that manifest, moving towards a distributed compute strategy, putting an edge cloud strategy in play. That's where the CTO wants to be going. Are they ready to do that yet? No, there's still quite a bit to be worked through in the industry. It's a great challenge to have, but I think we've got to go step by step; that CTO needs to feel comfortable, and that's why TIP is there, trying to get reference architectures and blueprint models that are working. Well done to Yago for getting a deal with i2CAT and getting a multi-vendor management system in play. Brilliant. We need to see more of that, and we need to stop just talking about the science of this and actually get on with it and show some reference blueprints.

Ray Le Maistre, TelecomTV (00:53:52):
And before I come to Darrell, I just want to very quickly ask Vish: you mentioned the RAN Intelligent Controller, the RIC, and the xApps. Some of those tests and trials have been going on for about two years with some of the operators, and things still don't seem to be coming out into the market, despite what sound like quite significant optimization gains from some of those apps. What's holding things up there? How much can you give away?

Vishal Mathur, Telecom Infra Project (00:54:27):
I think moving from trial to actual commercial deployment and scale does need to be a little more aggressive. I agree with Faiq; I think some operators could probably take a step forward.

Ray Le Maistre, TelecomTV (00:54:42):
That risk aversion, you think?

Faiq Khan, Rakuten Symphony (00:54:45):
I think the first panel discussion yesterday was a testament to that, what we saw happening at Vodafone. But as we said, out of those three things, the operator environment needs to take a leap of faith, or be aggressive, on at least two. The challenge is that if you keep playing safe, the true results will keep getting delayed. I know some people will not agree with it, but that's how I see it.

Ray Le Maistre, TelecomTV (00:55:15):
Well, I can feel Diego swiveling in his chair here.

Diego R Lopez, Telefónica (00:55:19):
What I'm thinking is that I would not say this industry suffers more from risk aversion than any other industry. It is a big industry with many, many customers and many, many things that are critical. We didn't have that risk aversion when we deployed the first mobile networks. In fact, when I joined Telefónica 10 years ago, when the mobile network architecture was already established, to me as an outsider the mobile network was not especially well architected, if you think about it. Why? Because it was necessary to build the whole thing, and the telcos were really, really bold in building it, making some mistakes for sure, but they made it. Right now you have to weigh the risk, because you have many important assets to keep and a very high price to pay if you make a big mistake. So before we judge ourselves too quickly, I always remember what Niels used to say: this industry has changed the world several times, don't forget that. Well, yes, when we talk about trying to invent a new network that is not mobile, not fixed, whatever, then let's see if we have the risk aversion. Let's go greenfield.

Ray Le Maistre, TelecomTV (00:56:48):
Darrell, I know you wanted to come in there, because you're working with some CTOs who are starting to take that leap of faith, right?

Darrell Jordan-Smith, Wind River (00:56:57):
I was just lamenting; I'm going to date myself here. I joined the telecommunications industry after being in the Royal Navy, 30 years ago, and the reason I did it, I decided to go work for AT&T, is because at that time AT&T invented just about everything. The innovation was there, the skills were there, and the people I went to school or university with wanted to be part of it. What worries me now is the talent in our businesses: have we outsourced so much to the traditional radio access and infrastructure vendors, so that they provide a complete solution and we don't have the skills to innovate and move as quickly? That's a generalization; some operators do and others don't. And now what I see, to your point, is CTOs saying, hey, we need to get that back. We need to innovate a little more, we need to do disaggregation, move ourselves into uncomfortable spaces, and learn again to fail quickly, as we talked about yesterday, in order to do things. So we are seeing CTOs and operators that typically don't take those sorts of risks wanting to take them, because they see that the opportunity to innovate will inherently deliver competitive advantage and optionality in their own businesses, and they're trying to do things differently. That's my perspective; that's what I've noticed and seen.

Ray Le Maistre, TelecomTV (00:58:25):
Okay, now time is getting on a bit, and I definitely want to come to the energy efficiency point, which has already cropped up a few times. But I also want to see if we've got any questions from the audience. If you do have a question, now is the time to put your hand up; we have people with microphones. Yes, we have one over here. If you could state your name, your company, and who your question is for.

Mojdeh Amani, BT (00:58:51):
Hello. My name is Mo; I work around data, AI and automation, and I'm from BT. My question is for Diego. You mentioned your early detection of incidents and events. Where are you at the moment in your journey? Because we are dealing with data swamps more than data lakes here and there, and we have similar vendors, similar platforms, similar tools. Do you have any experience you can share about the journey you're going through to sort out your data before going into any automation process?

Diego R Lopez, Telefónica (00:59:31):
Sorry, with early detection, what we are trying to do is run experiments to generate datasets that can be used to train systems to identify early symptoms before a human can detect them, and before the actual impact happens. How do we do it? As I said, we run synthetic environments in which we replicate conditions that normally don't happen, and that, when they do happen in the real network, bring us real problems. A typical example is a massive distributed denial-of-service attack. When this is happening in your network, you don't have time even to collect data; you simply have to respond to it. What we are trying to do is train systems to detect those early symptoms and start applying countermeasures, whatever they are.

(01:00:43):
Typically when we think about anomalies we always think about security attacks, et cetera, but it can be something else. Some time ago I read that when the Gangnam Style video became so famous, there was an overload in many places, because people were asking like crazy for Gangnam Style and the CDNs were not responding fast enough. That's an incident, not an attack. Well, maybe it's an attack on good taste, but that's another story. This is what we're trying to do: replicate conditions to generate datasets that are valuable. They are not fully realistic, because they have not been generated by the real network, but they can help us identify these things.
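For readers, the synthetic-dataset idea Diego describes can be sketched in a few lines: generate normal traffic, then inject a rare condition (here, a flash-crowd or DDoS-like surge) so a detector can be trained before the event ever happens in production. The rates, ramp shape, and timings below are invented for illustration; this is not Telefónica's method.

```python
import random

# Sketch of synthetic incident data: steady request rates with noise,
# then an injected sustained surge starting at `anomaly_at`. A detector
# trained on this sees the rare condition without waiting for a real one.

def synth_traffic(n: int = 200, anomaly_at: int = 150, seed: int = 7) -> list:
    """Generate request rates: normal noise, then a sudden sustained surge."""
    rng = random.Random(seed)
    series = []
    for t in range(n):
        base = 100 + rng.gauss(0, 5)        # normal load with noise
        if t >= anomaly_at:
            base += (t - anomaly_at) * 20   # injected surge ramp
        series.append(base)
    return series

if __name__ == "__main__":
    data = synth_traffic()
    print("max before:", max(data[:150]), "max after:", max(data[150:]))
```

As the discussion notes, such data is not fully realistic, but it gives a labeled example of a condition that the live network cannot safely produce on demand.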

Mojdeh Amani, BT (01:01:35):
You are experimenting with event stressing your network, right?

Diego R Lopez, Telefónica (01:01:41):
Unknown

Mojdeh Amani, BT (01:01:42):
Or unknown? Are the events known, or are they unknown?

Diego R Lopez, Telefónica (01:01:45):
Well, the idea is that if you simply want to detect an anomaly, for sure you train the whole thing on normal behavior, and whatever goes outside that is an anomaly. We try to go beyond that and generate situations that we know are going to happen, and then try to identify things that, even for very experienced operators, are not part of their experience of the symptoms, but are a symptom. We are trying to identify that.
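The first approach Diego mentions, training on normal behavior and flagging whatever falls outside it, can be illustrated with the simplest possible baseline detector. The three-sigma threshold below is a common convention chosen for the example, not Telefónica's actual method.

```python
import statistics

# Minimal "train on normal, flag the rest" detector: learn mean and spread
# from anomaly-free observations, then flag points more than k standard
# deviations away. A deliberately simple stand-in for real models.

def fit_baseline(normal: list) -> tuple:
    """Learn mean and spread from anomaly-free observations."""
    return statistics.mean(normal), statistics.pstdev(normal)

def is_anomaly(x: float, baseline: tuple, k: float = 3.0) -> bool:
    mean, std = baseline
    return abs(x - mean) > k * std

if __name__ == "__main__":
    baseline = fit_baseline([100, 102, 98, 101, 99, 100, 103, 97])
    print(is_anomaly(100, baseline), is_anomaly(160, baseline))
```

The limitation Diego points at is visible here: this detector can only say "not normal"; recognizing the early symptoms of a specific known-to-come situation requires the synthetic training data discussed above.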

Anita Döhler, NGMN Alliance (01:02:22):
Right. Thank you.

Ray Le Maistre, TelecomTV (01:02:25):
Okay, I'm going to move on to the energy efficiency point, which has cropped up a few times. Essentially the question is: can networks be automated for optimization and yet be more energy efficient? Because that seems like it could be a pretty tough set of things to put together. Anita, I want to start with you, because the NGMN has been looking at sustainable network development and trying to put some of these KPIs in place, which of course are very important. How can energy efficiency be improved at the same time as networks are automated and optimized?

Anita Döhler, NGMN Alliance (01:03:13):
Personally I believe yes, but we need to look into the details. What we are experiencing in our projects, for instance where different operators work together with different vendors, is that it's very hard to obtain reliable data points on how much efficiency gain and how much energy saving can be achieved. And there is still this big question: how much energy does the artificial intelligence itself consume? If we use different models for network optimization, which should of course also contribute to the target of zero watt at zero load, the question is whether we would just move the energy consumption to another place. So I think we really need to take a holistic approach here, to assess where we actually have energy efficiency gains and where energy consumption, or power consumption in general, is going down in total.

(01:04:22):
So, looking at the overall picture: when it comes to KPIs, our recommendation was to always link such KPIs with quality of experience, because I think that's something which is also holding operators back from launching such programs. Some are doing this very aggressively already, but we also heard quite frequently that there is a danger that the user experience is damaged by some approaches. Therefore I think quality of experience, together with a holistic approach to energy consumption and energy efficiency, is really needed, and probably also some form of KPI and source data normalization, so that we can really rely on statements from different vendors and operators about how much energy consumption improvement can be achieved. It's very hard to assess the correctness of such data at the moment.
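The recommendation to link energy KPIs with quality of experience can be made concrete for readers with a small sketch: report an efficiency figure (here, bits per joule) only alongside a QoE gate, so a "saving" that pushes users below a quality floor does not count as a win. The 4.0 floor (imagining a 1-5 MOS-style score) and the metric choice are invented examples, not NGMN definitions.

```python
# Sketch of an energy-efficiency KPI gated on quality of experience:
# a saving only counts as valid if the QoE score stays above a floor.
# The bits-per-joule metric and the 4.0 floor are illustrative assumptions.

def efficiency_kpi(bits_delivered: float, joules: float, qoe_score: float,
                   qoe_floor: float = 4.0) -> dict:
    kpi = {"bits_per_joule": bits_delivered / joules, "qoe": qoe_score}
    kpi["valid_saving"] = qoe_score >= qoe_floor
    return kpi

if __name__ == "__main__":
    print(efficiency_kpi(1e9, 500, qoe_score=4.3))
    # Nominally "more efficient", but it fails the QoE gate:
    print(efficiency_kpi(1e9, 400, qoe_score=3.2))
```

Normalizing such a KPI across vendors and operators, as suggested in the discussion, would additionally require agreeing on how both the energy and the QoE figures are measured.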

Ray Le Maistre, TelecomTV (01:05:37):
And also which one wins out; you have to make that decision, don't you? Exactly. Sadayuki, you mentioned earlier some of the potential energy efficiency gains you were seeing in your network. Can you talk a little more about that, and how you see it progressing as data volumes grow? Will the efficiencies and the automation be able to offset that data growth against the power consumption required?

Sadayuki Abeta, NTT DOCOMO (01:06:11):
Yes. The traffic changes dynamically, especially between morning and daytime, so we measure traffic and control the radio units accordingly. That is already operating in both 4G and 5G, and we see the energy gains from introducing those features in the commercial network. Setting KPIs is a little bit difficult, so it is not easy to estimate a target performance, because it depends highly on the traffic, on how much resource we actually have, and also on the RU type: the older RU types have sleep modes that are not so efficient, honestly, while the latest ones have more efficient sleep modes, and they help. We know the characteristics of each area, so we control accordingly, but basically we do a simple operation based on the time and the area and the traffic, and at this point we haven't achieved full optimization.

(01:07:31):
So resource optimization is especially in the base station part, but we have also introduced resource pooling gains in the core servers according to the traffic. Of course we share the resource between the different cells, so some cells have high traffic and some have low traffic, and in total it averages out, but this also changes dynamically with the time of day. That is already introduced commercially. I think that fully optimized energy saving is still a challenge; whether 30% or 40% is enough, I can't answer. But we are introducing this functionality and looking at how we can optimize further with additional algorithms. That is where our network is today, and I think it works even in a multi-vendor network.
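The traffic-driven radio-unit control described here can be sketched as a simple schedule: measure load per period and pick a sleep depth off-peak. The thresholds and the two sleep depths below are invented for illustration; as noted in the discussion, real RU sleep behavior is vendor-specific, which is exactly the information gap raised.

```python
# Sketch of traffic-driven RU control: map a measured load (0.0-1.0) to a
# radio-unit state per period. Thresholds and sleep depths are invented;
# actual sleep modes depend on the RU vendor.

def ru_state(hourly_load: float) -> str:
    """Pick a radio-unit state from the measured traffic load."""
    if hourly_load < 0.05:
        return "deep_sleep"    # e.g. overnight, carrier shut down
    if hourly_load < 0.3:
        return "light_sleep"   # shallow, fast-wake micro-sleep
    return "active"

def schedule(day_profile: list) -> list:
    """Turn a list of per-period loads into a per-period RU state plan."""
    return [ru_state(load) for load in day_profile]

if __name__ == "__main__":
    profile = [0.02, 0.02, 0.1, 0.5, 0.8, 0.6, 0.2]
    print(schedule(profile))
```

The point about standardization is visible in the gap: without knowing each vendor's actual sleep-mode power curves, an operator can set this policy but cannot predict the exact saving.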

Ray Le Maistre, TelecomTV (01:08:45):
Okay. Faiq, I want to come to you as well. I know Rakuten's been looking into this pretty heavily, and of course you want to be able to take some of that experience international as well.

Faiq Khan, Rakuten Symphony (01:08:57):
Yes, I think I've beaten on this point quite a bit already, so I don't want to keep repeating it, but we take a lot of pride in the results we have achieved, and we are taking them out into the industry. I'm very happy that we talk about it in forums like this, and we'll continue to do so. We believe this is where we can add to the point Vish was making earlier: not only from an Open RAN perspective, but in other ways of learning, where via the Rakuten Symphony platforms we educate the industry, and we're happy to take those positive results into the market and share the experiences.

Ray Le Maistre, TelecomTV (01:09:36):
And Diego, I'm going to come to you for the final word here. This is something Telefónica looks at and reports on, in its data and stats, in terms of how it's doing on its sustainability journey. How much is the energy efficiency consideration built into the work you are doing on optimization?

Diego R Lopez, Telefónica (01:10:01):
We are trying to bring it in; it is one of the key goals. I wouldn't say we are fully centered on this; we take other considerations into account, and in general sustainability in its widest sense: not only the consumption of energy at a certain point in time, but also the reusability of the deployments we make, and the fact that we are trying to contribute, in a holistic view, to reducing our footprint in terms of resource consumption. So this is very important, and as we were saying about taking pride, that is how seriously we are taking this.

Ray Le Maistre, TelecomTV (01:10:51):
Okay, fantastic. We are almost out of time for this session, so we do need to end the discussion there. But first, let's take a look at the poll results so far. A reminder of the question we asked: what will have the greatest impact on improving network operational efficiency over the next five years? Let's see what came out. Transitioning to a cloud-native architecture got the biggest score, with 49%; deploying advanced AI and ML solutions, 40%. So cloud native ahead of AI and ML, which is very interesting. And leveraging SDN for dynamic resource allocation, 11%. That's pretty interesting; maybe because we've talked so much about the impact of cloud native here over the past few days, everybody's thinking, yes, cloud native. Now this poll, as with the others, is still open, so you can continue to vote and have an input there.

(01:11:54):
But that is the end of session seven. There is one more to go. Guy and I have been talking over the past few days about how, at this event, we always want to bring it back to the customers, and the final session is focused on exactly that. We'll be back at, goodness, 3:30, in 16 minutes' time, focusing on the customer. So it's a short break, just enough time to grab a coffee, your last chance to get the best coffee you're going to have this week. For our online audience, we're about to start another Extra Shot program, so don't miss it; please stay with us. For now, thanks to Diego, our co-host, and to our panelists. Thank you.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Panel Discussion

The drive for further efficiencies in the network continues and the focus now is firmly on the role that increased automation will play. Advancements in AI and ML are creating new opportunities for enhanced and deeper levels of network automation. But further developments are still needed, such as a unified approach to data collection and management along with improvements in energy efficiency. Meanwhile, adoption of disaggregation and open networking risks adding complexity. However, successful automation can lead to a more productive development environment, which will leverage cost savings into new digital service revenue generation.

Broadcast live 6 June 2024

Featuring:

CO-HOST

Diego R Lopez

Senior Technology Expert, Telefónica

Anita Döhler

CEO, NGMN Alliance

Darrell Jordan-Smith

Chief Revenue Officer, Wind River

Faiq Khan

SVP & Global Head - OSS Sales, Rakuten Symphony

Sadayuki Abeta

Chief Open RAN Strategist, NTT DOCOMO, Inc and CTO, OREX SAI

Vishal Mathur

Global Head of Engagement, Telecom Infra Project (TIP)