Next-Gen Digital Infra Summit – Q&A Day 2

Guy Daniels, TelecomTV (00:24):
Hello, you are watching the Next-Gen Digital Infra Summit, part of our year-round DSP Leaders coverage, and it's time now for our live Q&A show. I'm Guy Daniels and this is the second of two Q&A shows, so it's your final chance to ask questions of our expert guests. As part of today's summit, we featured a panel discussion that looked into new cloud architectures, private clouds and data lakes. We covered implementation best practice, architectural blueprints and data management. And if you missed the panel, don't worry, because we will rebroadcast it straight after this live Q&A show, or you can watch it anytime on demand. If you haven't yet sent in a question related to the panel, then please do so now using the Q&A form on the website. Well, I'm pleased to say that joining me live on the program today are Diego Lopez, who is senior technology expert at Telefónica and an ETSI Fellow; Mark Gibson, VP of Software at ConnectiviTree;

(01:38):
Dario Sabella, chairman of ETSI MEC and VP at xFlow Research; and Joan Triay, deputy director and network architect at DOCOMO Euro-Labs and Rapporteur for ETSI ISG NFV. Hello everyone. It's really good to see you all. Thanks for returning from the panel discussion. Let's get straight to our first audience question, and I'm going to read this one out to you: how are you maintaining cloud-native pipelines across dispersed MEC nodes and central data center clusters? So a real infrastructure question to start us with. Joan, are you able to comment first on this viewer's question?

Joan Triay, Docomo Eurolabs (02:28):
Yeah, sure. Thank you. Well, I will maybe not focus too much on the example of the MEC nodes, but I'll explain a case that we are applying in a very similar scenario, which is the virtualization of the RAN. In this case we are also talking about a highly distributed environment and, of course, when it comes to such a context it's very important that operations run efficiently. Basically, the premise that we are following is to make a clear delineation of where the different operations are taking place and from where they are being controlled. So if I go deeper into the example of the vRAN: in our case, for example, we are distributing our vRAN across different regions and locations, so we have a layered or hierarchical distribution between cloud regions and different sites, regional, central and also edge sites.

(03:36):
And basically one of the premises that we are applying is centralization of operations. This is quite important when it comes to a distributed environment, and one example is how we deploy and manage the lifecycle of the clusters. All clusters that are being deployed and operated, be it for actual network function workloads or be it clusters providing additional functionalities like, for example, platform or O&M functionalities, are managed through the same systems and using the same procedures. So that's one example of applying centralization of the operations. Another one is related to what I also discussed in the recorded panel, the aspect of observability. In that case the premise is basically to identify very clearly the points where data is being collected, where data is being processed and where data is being visualized.

(05:02):
So that also determines a clear delineation of the different roles assigned to the different elements deployed in the system. Another important aspect, I believe, is to try to minimize variability as much as possible. Managing at scale means it's quite important to identify very clearly the reusability of different elements, and in this area the use of declarative management, for example, is quite important, because it enables the operator to better replicate the things that are being deployed on the system. So basically these are premises that make operations work at scale in a very distributed manner. And one last remark I would like to make is that, of course, these are just the initial steps that are happening nowadays in the operators' domain, and they could be further improved in the future by adding additional functionalities like CI/CD pipelines, not just for infrastructure management but also for many other aspects that the operator is managing. So, in a sense, replicability and a very clear delineation of the responsibilities and roles of managing the different operations are very important.
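
To illustrate the declarative, centralized reconciliation Joan describes, here is a minimal sketch in Python: the operator declares the desired state of clusters across central, regional and edge sites, and a single control loop converges each site towards it. The names and fields (SiteCluster, observe, apply) are hypothetical and not tied to any specific operator tooling.

```python
# Illustrative sketch only: a toy declarative reconciliation loop in the spirit of
# centralized cluster lifecycle management. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class SiteCluster:
    site: str        # "central-1", "regional-3", "edge-42", ...
    role: str        # "workload", "platform", "oam"
    version: str     # cluster software version to converge to
    replicas: int    # number of worker nodes

# The operator declares what should exist; the same template is reused per site.
desired_state = [
    SiteCluster(site="central-1", role="platform", version="1.29", replicas=9),
    SiteCluster(site="regional-3", role="workload", version="1.29", replicas=5),
    SiteCluster(site="edge-42", role="workload", version="1.29", replicas=2),
]

def observe(site: str) -> dict:
    """Placeholder: query the site's management endpoint for its current state."""
    return {"version": "1.28", "replicas": 2}

def apply(cluster: SiteCluster, diff: dict) -> None:
    """Placeholder: push the desired spec to the site (real systems would do this via GitOps/CI-CD)."""
    print(f"[{cluster.site}/{cluster.role}] reconciling {diff}")

def reconcile() -> None:
    # One central control loop, the same procedure for every cluster regardless of location.
    for cluster in desired_state:
        observed = observe(cluster.site)
        wanted = {"version": cluster.version, "replicas": cluster.replicas}
        diff = {key: value for key, value in wanted.items() if observed.get(key) != value}
        if diff:
            apply(cluster, diff)

if __name__ == "__main__":
    reconcile()
```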

Guy Daniels, TelecomTV (06:43):
Great. Joan, thank you very much for that. And we'll go straight across, if we can, to Dario, because we'd like to pick up on maintaining cloud-native pipelines especially across dispersed MEC nodes. So what's your take?

Dario Sabella, ETSI (06:56):
Yeah, thank you so much. I can comment on the MEC side, of course; as chairman of ETSI MEC I have something to say on that, but I believe the question can also take a more general meaning. From the ETSI MEC point of view, the MEC acronym stands for multi-access edge computing, and this is, in ETSI, the standardized set of architecture and APIs. In this context, I believe we can take a more general meaning and consider, in any case, the definition of a MEC node, which is, by the way, some edge node which can be closer to the end user than the remote server. So the natural question would be, okay, but where exactly is the edge, how deep is the edge, how close to the end user? From my standardization point of view, I can say I totally agree with the previous speaker, and with what we said in the earlier session, that the edge node can be everywhere, at every level, and where exactly you deploy the node depends on, let's say, a business choice, a convenience.

(08:21):
So, talking about having this kind of system, we need to talk more about a hybrid system, because there is a sort of edge-to-cloud continuum. Depending on the location, the node can be co-located with the RAN base station, or with the central processing and the network core at the regional site, and so on. But in any case, wherever they are deployed, they are probably collecting data locally from local sources, and this is actually the value. At the end of the day it makes sense to process data where the data is generated, locally, and perform this kind of lightweight processing there. If the data also needs to be processed further in the central clusters, then of course it may need to be transferred and synced up with the data center when the network connectivity is sufficiently available. So the point is, of course, how to ensure that this is done properly when there are many MEC nodes and data center clusters.

(09:41):
Of course, there are multiple problems to take into consideration, and also possibly the fact that we have different technologies. So my point is that in this kind of hybrid system, of course, this has to be orchestrated, but in the end, allowing the communication across the diverse nodes in this hybrid cloud is made possible by the consumption of the data via APIs. For me, the APIs are the key, because an API is a sort of contract between two software entities and allows you to take control of which data is transferred, who is allowed to consume this information, and so on and so forth. So we have made a huge effort in ETSI MEC to produce MEC service APIs, and not only the ETSI standards; outside of that there is also huge work, for example, from the CAMARA project in the Linux Foundation. And this is a complementary effort.

(10:47):
I'm pleased to say that we have recently published a white paper between ETSI, CAMARA and TM Forum trying to provide some guidelines for API consumption and helping developers on that. So the key point is using APIs from various sources, and this will become even more dramatically important in the agentic AI era, when not only humans will consume APIs but also agents. So, long story short, yes: to maintain this kind of hybrid system, I believe communication across the diverse nodes, at the various levels of depth from the edge to the remote cloud, needs data transfer and consumption via APIs, and this can also be quite useful to keep control of the data governance and access policies.
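
To illustrate the "API as contract" point Dario makes, here is a minimal sketch of an edge node exposing locally collected data through a call that checks an access policy before returning anything. The data categories, consumer identities and policy table are invented for the example; this is not an ETSI MEC or CAMARA API.

```python
# Illustrative sketch only: an edge API that controls which data is transferred
# and who is allowed to consume it. All names and values are hypothetical.
from dataclasses import dataclass

ACCESS_POLICY = {
    # consumer identity -> data categories it may read
    "central-analytics": {"radio-kpis", "traffic-counters"},
    "third-party-app-17": {"traffic-counters"},
}

LOCAL_STORE = {
    "radio-kpis": [{"cell": "A1", "prb_util": 0.62}],
    "traffic-counters": [{"cell": "A1", "bytes_ul": 10_482_113}],
}

@dataclass
class Request:
    consumer: str
    category: str

def get_edge_data(req: Request) -> dict:
    """Return locally collected data only if the consumer is entitled to that category."""
    allowed = ACCESS_POLICY.get(req.consumer, set())
    if req.category not in allowed:
        return {"status": 403, "error": "consumer not entitled to this data category"}
    return {"status": 200, "data": LOCAL_STORE.get(req.category, [])}

if __name__ == "__main__":
    print(get_edge_data(Request(consumer="central-analytics", category="radio-kpis")))
    print(get_edge_data(Request(consumer="third-party-app-17", category="radio-kpis")))
```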

Guy Daniels, TelecomTV (11:44):
Great, thanks very much, Dario. Diego, can we come across to you for your comments? We've heard about APIs there from Dario; what's your take?

Diego Lopez, Telefonica (11:53):
Just a comment. What we're trying to do, and we have made some experiments and demonstrators, is focus on the idea that, how to say it, we're looking for good old solutions that have been widely used, but changing some of the usage paradigms to address the new needs of this edge-cloud continuum that Dario was mentioning. Basically, what we are working on is building a set of abstractions for facilitating seamless connectivity among workloads, data sources and data consumers, independently of where they are located, and, importantly, not only independently of where they are located but also of whether they are moved, for whatever reason, from the edge to the cloud to achieve higher computing power, or from the cloud to the edge to deal with local data because of policies or requirements. The idea is that all of this happens transparently, and to support this in a way that is totally transparent to the workloads, to the applications that are running.

(13:16):
So migration and consumption are oriented more towards the semantics of what we want to do rather than an association with a particular location or whatever identifies it. What we are following is, as I said, good old principles like virtual networks, message brokers, et cetera, and we are extending them with capacities that are very much related to the identity of the workloads. So once a workload, a particular application, agent, program, function, whatever, is properly identified, the idea is that it gets the right to access that shared space, or a particular shared space, with a set of rights, and can use that virtual network, that message brokering system, whatever, in a transparent way. The advantage of this identity-based network system is that it allows for things we believe are essential properties to consider, like SLAs, proper accounting, availability, et cetera. And, well, we have run this in a couple of environments. It's not in production, I must say, but it's quite promising.
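
As a rough sketch of the identity-based approach Diego outlines, the toy broker below grants a workload access to a shared message space based on its verified identity rather than its location, so the workload keeps the same rights if it migrates between edge and cloud. The broker, identities and rights model are hypothetical, not any Telefónica system.

```python
# Illustrative sketch only: identity-based access to a shared message space.
from collections import defaultdict

class IdentityBroker:
    def __init__(self):
        self.rights = {}                 # workload identity -> set of allowed topics
        self.topics = defaultdict(list)  # topic -> messages

    def register(self, identity: str, topics: set[str]) -> None:
        """Grant a verified workload identity access to a set of shared topics."""
        self.rights[identity] = topics

    def publish(self, identity: str, topic: str, message: dict) -> bool:
        if topic not in self.rights.get(identity, set()):
            return False                 # not entitled, regardless of where it runs
        self.topics[topic].append({"from": identity, **message})
        return True

    def consume(self, identity: str, topic: str) -> list[dict]:
        if topic not in self.rights.get(identity, set()):
            return []
        return list(self.topics[topic])

broker = IdentityBroker()
broker.register("edge-analytics-fn", {"sensor-readings"})
broker.register("cloud-trainer", {"sensor-readings"})

# The edge function can migrate to the cloud (or back) and keep the same identity,
# so its access to the shared space is unchanged.
broker.publish("edge-analytics-fn", "sensor-readings", {"cell": "A1", "load": 0.7})
print(broker.consume("cloud-trainer", "sensor-readings"))
```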

Guy Daniels, TelecomTV (14:46):
Great, that's good to hear. Thanks very much, Diego. A lot of information there related to that first question, for our viewer and everyone else who's interested in maintaining cloud-native pipelines from edge to cloud. We have to move on, though, as we've got a lot more questions we'd like to get through. So straight on with the next question; this is a data question, so Mark, I'm going to come to you, if I can, for this one. The question asks: with storage volumes growing fast, how do you decide what data is worth keeping and what can be thrown away, so that data lakes stay both useful and affordable? Okay, what's your take on this?

Mark Gibson, ConnectiviTree (15:29):
Yeah, so I've worked in the OSS space for a long time, and I think data lakes have always slightly confused me as to exactly what their purpose is. Sometimes people just think they need a lake, put everything in it as a matter of course, and then decide what they're going to do with it. So I think my first question is really: why are you storing the data in the lake in the first place? What are you intending to use it for? If you're using it for learning, are you using all of that data or not, and could that data be obtained from somewhere else? If, for example, you are just collecting events from another system, you can probably go back and regenerate all those events from that system, because it'll put them into some Kafka queue which will be readable and should be replayable.

(16:21):
If that's a proper term. But I think that's really my first point. If you are putting it in there as an archival vault, then you are going to let it swell to whatever size you need, because that's what you've chosen to use it for. If you are trying to use it as a place to store unstructured data for learning, then that's fine, but I would argue that nowadays you want to be looking at something more like a data mesh approach: start to store structured data in more specific data products, and actually use that process of exporting into those data products to start to normalize your data as it comes. Then you'll see which data products you're using regularly and which ones are getting very little use, and the ones that are getting very little use are maybe not that important and could be retired.

(17:14):
But I think it's about moving away from the paradigm of having 20 apps, each probably holding a lot of the same information, all pushing their structured view into a data lake of overlapping data that then needs to be correlated back, and all that good fun of finding common keys between the different data sources so you can match two people's view of the same thing, because they called it a different thing and used a different identifier, which is tremendous fun and generates a lot of work. If, instead, you push it out in a structured way when you push it out, and have it ready for use, you'd start to find that your data exchange queues give you the value that you want from that data lake, but without having to have a whole separate lake. So, by thinking better about your architecture, rather than just having a great big hole in the metaphorical software grounds that you chuck everything into and then try to extract from later, think more about: what do I want to learn, why do I want to learn it, and how do I build data streams that allow me to capture that and normalize it as I go, so that when I'm exporting the data from my domain, I export it in a common manner that can be understood across domains and can then be given to my data scientists.

(18:49):
That way my data scientist doesn't have to phone 25 different heads of department to understand what their data structure means. You take the guesswork out of trying to put structure onto differently structured, chaotic data; you take that away as well. So, in my view, it's not just a question of how do I reduce the amount of data, but how do I also reduce the time to understanding what that data actually means, and thinking about it from that perspective. So I think that's what I would do.
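
A minimal sketch of the normalize-on-export idea Mark describes: each domain maps its own record format into one shared data-product schema as it publishes, so consumers and data scientists see a single structure instead of 20 overlapping views. The field names and source formats are invented for the example.

```python
# Illustrative sketch only: normalizing records into a shared "data product" schema
# as they leave each domain, rather than dumping raw, differently keyed data into a lake.
from datetime import datetime, timezone

PRODUCT_SCHEMA = ("resource_id", "metric", "value", "observed_at")

def from_inventory_a(record: dict) -> dict:
    # Hypothetical source A calls the resource "ne_id" and the timestamp "ts".
    return {
        "resource_id": record["ne_id"],
        "metric": record["kpi"],
        "value": float(record["val"]),
        "observed_at": record["ts"],
    }

def from_inventory_b(record: dict) -> dict:
    # Hypothetical source B uses a different identifier and epoch seconds.
    return {
        "resource_id": record["elementName"],
        "metric": record["measurement"],
        "value": float(record["reading"]),
        "observed_at": datetime.fromtimestamp(record["epoch"], tz=timezone.utc).isoformat(),
    }

def publish_to_product(rows: list[dict]) -> list[dict]:
    """Keep only rows that match the product schema; reject anything malformed."""
    return [row for row in rows if tuple(sorted(row)) == tuple(sorted(PRODUCT_SCHEMA))]

if __name__ == "__main__":
    rows = [
        from_inventory_a({"ne_id": "R1", "kpi": "cpu", "val": "71.2", "ts": "2025-11-05T10:00:00Z"}),
        from_inventory_b({"elementName": "R1", "measurement": "cpu", "reading": 70.9, "epoch": 1762336800}),
    ]
    print(publish_to_product(rows))
```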

Guy Daniels, TelecomTV (19:20):
Great. Thanks very much, Mark. So think about what you're going to use that data for as well. Thanks for that. Diego, did you have anything to add to this point?

Diego Lopez, Telefonica (19:30):
If you think about it, this concern about data, and how to deal with data that is growing infinitely, is a little bit like what young people in general have with this fear of missing out. It's something similar. I mean, the unstructured, chaotic data, as Mark was mentioning before, is something that has a value, or can have a value, but we have to be aware that its value is normally very limited in time. What is really valuable is the knowledge that we can derive from it. And the real challenge is how we structure the knowledge, how we communicate this knowledge and how we make this knowledge usable, not only for the original purpose for which it was created. I believe that is the real challenge in data processing: doing it, as Mark said, in the shortest time possible so you can deal with more and more data, and, on the other hand, making this knowledge reusable. During the previous session we were talking about semantic models, how to structure this, how to represent this, and I believe that the real challenge is how we represent knowledge, how we exchange knowledge, and how we can make knowledge usable in scenarios that are not necessarily the ones we are thinking of right now.

Guy Daniels, TelecomTV (21:08):
Great, thanks very much, Diego. Great comments to add to that question. And Dario, you'd like to come in on this one as well?

Dario Sabella, ETSI (21:17):
Yeah, just a small comment on that. I was thinking of another angle on this. I was, let's say, captured by the fact that, of course, we are considering the storage, and then it's also about considering data retention policies. So, of course, we want to make the data useful and affordable, usable, but the organization also needs to decide when to keep data or when, let's say, data should be retired. This has to do with data relevance, of course, but also regulatory needs, for example, and business value. Sometimes this kind of policy has to consider regulatory needs like, for example, GDPR, where you have to keep the data just for the time necessary for performing the service that the data is used for. And, on the other hand, you have other things, I don't know, audits, financial audits, where you have to keep the data, and this sometimes overrides the possible cost concerns of keeping data. So yeah, the fear of storage volumes growing fast sometimes has to do with different constraints and different aspects, and the retention policy sometimes also has to deal with regulatory needs. Just to add this kind of angle. Thank you.
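
As a toy illustration of the retention-policy angle Dario raises, the sketch below decides whether to keep, delete or review data based on a regulatory minimum, a maximum retention period and any legal hold that overrides cost concerns. The categories and periods are invented and are not legal guidance.

```python
# Illustrative sketch only: a retention decision that weighs service need,
# regulatory minimums and legal holds. All rules here are hypothetical.
from datetime import date, timedelta

RETENTION_RULES = {
    # category: (minimum days required, maximum days allowed)
    "billing-records": (365 * 7, 365 * 10),  # e.g. kept for financial audits
    "session-traces": (0, 30),               # kept only as long as the service needs
}

def retention_decision(category: str, created: date, legal_hold: bool, today: date) -> str:
    minimum, maximum = RETENTION_RULES.get(category, (0, 0))
    age_days = (today - created).days
    if legal_hold:
        return "keep (legal hold overrides cost concerns)"
    if age_days > maximum:
        return "delete (maximum retention exceeded, e.g. purpose under GDPR has expired)"
    if age_days < minimum:
        return "keep (regulatory minimum not yet reached)"
    return "review (past the required minimum; keep only if it still has business value)"

if __name__ == "__main__":
    today = date(2025, 11, 20)
    print(retention_decision("session-traces", today - timedelta(days=45), False, today))
    print(retention_decision("billing-records", today - timedelta(days=400), True, today))
```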

Guy Daniels, TelecomTV (22:54):
No, thank you, Dario. It's a good additional angle for us to add to that question. Thanks very much. Well, let's move on, because we've got a question here, a slightly sensitive question: how compelling are the cost savings when telecom workloads move back from the public cloud to the private cloud? Obviously we're not going to talk about specific cases here, but in a more general context, Diego, have you got any thoughts generally about where the cost savings factor comes into this decision?

Diego Lopez, Telefonica (23:30):
The cost saving factor, when it comes to these trends around externalizing infrastructure, is something that depends very much on the costs, I mean, on the price that the infrastructure providers put on their assets. It's quite typical, and my impression is that when all this movement around externalizing or cloudifying network functions, et cetera, started, the conditions were extremely competitive. And I guess it has been the same in many other sectors, probably in banking or in health: it was much easier, or much cheaper, to move to the cloud environment because at that moment your providers were very much interested in gaining market share, et cetera. Right now, with the enormous pressure that, for example, the generalization of AI and the use of this infrastructure for AI are putting on prices, it probably makes sense in cost terms to bring back some of the functionalities to the private environment. I think the important challenge here, for us and for our technology providers, is to make those functionalities that we are choosing whether to put in the public or private cloud as mobile, or movable, as possible, so we can decide to run a particular functionality totally or partially in private and public clouds depending on costs or whatever other considerations. I think that's the real challenge, and that's what will really shape cost savings in the future.

Guy Daniels, TelecomTV (25:28):
Great, thanks very much, Diego. I'm going to go across to Joan as well. Did you have any more general comments to add on this one?

Joan Triay, Docomo Eurolabs (25:38):
Yeah, I would say that probably the major premise is not about the cost itself. Actually, in terms of cost, because of the change of infrastructure that is happening nowadays in the telecom domain, using COTS-based hardware and other kinds of hardware that are provided at larger scale, we can benefit a lot from economies of scale, and the prices of that infrastructure are going down compared to what they were in the past. So, even regardless of that aspect, which might have more implication on the cost itself, as I explained in the previously recorded panel, other factors actually play a more critical role, and those factors include resiliency, data and also service requirements. There are definitely certain kinds of workloads in the telecom domain that are not actually suitable to be run on the public cloud.

(26:52):
So there is still a need to run them on the private cloud or on-premises; we cannot actually operate them on the public cloud. And the importance here is to ensure that we provide the best service to our customers in terms of latency, bandwidth and so on. There are actually major gains if we can run those workloads on our private clouds or on-premises. Besides that, I would say it's also important to reflect on the fact that we telecom operators have also been used to being intensive in terms of capital expenditure. We know how to manage it from our past experience of deploying networks with legacy equipment, and it is not something new that we cannot handle, deploying the required infrastructure resources even if we run them ourselves. So my basic premise, or conclusion, would be that, yeah, cost savings are important, but the suitability of actually running telco workloads on the public cloud is probably not yet there. As I explained, we need to make sure that we fulfill the requirements that our customers expect from us, and availability and good service levels are much more important.

Guy Daniels, TelecomTV (28:29):
Great. Joan, thank you very much for that. Yeah, I get the sense that other factors are probably more compelling at this stage than just cost. Thank you very much indeed for those responses. And before we take our next question, we are going to check in on our audience poll for the Next-Gen Digital Infra Summit. The question we are asking you this week is: what are the most important investment areas to create a future-proof telco infrastructure? And there we go, you can see the real-time votes right here beside me. Support for AI factories and AI edge compute continues to lead the pack from yesterday, closely followed by deploying service-based architectures. And if you have yet to vote, then we're running out of time, so get a move on; we're keeping the polls open until the end of today, and then my colleague Ray Le Maistre, editorial director at TelecomTV, will analyze the final figures on the website. Right, looking at the clock, we probably have time for a couple more questions, so let's get straight to the next one that we've received from you. The question asks: as data silos and lakes keep expanding, is there any best practice to manage who owns which data and who is or is not allowed to use it? It's data, so it's Mark. Mark, I've got to come to you first, if I can. What are your thoughts?

Mark Gibson, ConnectiviTree (30:03):
I mean, I think this is a question I've been thinking a lot about in the last couple of years. So the traditional view of OSS software is that you have a number of bigger applications, each of those masters a set of data and needs to share data with the other applications, and so you build a network of API integrations between those applications to share their data, using standard APIs where they exist and custom ones where they don't. And that's kind of the way it's been. I'm increasingly thinking that that's sort of the wrong way around, because everybody that I've talked to, when I've been a vendor and now that I'm a telco operator as well, implements Kafka queues to share data between the systems. That's just kind of the way it is. And everybody comes up against a vendor's external surface area and then has to use the APIs they provide to pull data out and share it with other parts of their business.

(31:17):
And if you think about it, everyone's already building those shared data structures, so why not lean further into that and say: well, rather than building all of this kind of n-squared mesh of APIs between all of the OSS apps, why not push more of the data into that shared data set? Because that then becomes, going back to the data lake question, something like a shared common data set that you generate by exchanging the data between the applications, and you move more of the ownership of the data out of the multiple inventories that you have, the multiple EMSs that you have, into a more shared data structure that can be easily read by those very same inventories. Rather than having to pull everything in, synchronize it, keep it synchronized, constantly look for new events to see whether something has changed, and run those glorious day-long full discovery and sync processes to make sure that everything you thought you had is what you currently have...

(32:32):
...use those common data structures as your baseline. They become your baseline, and then the apps become where you do the more specialist stuff. So if you are designing a circuit, you obviously go into your network inventory and design your circuit there. The ownership of the specific operations within a domain sits within those applications, but the general data that you are running your business on becomes a cross-domain-owned general data set that you can then use in a variety of ways. You could use it as your baseline for feeding machine learning, because you've got a common, homogeneous, well-formed data set, or set of data sets. It wouldn't be one single data set: you might have a data product for discovered network information, you might have a data product for assurance and event collection, and you can imagine how many more there might be.

(33:27):
But by building in that way, your data becomes common, shared and owned by everybody who contributes to it. The specific data is then owned by those data domains, and by normalizing it sufficiently you can share that data easily with data scientists, or with the people who want to look for patterns in the data but don't want to have to understand what each of those domains is looking at. So for me it's about thinking about who needs to own the detail of the data, but then who actually needs to understand the outcome, and how you make that as easy as possible to share. The other thing with data products is that you can then put ownership constraints on them. You can have data contracts for who can access the data, who can't access the data and what the structure looks like. So even though you have these big shared queues, they are actually easy enough to police, and then they become a bit like your data platform that you can build other things on top of.

(34:35):
So if I want to innovate, an app my engineering team builds that needs to understand something about the state of a port, for example, which is quite a common thing I've run across, could just tap into those queues. And most modern data queue processes allow you to write SQL queries onto those large-scale, in some cases time-series, data sets to run live queries. And then you actually build something which drives innovation within your organization. And I think that's a thing that, now I've seen it, I can't unsee it. To me it seems like the way you should be thinking about how you manage your data within your business.
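
A minimal sketch of the data-contract idea Mark mentions: the contract names the owning domain, the allowed consumers and the record schema, and the read path enforces it before any consumer runs query-style filters over the shared queue. The contract format and queue are invented; this is not a real Kafka or ksqlDB API.

```python
# Illustrative sketch only: a minimal "data contract" on a shared queue, covering
# who may read it and what shape the records take. All names are hypothetical.
CONTRACT = {
    "topic": "discovered-network-info",
    "owner": "network-inventory-domain",
    "allowed_consumers": {"assurance-domain", "data-science"},
    "schema": {"resource_id": str, "state": str, "observed_at": str},
}

QUEUE = [
    {"resource_id": "R1", "state": "up", "observed_at": "2025-11-05T10:00:00Z"},
    {"resource_id": "R2", "state": "down", "observed_at": "2025-11-05T10:00:05Z"},
]

def read_topic(consumer: str) -> list[dict]:
    """Enforce the contract: check consumer entitlement, then validate each record's shape."""
    if consumer not in CONTRACT["allowed_consumers"]:
        raise PermissionError(f"{consumer} is not an allowed consumer of {CONTRACT['topic']}")
    schema = CONTRACT["schema"]
    return [
        record for record in QUEUE
        if set(record) == set(schema) and all(isinstance(record[k], t) for k, t in schema.items())
    ]

if __name__ == "__main__":
    # A consumer entitled by the contract can run ad-hoc, query-like filters on the stream.
    down_events = [r for r in read_topic("assurance-domain") if r["state"] == "down"]
    print(down_events)
```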

Guy Daniels, TelecomTV (35:16):
Fascinating. Thanks very much, Mark. And Joan, did you have an extra comment to add on this one?

Joan Triay, Docomo Eurolabs (35:24):
Yeah, I fully agree with what Mark said. I think it's also very interesting to make sure that everyone contributes to generating the data, and that it can then be structured in a way that can be consumed as needed. But a very important premise, which I would say we need to emphasize even more in the future, and there are good standardization initiatives around it, is standardizing the data that we manage, in particular in the telco domain. Of course, maybe not all data will be able to be standardized, but for the data that is critical for control of the network and management of the network, it should be possible to standardize it, to define its structures and their meanings, so that we can make sure the data is understandable and usable. If we cannot understand that data, it is impossible to understand who is going to use it and who is going to be responsible for managing it. So maybe one aspect that we should emphasize a lot more in the future, and as I said there are good standards initiatives in ETSI and so on, is the importance of standardizing structured data.

Guy Daniels, TelecomTV (36:54):
Great. Thanks very much, Joan, for those additional comments. We're getting a lot more questions these days around data, as we do around AI, which does lead me to the next question, because it's an AI-related question. This might be our last question, unfortunately, but let's see. This is quite a broad question, so maybe you all want to comment, and maybe we do it as a bit of a lightning round and just have some short answers for this one. The question asks: what role does AI play in telco private cloud? What one key area should I be focusing on? This is from a specific viewer, of course, but if we had to give some advice, or things to watch out for, or areas to consider, maybe we can give some quick advice here. Mark, am I able to put you on the spot and ask you to pick one?

Mark Gibson, ConnectiviTree (37:47):
You are, but I'm going to pick two. I think the first thing I would do is go and look at the structured data that Joan was talking about, make sure that the data that's important to you is getting standardized, and get involved, because having worked in standards in the past, I know that the best way to get what you want is to tell people and do it. So I would look at that, and I would look at aligning that to, and I'll be a broken record on this, the data mesh and data product structure, so that you can not only generate structured data that you can use to feed your machine learning, but then apply the machine learning back to those inline data structures as they're generated and derive deeper meanings. I think there's a big untapped area of research into how you can use these data products to feed other data products and apply their learning back in real time. But that's where I'd go and look.

Guy Daniels, TelecomTV (38:42):
Great. Thanks very much, Mark. Good advice there. And Diego, are we able to come across to you and get an area of interest from you?

Diego Lopez, Telefonica (38:50):
Well, basically I'm probably going to be very much aligned with Mark. My view of what telco infrastructures are for is that they exist to facilitate exchange, in this case facilitating the exchange of data. And what I believe is that telco infrastructures in general, and in particular telco private clouds, can be excellent enablers for using AI: training AI, validating AI, applying AI, while complying with whatever requirements there are in terms of data protection, privacy, et cetera. So I believe that we telcos can be an excellent substrate for applying AI the way it should be applied.

Guy Daniels, TelecomTV (39:40):
Great. That's a nice way of putting it. Thank you very much, Diego. Dario, are we able to get an idea from you as well?

Dario Sabella, ETSI (39:48):
Yeah, it's funny, because when we talk about AI everybody has in mind generative AI. Actually, I always say that AI is a 'new' technology that is 50 years old. When I was young I was playing with neural networks in MATLAB; you can see from my years that I'm not young anymore. So you can agree that, of course, a lot of these technologies have been there for a long time. But the question is about the future. AI is already in the present, because we're talking about classification, prediction, machine learning; AI has a lot to do with network automation and optimization. We have seen a lot of minimization of drive tests, MDT, in the past and so on, and now with AI the network can be better configured, managed and optimized. And of course this is already part of the past and the present.

(40:56):
Talking about the future, it's funny because everybody has in mind, by default, generative AI. Yes, in that case telco private cloud can take a role, I believe, but always, talking about telecommunications operators, what is the asset? Because otherwise we may, let's say, fall into the temptation of considering again, as in the past, a sort of competition against AI factories. Telecom operators should play a different role, optimizing their role and providing benefits, because the benefit is indeed the network: the data coming from the network, the knowledge of the customer and also maybe the proximity. So the edge can be an enabler, and edge AI can also play a quite important role, in my opinion, in the future. In this perspective I agree with the chart from the question you asked: edge was one of the top answers in the histogram. So yeah,

Guy Daniels, TelecomTV (42:03):
Dario, thanks, very rich indeed. And Joan, can we come to you as well? I don't want to leave you out of this discussion. Is there an extra area of focus, or are you agreeing with what you've already heard?

Joan Triay, Docomo Eurolabs (42:14):
Well, one of the areas where, in our case, AI is actually playing, or can play, a key role: it is not about whether we are going to have AI factories or not in our deployments, but more about the use of AI for the actual benefit of improving operations in our domain. And when it comes to considering where it can be applied, our idea is basically not to disrupt what is already well defined, procedural and rule-based, where it is important that any misjudgment in the processing of the information and data we have has a low impact; in those areas the use of AI is probably not so critical for us now. We know that networks are generating vast amounts of data, and they are also getting more complex and disaggregated. There are many different factors that are not easily controllable by defining rules, and it's in those areas, where there is a very big amount of data, where it is difficult to define rules, and where we can still accept the possibility of errors being created by AI, that its use is envisioned to play a key role for operators in the future.

Guy Daniels, TelecomTV (43:55):
Well, thanks very much everyone. Unfortunately that's all the time we have; we are out of time for the program. Thank you so much for joining us for this live show, and that's a wrap for this year's Next-Gen Digital Infra Summit. Thank you to all of you who submitted questions. I did see we had a couple of late questions come in that unfortunately we weren't able to address, but I'd also like to thank everyone who took part in the poll and who watched our live streams. You can watch all of the programs from this year's summit on demand from our website, featuring this incredible group of industry experts. So thank you to all of our speakers and sponsors over the course of these past 12 months for supporting the DSP Leaders Summit series. And for those viewers watching us live, we are going to broadcast today's panel discussion immediately after this program, so do stay with us. Next month we return to our in-person events with our annual Great Telco Debate and the brand new Digital Sovereignty Forum. Join us in the room if you can, or follow us online if you can't. For now, though, thank you for watching and goodbye.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Panel Discussion

This live Q&A show was broadcast at the end of day two of the Next-Gen Digital Infra summit. TelecomTV’s Guy Daniels was joined by industry guest panellists for this question and answer session. Among the questions raised by our audience were:

  • How do you maintain cloud-native pipelines across dispersed MEC nodes and central datacentre clusters?
  • With storage volumes growing fast, how do you decide what data is worth keeping and what can be thrown away?
  • How compelling are the cost savings when telecom workloads move back from public to private cloud?
  • Is there any best practice to manage who owns which data and who is allowed to use it?
  • What role does AI play in telco private cloud? What ONE key area should I be focusing on?

First Broadcast Live: November 2025

Participants

Dario Sabella

Chairman, ETSI MEC, VP, xFlow Research

Diego R. Lopez

Senior Technology Expert, Telefónica and ETSI Fellow

Dr. Joan Triay

Manager and Network Architect, DOCOMO Communications Lab. Europe (DOCOMO Euro-Labs), Rapporteur, ETSI ISG NFV

Mark Gibson

VP, Software, ConnectiviTree