Guy Daniels, TelecomTV (00:24):
Hello, you are watching the NextGen Digital Infra Summit, part of our year-round DSP Leaders coverage. And it's time now for our live Q&A show. I'm Guy Daniels, and this is the first of two Q&A shows; we have another one at the same time tomorrow. It's your chance to ask infrastructure-related questions to our guests, especially if your questions are connected to AI and inferencing, because as part of today's summit, we featured a panel discussion that looked at how telcos can support AI factories and distributed inference. Very topical subject. Now, if you missed the panel, don't worry, because we will re-broadcast it straight after this live Q&A program, or you can watch it anytime on demand. If you haven't yet sent in a question, then please do so now, using the Q&A form that you'll find on the website. Well, I'm delighted to say that joining me live on the program today are Andy Linham, principal strategy manager for Vodafone Group; Beth Cohen, product strategy consultant for Verizon; and Kerem Arsal, senior principal analyst at Omdia.
(01:54):
Hello everyone. It's good to see you all. Thanks for coming back and joining us on the live Q&A show. So let's get straight to our first audience question that we've received, and I'll read it out to you. The question asks: is the emerging importance of digital and data sovereignty proving to be the leading use case for telco edge AI, because autonomous vehicles feels like a dead end and other enterprise applications are slow to emerge? Well, that's what our viewer thinks. So let me put that to our guests. Andy, I'm going to come across to you first. Do you think it's a leading use case?
Andy Linham, Vodafone Group (02:42):
Now I'm going to be really picky. I don't think it's a use case at all. I think it's a model for deployment that affects lots of different use cases. That sounds like I'm being a real kind of pernickety type of person, but I think it's a very important distinction to make. So we've got a lot of interest in sovereignty. We've got far more customers now than we ever have had before who are interested in where their data's located, where the teams are that manage it, where the technology comes from. All these sorts of things are now much higher up CIOs' agendas than they used to be. I think a lot of it depends upon geographically where you sit. There are different priorities in different parts of the world; even within Europe, if you look at different countries such as Germany versus France versus the UK, we have different perspectives.
(03:30):
I think sovereignty will drive a lot of business towards telcos because we do have that sovereign capability. We do have lots of assets in country. We have lots of people in country that can monitor them and manage them. I don't think it's the use case. I think the use cases will emerge. I think there will be things like autonomous vehicles, and you will have lots of computer vision and video analysis for manufacturing and distribution centers. I would agree with the question that they are slow to develop, slow to emerge, but I don't think we're at the point yet where we can say that sovereignty is the use case that's going to drive edge AI. And if I take a country like the UK, for example, the UK is realistically a very, very small country. If I look at low latency, I could probably put one node in the middle of the UK, in Birmingham, and have relatively good performance through to it from just about every other corner of the country. If I look at the US, or if I look at Germany, big places, that's where I need regional edge-based services for a lot of these different types of applications. So to summarize the answer, I think sovereignty is really important, and I agree the use cases aren't there yet, but I think they will come in the next year or two.
Guy Daniels, TelecomTV (04:42):
Great, thanks very much, Andy. We do overuse the term use case a lot in this industry, I think, and I do like the idea of it being a model for deployment. That's something we're going to pick up on next month when we look at digital sovereignty in a lot more detail. But for now, Kerem, I'm going to come across to you. Do you agree with the sentiment of the question and what Andy's saying, or do you have some different views?
Kerem Arsal, Omdia (05:03):
Well, first of all, I do agree with Andy's point that sovereignty is indeed a model rather than a use case. I mean, it is an opportunity at the moment simply because in some markets hyperscalers are not local entities, and because of that it does create some window of opportunity for telcos to create their own AI data centers and to serve enterprises. But this time I'm going to pick on a different word, which is going to be edge, because again I agree with Andy that in some markets you don't necessarily have to think of edge as these hundreds of thousands of distributed nodes that do computing, especially in the world of AI. And within the opportunity of sovereignty, probably just being able to build your own AI data centers should be a big accomplishment. And we don't see many announcements coming from some of the operators, at least in Europe, for instance.
(06:02):
Some of them are very actively building, but some big ones that we would expect to have heard from have not yet made such announcements. So that's one thing. And secondly, again, I agree with Andy that there is a continuum of sovereignty and what it means in practice. So that is something that telecom operators will need to learn: which data needs to be held where to satisfy compliance, how verticals differ, how countries differ, which applications versus which data. Those sorts of formulas are things that telecom operators will slowly need to learn. And one last point I want to make about sovereignty is that in the beginning I said that hyperscalers are not local entities in some markets, like most of the European Union or the UK or some Asian markets. However, hyperscalers do learn very quickly and they're extremely adaptable. So this window of opportunity, if it exists, is not going to be permanently open, because the hyperscalers, the big public cloud players, are learning fast, they're striking big deals, and they're also building data centers of their own in these areas as well.
Guy Daniels, TelecomTV (07:15):
Yeah, absolutely. Kerem, thanks very much for those comments. That's great. And Beth, let's come across to you and get your thoughts on this first viewer question.
Beth Cohen, Verizon (07:25):
I'm going to pick up on what Andy and Kerem said. So it isn't a use case, but it is an infrastructure support, or rather it's a support use case. And I think the telecoms are in a pretty good position to support this. I disagree with Kerem that it's a new understanding; I think the telecoms have been supporting data sovereignty for a long time for just general use. So it's not specific to AI. And obviously telecoms are always going to have more points of presence than even the large hyperscalers, just because the natures of the businesses and the models are different. However, data sovereignty is not just in-country. I think a lot of it relates to how the AI models are using the data and protecting the data from being used. And I think a lot of companies and individuals are going to be moving away from letting the AI engines take, and I'll use that term a little pointedly, take their data and ingest it into the engines. So I think data sovereignty is a rapidly changing concept. But I agree that the telcos are certainly in a good position to support data sovereignty today, though it's not particularly specific to AI.
Guy Daniels, TelecomTV (09:15):
Okay. Thanks very much for those points, Beth. That's terrific. We covered a lot of ground there on that opening question. And as I mentioned a bit earlier, we do have an event in December where we're looking at data sovereignty in a lot more detail, so we'll get some more nuanced views and ideas coming out of that, I hope, and you'll be able to catch up later on TelecomTV with the highlights. So thanks very much everyone. Let's move on to our next viewer question. This question asks: as we build up compute facilities and increase power demands, which technology or operational innovations look most promising for keeping energy and cooling costs under control at these high-density telco sites? Well, let's go straight back to Beth for this one. You do spend quite a bit of time, Beth, around at conferences looking at emerging technologies and emerging companies and seeing what the latest innovations are. What insights have you got on this?
Beth Cohen, Verizon (10:17):
So I recently attended the OpenInfra Summit in Paris a couple of weeks ago, and that was a very hot topic. There are a number of big telecoms around the globe that are investing in AI technology to address exactly this energy consumption efficiency. And personally, I think it's a great use for AI, because AI doesn't have to be perfect; you don't have to be a hundred percent right to still reduce your energy consumption. So there were a lot of use cases, and there are a number of companies already invested in using this. It's a way of reducing your consumption by using traffic patterns and predictive analysis to determine when you can turn off nodes or put them in low-power mode to match actual live use and reduce energy consumption. So I think it's actually a great use case for the telcos, and I suspect most of them will be using it over the next 12 to 18 months.
Guy Daniels, TelecomTV (11:45):
Great. Thanks very much, Beth. Yeah, we look forward to seeing more evidence of that from telcos, and seeing what the actual benefits and results are. That's going to be good to watch out for. Andy, let's come across to you. What are your thoughts on this question?
Andy Linham, Vodafone Group (11:59):
There's one thing that I think is really important, actually, and that's where you actually put the AI data centers and the compute factories. If you put them somewhere like Scandinavia, up in Norway or Sweden, where you've got access to lots of naturally cool air and lots of hugely renewable energy sources, you can generate return on investment much, much faster, because you have access to all this abundant power through hydroelectric. You've also got a much lower ambient air temperature, so the amount of cooling you need is reduced. So as much as we can use technology to help us reduce the costs and the implications of the power that we need, I think we can also be really, really smart about where we put these data centers and use what nature's given us to make these things as efficient as they can be with the current generation of technology.
Guy Daniels, TelecomTV (12:46):
Yeah, absolutely. Thanks very much, Andy. And we're a few years away from putting our data centers in space, as I've been reading about this week, but I'm sure in a few decades that might happen. Kerem, let's come across to you and your thoughts.
Kerem Arsal, Omdia (13:00):
Just one thing that I'm wondering is whether these requirements are going to change the way that certain nations or certain industries interpret sovereignty as well. Because if data centers are indeed moving more towards tier-two markets, or maybe to certain countries where there are better natural cooling facilities or cheaper energy costs, when those sorts of things start happening, then I wonder whether there's going to be an effort to find workarounds whereby a data center in a market that is not currently within the definition of sovereignty can be included in it, if you know what I mean. So perhaps there's going to be one or two markets that can satisfy the sovereignty requirements of the Gulf region, for instance, or maybe some countries that serve Europe better than multiple countries in Europe satisfying their own conditions. I'm just thinking whether that could be a possible evolutionary path.
Guy Daniels, TelecomTV (14:07):
Yes, a good point. And if you don't mind me saying, I think it's a really good question, because I've just been putting together talking points for the December event and that's one of them. So I'm really hoping we're going to get some views around that, as I think we're seeing some interesting trends. Terrific. Thanks very much everyone. Let's therefore move on to the next question we've had in today. So I'll read this one out; this one's interesting. Given current GPU concentration, how are operators mitigating single-vendor risk and protecting capex through multiple upgrade cycles? Because how long does a GPU last? Three years or less, which is way shorter than CPUs. Okay, so Andy, can we come across to you to get some thoughts on this one, and some analysis of GPUs and what operators are doing?
Andy Linham, Vodafone Group (15:04):
Yeah, of course. So yeah, we've looked at a similar sort of question, as you'd expect us to. The horror stories that you sometimes hear about GPU life cycles are almost always linked to training workloads. That's where you've got a vast amount of data and you are processing on those GPUs 24 hours a day, multiple days on the trot, so you are just constantly hammering the GPUs and you will, almost literally, wear them out to a certain degree. But if you're looking at something like an inferencing workload, it's much more variable in volume; where the GPU is having some idle cycles in between, life cycles are going to be extended. The reality is we've not had this type of data center GPU available to us for three, four, five years yet to be able to actually realistically say this is how long it's going to last.
(15:54):
We don't have the kind of scale that we could use to say that's a statistically viable set of results, and therefore I can say yes, it will be three, four, five years, whatever it happens to be. I take it the question around single vendor is talking about NVIDIA, and there's absolutely no doubt they have got the most widely used GPU in the market. The joys of working for large companies are that you have strong relationships with lots of technology partners. So from our perspective, we have a really strong working relationship with NVIDIA, as I'm sure every single other telco does, because there's a big movement from NVIDIA towards AI-RAN, as an example. So there's a lot of conversations going on around whether you use AI to optimize frequency and the allocation of spectrum, that sort of stuff, within the RAN. So as much as there is a risk around single vendors, it is probably no different to the risk we had when every single networking router was either Cisco or Nokia.
(16:52):
We had a time when probably 95% of our entire enterprise business unit was running Cisco routers. If we'd had a similar problem with access to stock, we would've been in a similar position to the one we're in with NVIDIA today. So we're actually pretty good at managing these types of challenges. We have lots of processes in place. We have really highly skilled procurement teams who understand how to balance supply and demand as effectively as they possibly can. So it's not something I'm worried about at the moment. In terms of life cycles and capex, I think Beth and Kerem can give you a really strong answer on that, but from my perspective, in terms of how we approach it, it's around partnering. There are many ways that we can partner with different providers. It could be the neoclouds, it could be the hyperscalers, it could be NVIDIA themselves, to look at how we can use their GPUs, how we can optimize their GPUs, and how we can connect their GPUs. And that's probably the best area for us to focus on as a telco.
Guy Daniels, TelecomTV (17:50):
Fantastic. Thanks very much for those insights, Andy. In a way, that question had a couple of different angles to it, so it's interesting to tease them apart like that. Kerem, let's come across to you next. The question asked about this single-vendor reliance on NVIDIA and also about the longevity and usage of GPUs. What's your take?
Kerem Arsal, Omdia (18:11):
So first of all, I also think that maybe we're not actually there yet; at least for most telcos, this is still the beginning of the road. So the investment timing is crucial for them. And I think there's a realization that at the moment we are going to go through a phase where the economics will change. The capabilities of other XPUs, be they TPUs or CPUs or other accelerators, are going to improve, and models themselves are also improving too: they're getting better, they're getting smaller, they're getting more purpose-specific at times. So basically, probably within the next two years we are going to be looking at a richer environment when it comes to the hardware infrastructure, and obviously we already have a very fertile and vast variety in the applications layer. So while today looks like a single-vendor environment, I think most of the operators are actually trying to work around this by not over-investing at the moment and being careful about it.
(19:18):
So maybe that's also one of the reasons why, as I mentioned earlier, some of the operators have not committed, or do not necessarily want to commit, to large AI data centers and want to partner. To me, this question would probably apply more to a company like what's called a neocloud these days, where they've already invested quite heavily in GPUs and they're probably very interested in how the economics and the demand and supply are going to pan out. So for telcos that really want to work and invest in GPU as a service, in my opinion they would probably benefit from a lot of partnerships, and Beth, maybe you'll have something to say on that regarding Verizon's partnership. But also, if they want to do their own GPU as a service, I think it would be very important for them to find some anchor tenants, maybe a big AI SaaS company or a big AI model developer that can commit to a volume of demand before they invest.
Guy Daniels, TelecomTV (20:27):
Great, thanks very much, Kerem. And a perfect link there to Beth, so we'll come straight to you. Beth, we'll have your views on that, and as Kerem was saying, the GPU-as-a-service angle is one of them.
Beth Cohen, Verizon (20:39):
Yeah, I want to kind of turn the question a little bit on its head. GPU as a service, I think, is something that the telcos are pretty well positioned to help with. And of course Verizon's AI Connect GPU as a service has been announced, so we're partnering with a third party, Vultr, to provide those GPUs; so we've hedged our investment, obviously. But I think the question is not necessarily just about the telco investment in AI and AI-RAN, which I think is still pretty close to being an emerging technology that people are still working on; I'd like to focus on our customers' need for GPU as a service. GPUs are expensive, and therefore companies want to optimize the use of the GPUs. Getting back to what Andy said about hammering the GPUs, even if you're only doing inference, if you're using a cloud vendor or a telco for those GPUs, there's a lot of temptation to optimize the usage of those GPUs, since they're so expensive. So I'd say that even if you're running an inference engine, you're probably going to want to be using it in a cloud way.
(22:33):
And I think the telcos are in a good position to support this, at least in the short term, because these GPUs can be put not just in the core data centers that the cloud providers are in, but distributed across the network to minimize the workload and the latency based on where the data is coming from. But the providers, or whoever's buying the GPUs and investing in them, are always going to want to optimize their use, so they're always going to be run as hard as possible. I cannot speak to the longevity of these GPUs; I actually have no knowledge about that, and I think because they're relatively new, we don't really know.
Guy Daniels, TelecomTV (23:31):
No, absolutely. And thanks very much for those points, Beth. That's great. And Kerem, did you want to come back in on this one?
Kerem Arsal, Omdia (23:38):
Yeah, just a little bit to add on how the approach to GPUs within the context of the RAN has changed quite a lot within the last two years. And I think NVIDIA's approach has gone this way too, because when they first began the journey, the ARC-1 compute was the main flagship server there, and that compute's promise was basically: I can do a lot of GPU work for you, but at the same time I can work as GPU as a service for third parties, so you can monetize that. But I think, because of these high prices and because of the inability to deploy that sort of compute platform in certain RAN architectures, they have turned their attention more towards the RAN-specific use cases, or what the AI-RAN Alliance would call AI-for-RAN. So now the developments coming from the NVIDIA side that apply to the RAN are becoming increasingly more realistic, with ARC Compact and the more recent ARC Pro; they're actually becoming more AI-for-RAN hardware rather than GPU as a service. So I just wanted to make sure that the audience is aware that GPU as a service on RAN equipment can happen, but we are still a long way away from that: not necessarily technically, but commercially for sure.
Guy Daniels, TelecomTV (25:17):
Yeah, absolutely. Thanks very much for pointing that out. That's good to add. Well, before we move on to another question, it is time now to check in on our audience poll for the NextGen Digital Infra Summit. And the question we are asking this week is: what are the most important investment areas to create a future-proof telco infrastructure? And there you go, you can see the real-time votes appearing to my right there. Well, very reassuring to see support for AI factories, and AI edge compute getting a lot of attention. That's a relief. If you have yet to vote, then please do so; these figures may well change, and we'll take a final look at the voting during tomorrow's live Q&A show. Right, we still have time for a few more questions. Ah, here is the next question: there's been a lot of interest recently from telcos in timing-optimized LLMs. What does this mean in terms of infrastructure, and what changes to today's timing protocols or standards may be required? This is a little tricky one, and we certainly saw this topic emerge at our Düsseldorf event last month. Very briefly, Beth, can I come across to you first on this one and maybe try to get a good understanding of the issues involved here with LLMs?
Beth Cohen, Verizon (26:47):
Yeah, I'm going to interpret this as telco use of AI to solve latency issues and to optimize networks. And I'm not necessarily sure that LLMs are the right answer. Certainly the telcos have been relying more on machine learning for optimizing the performance of the traffic to reduce latency, because obviously customers are always looking to reduce latency. And of course many of the use cases rely on reduced latency as well: particularly drones and, to beat a dead horse, the self-driving cars and other types of applications that are latency sensitive. So that's my interpretation, and I don't think LLMs are particularly the right technology to apply to that particular use case.
Guy Daniels, TelecomTV (27:55):
No. Well, thanks very much for those comments and observations, Beth. It's an area where I think we're going to have to wait for more industry development in the months ahead and see how it all pans out. But we will be watching this area with some interest. Well, here's another question that we received today. And Andy, it's one specifically addressed to you. The question asks: you recently outlined a core, edge, access, cloud, endpoint vision for connectivity. How would you reconcile multi-vendor ecosystems with the traditional stability, performance and security needs of telcos?
Andy Linham, Vodafone Group (28:37):
So this is obviously someone that has had the absolute pleasure of watching me give a presentation previously. So we talk about the future of connectivity quite a lot within Vodafone, and we kind of break it out into those five key areas: what's going to happen around core, what's going to happen around edge, and around access, cloud and endpoint. So to come to the question: if you think back a decade ago, in the fixed network we had effectively three vendors to worry about, Cisco, Juniper and Nokia, and on the RAN side we had Ericsson and Huawei. It's a really, really small ecosystem of technology partners that we had to work with. And because of that, you build incredibly strong relationships, with teams of people on either side working together to perform development tasks, to perform deployment, to produce architectural blueprints, that type of stuff. But ever since SD-WAN entered the enterprise networking space, we've been in a much more diverse supplier environment.
(29:36):
So the three Vs of SD-WAN, Viptela, Versa and VeloCloud, entered the market with technologies that were very different to what Cisco and Juniper in particular had. So yes, we had to diversify at that point, but that was almost 10 years ago now. So we're pretty good at managing lots of different partners and lots of different suppliers, and working across them to get the best for our customers. In terms of security, I'm assuming this is linked to things like patches and maintenance upgrades and making sure you've got homogeneous policies applied across all of your different types of devices, because they're all different. And yes, there is a small element of risk there, because you're managing multiple different types of configuration. But one of the things we've invested quite a lot of time and effort in within Vodafone is building our own orchestration layer. So we have a specific set of tools that we've built that run across every single SD-WAN vendor that we have on the portfolio.
(30:35):
So it works with Fortinet, with Cisco, with Meraki, with VeloCloud, and it enables us to have a single kind of configuration template that we can push out across any different type of SD-WAN controller. So from that perspective, we tend to de-risk things by using an orchestration layer to abstract that level of differentiation between the different vendors, so that the engineers don't need to see it. In a lot of ways, we shouldn't need to know what the underlying technology is that sits at the very edge of the customer network, because the orchestration layer that we have says: you just tell me what you want to allow and what you want to block, source and destination, applications to allow to pass, applications to block, users to allow, users to block, that type of stuff. So I guess my summary would be that we're pretty good at managing multiple vendors now. We've been doing it, like I said, for almost a decade since SD-WAN reared its head. So I don't think it's a big risk for us right now.
Guy Daniels, TelecomTV (31:30):
Great. Well, thanks for handling that one for us, Andy, appreciate it. And there's a second part to this question, a follow-up for either of our other panelists who wants to address it. Beth, I think I might come to you, if that's okay. It's: what major trade-offs have you already seen, or do you anticipate, between openness and disaggregation and the assurance of performance, security and operability for infrastructure?
Beth Cohen, Verizon (31:58):
The old open source versus proprietary systems question that always rears its head. I'm a big proponent of open source, and I think many of the telcos have embraced it wholeheartedly. In the industry, everybody's using OpenStack, for example, and Kubernetes has been making big inroads to support container-type applications. And of course many of the telcos are supporting the O-RAN initiatives through the O-RAN Alliance. So again, that helps de-risk relying on a single vendor. Although, as Andy mentioned, with the advent of SD-WAN, even though there's been a fair amount of consolidation in SD-WAN, there are still a few big vendors out there that the majority of customers are using. But we telcos are used to having a pretty heterogeneous vendor environment. So yes, our BSS and OSS systems and orchestration systems do cross over those multiple vendors as much as possible, and again, that de-risks our investments in supporting and managing these systems.
Guy Daniels, TelecomTV (33:36):
Great. Thanks very much, Beth. Yeah, it is a question that crops up quite regularly, and has been for a number of years, and maybe we as an industry haven't quite answered it sufficiently or made it sufficiently clear, because it does come up on a regular basis. Well, we've probably just got time for another question; we had one come in a few minutes ago, actually, and I think this is really a follow-up to question three that we asked earlier, about GPUs, where we got onto the subject of AI-RAN and AI-for-RAN. Anyway, this question asks, and I'll just read it out: RAN as a service, what do you think? Or do you think some big tech will finally make this a commodity? Do we think RAN as a service may come to market at some point and be a viable business proposition? Do we have a taker? Kerem, thank you very much. I was hoping you'd say yes to this one and have a stab at the answer. What do you think?
Kerem Arsal, Omdia (34:44):
I think not in the foreseeable future, but it's a reasonable suggestion, because this sort of suggestion also rears its head again and again; we are used to this. Especially as disaggregation and openness do drive this possibility that somebody who's more software oriented can actually take care of and orchestrate multiple players. And in fairness, there are examples in private network environments, private 5G environments for instance, where the whole service is managed. So it's not like it doesn't exist in any form. But the performance, security and reliability demands from governments, and the role of critical infrastructure, that just sounds to me like too important a thing to have one centralized technology player leasing out, if this is what we're talking about, of course, this more wholesale approach.
Guy Daniels, TelecomTV (35:56):
Thanks, Kerem, for addressing that one. That's terrific. Yes, I think we first heard this idea a few years ago, but it didn't really get much traction back then. Beth, what are your thoughts on this one?
Beth Cohen, Verizon (36:09):
There are some precedents related to towers, the microwave towers and the cell phone towers, in the US at least, where there are a number of players that manage the towers for the telcos. So the telcos have kind of gotten out of the tower management business. On the other hand, towers are typically shared resources: if you've ever seen a telco tower, there's usually a ton of antennas hanging off it, and all the different major players are putting their antennas on the same tower. So in that sense, that makes more sense. RAN as a service just doesn't make sense to me, as Kerem says, from a reliability and management point of view; I don't even see it as a viable business model, because it's really just part of the infrastructure of the telco. And particularly when telcos are going to be getting into network slicing and segmentation and other advanced technologies that are coming down the pike to support new services out to our customers, I think the telcos are going to want to hold on to managing the RAN hardware itself. Obviously we're dependent upon the vendors. I should point out that the O-RAN Alliance came out of the idea that the telcos wanted to move away from a completely proprietary approach to the RAN hardware and toward more software-based and more open-source-based models, to eliminate the need to support six different proprietary systems, which is always expensive for the telcos. So yeah, I think in general it's just not a viable business model.
Guy Daniels, TelecomTV (38:19):
Okay, thanks very much Beth, and thanks for the question. It's a good question. It's sort of thing we love to throw to our guests on these summit programs. So many thanks for sending that in. Well, we are out of time now, so just like to thank all of our guests who returned and joined us for this live program. Do remember to send in your questions for tomorrow's live q and a show as soon as you can. Don't leave it too late. Get them in now and please do take part in the poll. There is still time for you to have your say and you can find the full agenda for day two of the summit on the telecom TV website. And it includes a panel discussion on new cloud architectures, private clouds and data lakes. And remember you can watch that on demand from tomorrow morning and for our viewers watching live. In case you missed today's earlier panel discussion, we are going to broadcast it in just a few moments, so don't go away. We'll be back tomorrow with our final live q and A show, same time, same place. Until Len, thank you very much for watching and goodbye.
Hello, you are watching the NextGen Digital Infra Summit, part of our year round DSP Leaders coverage. And it's time now for our live Q and a show. I'm Guy Daniels and this is the first of two q and a shows. We have another one at the same time Tomorrow. It's your chance to ask infrastructure related questions to our guests, especially if your questions are connected to AI and inferencing. Because as part of today's summit, we featured a panel discussion that looked at how telcos can support AI factories and distributed inference. Very topical subject. Now, if you miss the panel, don't worry because we will re-broadcast it straight after this live q and A program, or you can watch it anytime on demand. If you haven't yet sent in a question, then please do so now, use the q and a form that you'll find on the website. Well, I'm delighted to say that joining me live on the program today are Andy Linham, principal strategy manager for Vodafone Group. Beth Cohen, product strategy consultant for Verizon and Kerem Arsa, senior principal analyst at Omnia.
(01:54):
Hello everyone. It's good to see you all. Thanks for returning back and coming and joining us on the live q and A show. So let's get straight to our first audience question that we've received and I'll read it out to you. The question asks is the emergence is the emerging importance of digital and data sovereignty proving to be the leading use case for telco Edge AI because autonomous vehicles feels like a dead end and other enterprise applications are being slow to emerge. Well, that's what our viewer thinks. So let me put that to our guest. Andy, I'm going to come across to you first. Do you think it's a leading use case?
Andy Linham, Vodafone Group (02:42):
Now I'm going to be really picky. I don't think it's a use case at all. I think it's a model for deployment that affects lots of different use cases. That sounds like I'm being a real kind of pernickety type of person, but I think it's a very important distinction to make. So we've got a lot of interest in sovereignty. We've got a huge amount of customers now, far more than we ever have done before, who are interested in where their data's located, where the teams are that manage it, where the technology comes from. All these sorts of things are now much higher up CIOs' agendas than they used to be. I think a lot of it depends upon geographically where you sit. There are different priorities in different parts, even within Europe; if you look at different countries such as Germany versus France versus the UK, we have a different perspective.
(03:30):
I think sovereignty will drive a lot of business towards telcos because we do have that sovereign capability. We do have lots of assets in country. We have lots of people in country that can monitor them and manage them. I don't think it's the use case. I think the use cases will emerge. I think there will be things like autonomous vehicles. You will have lots of computer vision and video analysis for manufacturing and distribution centers. I would agree with the question that they are slow to develop, slow to emerge, but I don't think we're at the point yet where we can say that sovereignty is the use case that's going to drive edge AI. And if I take a country like the UK for example, the UK is realistically a very, very small country. If I look at low latency, I could probably put one node in the middle of the UK, in Birmingham, and I could have relatively good performance through to it from just about every other corner of the country. If I look at the US, or if I look at Germany, big places, that's where I need regional edge-based services for a lot of these different types of applications. So to summarize the answer, I think sovereignty is really important and I agree the use cases aren't there yet, but I think they will come in the next year or two.
Guy Daniels, TelecomTV (04:42):
Great, thanks very much, Andy. We do overuse the term use case a lot in this industry, I think, and I do like the idea of it being a model for deployment. That's something we're going to pick up on next month when we look at digital sovereignty in a lot more detail. But for now, Kerem, I'm going to come across to you. Do you agree with the sentiment of the question and what Andy's saying, or do you have some different views?
Kerem Arsal, Omdia (05:03):
Well, first of all, I do agree with Andy's points that sovereignty is indeed a model rather than a use case. I mean, it is an opportunity at the moment simply because in some markets, hyperscalers are not local entities, and because of that it does create some window of opportunity for telcos to create their own AI data centers and to serve enterprises. But I think again, this time I'm going to pick on a different word which is going to be edge because again, I agree with Andy in that in some markets you don't necessarily have to think of edge as these hundreds of thousands of distributed nodes that do computing, especially in the world of ai. And within the opportunity of sovereignty, probably just being able to build your own AI data centers should be a big accomplishment. And we don't see many announcements coming from some of the operators, at least in Europe for instance.
(06:02):
Some of them are very actively building, but some big ones that we would have expected to hear from have not yet made such announcements. So that's one thing. And secondly, I agree with Andy that there is a continuum of sovereignty and what it means in practice. So that is something that telecom operators will need to learn: which data needs to be held where to satisfy compliance, how verticals differ, how countries differ, which applications versus which data. Those sorts of formulas are things that telecom operators will slowly need to learn. And one last point I want to make about sovereignty: in the beginning I said that hyperscalers are not local entities in some markets, like most of the European Union or the UK or some Asian markets. However, hyperscalers do learn very quickly and they're extremely adaptable. So this window of opportunity, if it exists, is not going to be permanently open, because the hyperscalers, the big public cloud players, are learning fast, they're striking big deals, and they're also building data centers of their own in these areas as well.
Guy Daniels, TelecomTV (07:15):
Yeah, absolutely. Kerem, thanks very much for those comments. That's great. And Beth, let's come across to you and get your thoughts on this first viewer question.
Beth Cohen, Verizon (07:25):
I'm going to pick up on what Andy and Kerem said. So it isn't a use case, but it is an infrastructure support, or rather it's a support use case. And I think the telecoms are in a pretty good position to support this. I disagree with Kerem that it's a new understanding. I think that the telecoms have been supporting data sovereignty for a long time for general use, so it's not specific to AI. And obviously telecoms are always going to have more points of presence than even the large hyperscalers, just because of the nature of the business; the models are different. However, data sovereignty is not just in-country. I think a lot of it relates to how the AI models are using the data and protecting the data from being used. And I think a lot of companies and individuals are going to be moving away from letting the AI engines take, and I'll use that term a little pointedly, take their data and ingest it into the engines. So I think data sovereignty is a rapidly changing concept. But I agree that the telcos are certainly in a good position to support data sovereignty today; it's just not particularly specific to AI.
Guy Daniels, TelecomTV (09:15):
Okay. Thanks very much for those points, Beth. That's terrific. We covered a lot of ground there on that opening question. And as I mentioned a bit earlier, we do have an event in December where we're looking at data sovereignty in a lot more detail. So we'll get some more nuanced views and ideas coming out of that, I hope, and you'll be able to catch up later on TelecomTV with the highlights of that. So thanks very much everyone. Let's move on to our next viewer question. This question asks: as we build up compute facilities and increase power demands, which technology or operational innovations look most promising for keeping energy and cooling costs under control at these high-density telco sites? Well, let's go straight back to Beth, I think, for this one. You do spend quite a bit of time, Beth, around at conferences looking at emerging technologies and emerging companies, and seeing what the latest innovations are. What insights have you got for this?
Beth Cohen, Verizon (10:17):
So I recently attended the OpenInfra Summit in Paris a couple of weeks ago, and that was a very hot topic. There's a number of big telecoms around the globe that are investing in AI technology to address exactly this energy consumption and efficiency issue. Personally, I think it's a great use for AI, because AI doesn't have to be perfect; you don't have to be a hundred percent right to still reduce your energy consumption. So there were a lot of use cases, and there's a number of companies that are already invested in using this. It's a way of reducing your consumption by using traffic patterns and predictive analysis to determine when you can turn off nodes or put them in low-power mode, to match actual live use and reduce energy consumption. So I think it's actually a great use case for the telcos, and I suspect most of them will be using it over the next 12 to 18 months.
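As a rough illustration of the kind of predictive shutdown Beth describes, here is a minimal, hypothetical sketch. The hourly traffic profile, the threshold and the power states are all invented for illustration; a real deployment would use a trained forecasting model and vendor-specific power-management APIs rather than a fixed table.

```python
# Hypothetical sketch of predictive energy saving: use a traffic
# forecast to decide when a node can drop into low-power mode.
# All figures below are invented for illustration only.

# Hourly traffic forecast in Mbps, standing in for a learned model:
# busy during the day, quiet overnight.
HOURLY_FORECAST_MBPS = {hour: 800 if 8 <= hour <= 22 else 50 for hour in range(24)}

LOW_POWER_THRESHOLD_MBPS = 100  # illustrative cutoff, not an industry figure

def plan_power_state(hour: int) -> str:
    """Return 'active' or 'low_power' for a node at the given hour."""
    if HOURLY_FORECAST_MBPS[hour] >= LOW_POWER_THRESHOLD_MBPS:
        return "active"
    return "low_power"

# Build a 24-hour schedule: overnight hours become low-power candidates.
schedule = {hour: plan_power_state(hour) for hour in range(24)}
```

In practice the decision would also factor in live measurements and some hysteresis, so nodes aren't flapped on and off around the threshold.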
Guy Daniels, TelecomTV (11:45):
Great. Thanks very much, Beth. Yeah, we look forward to seeing more evidence of that from telcos, and seeing what the actual benefits and results are. That's going to be good to watch out for. Andy, let's come across to you. What are your thoughts on this question?
Andy Linham, Vodafone Group (11:59):
There's one thing that I think is really important, actually, and that's where you actually put the AI data centers and the compute factories. If you put them somewhere like Scandinavia, up in Norway or Sweden, where you've got access to lots of naturally cool air and lots of hugely renewable energy sources, it's much more likely you can generate return on investment much, much faster, because you have access to all this abundant power through hydroelectric. You've also got a much lower ambient air temperature, so the amount of cooling you need is reduced. So as much as we can use technology to help us reduce the costs and the implications of the power that we need, I think we can also be really, really smart about where we put these data centers, and use what nature's given us to make these things as efficient as they can be with the current generation of technology.
Guy Daniels, TelecomTV (12:46):
Yeah, absolutely. Thanks very much, Andy. And we're a few years away from putting our data centers in space, as I've been reading about this week, but I'm sure in a few decades that might happen. Kerem, let's come across to you for your thoughts.
Kerem Arsal, Omdia (13:00):
Just one thing that I'm wondering is whether these requirements are going to change the way that certain nations or certain industries interpret sovereignty as well. Because if data centers are indeed moving more towards tier-two markets, or maybe to certain countries where there are better natural cooling conditions, or cheaper electricity, cheaper energy costs, then when that sort of thing starts happening, I wonder whether there's going to be an effort to find workarounds, so that maybe a data center in a market that is not currently within the definition of sovereignty can be included in it, if you know what I mean. So perhaps there's going to be one or two markets that can satisfy the sovereignty requirements of the Gulf region, for instance, or maybe some countries that serve Europe better than multiple countries in Europe satisfying their conditions. So I'm just thinking whether that could be a possible evolutionary path.
Guy Daniels, TelecomTV (14:07):
Yes, a good point. And if you don't mind me saying, I think it's a really good question, because I've just been putting together talking points for the December event and that's one of them. So I'm really hoping we're going to get some views around that, as I think we're seeing some interesting trends. Terrific. Thanks very much everyone. Let's therefore move on to the next question we've had in today, so I'll read this one out. This one's interesting: given current GPU concentration, how are operators mitigating single-vendor risk and protecting CapEx through multiple upgrade cycles? Because how long does a GPU last? Three years or less, which is way shorter than CPUs. Okay, so Andy, can we come across to you to get some thoughts on this one, and some analysis of GPUs and what operators are doing?
Andy Linham, Vodafone Group (15:04):
Yeah, of course. So yeah, we've looked at a similar sort of question, as you'd have expected us to. The horror stories that you sometimes hear about GPU life cycles are almost always linked to training workloads. That's where you've got a vast amount of data and you are processing on those GPUs 24 hours a day, multiple days on the trot. So you are just constantly hammering the GPUs, and you will almost literally wear them out to a certain degree. But if you're looking at something like an inferencing workload, it's much more variable in volume, and where the GPU has some idle cycles in between, life cycles are going to be extended. The reality is we've not had this type of data center GPU available to us for 3, 4, 5 years yet, to be able to actually realistically say this is how long it's going to last.
(15:54):
We don't have that kind of scale that we could use to say that's a statistically viable set of results, and therefore I can say yes, it will be 3, 4, 5 years, whatever it happens to be. I take it the question around single vendor is talking about NVIDIA, and there's absolutely no doubt they have got the most widely used GPU in the market. The joys of working for large companies are that you have strong relationships with lots of other technology partners. So from our perspective, we have a really strong working relationship with NVIDIA, as I'm sure every single other telco does, because there's a big movement from NVIDIA towards AI-RAN as an example. So there's a lot of conversations going on around whether you use AI to optimize frequencies, the allocation of spectrum, that sort of stuff within the RAN. So as much as there is a risk around single vendors, it is probably no different to the risk we had when every single networking router was either Cisco or Nokia.
(16:52):
We had a time when probably 95% of our entire enterprise business unit was running Cisco routers. If we'd had a similar problem with access to stock, we would've been in a similar position to where we are with NVIDIA today. So we're actually pretty good at managing these types of challenges. We have lots of processes in place, and we have really highly skilled procurement teams who understand how to balance supply and demand as effectively as they possibly can. So it's not something I'm worried about at the moment. In terms of life cycles and CapEx, I think Beth and Kerem can give you a really strong answer on that, but from my perspective, in terms of how we approach it, it's around partnering. There are many ways that we can partner with different providers: it could be the neoclouds, it could be the hyperscalers, it could be NVIDIA themselves, to look at how we can use their GPUs, how we can optimize their GPUs, and how we can connect their GPUs. And that's probably the best area for us to focus on as a telco.
Guy Daniels, TelecomTV (17:50):
Fantastic. Thanks very much for those insights, Andy. In a way, that question had a couple of different angles to it, so it's interesting to bring them up and separate them like that. Kerem, let's come across to you next. The question asked about the single-vendor reliance on NVIDIA and also about the longevity and usage of GPUs. What's your take?
Kerem Arsal, Omdia (18:11):
So first of all, I also think that maybe we're not actually there yet, at least for telcos. For most telcos, this is the beginning of the road, so the investment timing is crucial for them. And I think there's a realization that at the moment we are going to go through a phase where the economics will change. The capabilities of other XPUs, be they NPUs or CPUs or other accelerators, are going to improve, and the models themselves are improving too. They're getting better, they're getting smaller, they're getting more purpose-specific at times. So basically, probably within the next two years, we are going to be looking at a richer environment when it comes to the hardware infrastructure. Obviously we already have a very fertile and vast variety in the applications layer. So while today looks like a single-vendor environment, I think most of the operators are actually trying to work around this by not overly investing at the moment and being careful about it.
(19:18):
So maybe that's also one of the reasons why, as I mentioned earlier, some of the operators have not committed, or do not necessarily want to commit, to large AI data centers and want to partner. To me, this question would probably apply more to a company like what's called a neocloud these days, where they've already invested quite heavily in GPUs and they're probably very interested in how the economics and the demand and supply are going to pan out. So for telcos that, in my opinion, really want to work and invest in GPU as a service, they probably would benefit from a lot of partnerships. And Beth, maybe you'll have something to say on that regarding Verizon's partnerships. But also, if they want to do their own GPU as a service, I think it would be very important for them to find some anchor tenants, maybe a big AI SaaS company or a big AI model developer that can commit to a volume of demand before they invest.
Guy Daniels, TelecomTV (20:27):
Great, thanks very much, Kerem. And a perfect link there to Beth, so we'll come straight across. Beth, we'll have your views on that, and as Kerem was saying, the GPU as a service angle is one of them.
Beth Cohen, Verizon (20:39):
Yeah, I want to kind of turn the question a little bit on its head. GPU as a service I think is something that the telcos are pretty well positioned to help with. And of course Verizon's AI Connect GPU as a service has been announced, so we're partnering with a third party, Vultr, to provide those GPUs. So we've made our investment, obviously. But I think the question is not necessarily just about the telco investment in AI-RAN, which I think is still pretty close to being an emerging technology that people are still working on; I'd like to focus on our customers' need for GPU as a service. GPUs are expensive, and therefore companies want to optimize the use of those GPUs. Getting back to what Andy said about hammering the GPUs, even if you're only doing inference, if you're using a cloud vendor or a telco to use those GPUs, there's a lot of temptation to optimize the usage of those GPUs, since they're so expensive. So I'd say that even if you're using an inference engine, you're probably going to want to be using it in a cloud way.
(22:33):
And I think the telcos are in a good position to support this, at least in the short term, because these GPUs can be put not just in the core data centers that the cloud providers are in, but distributed across the network, to minimize the workload and the latency based on where the data is coming from. But the providers, or whoever's buying the GPUs and investing in them, are always going to want to optimize their use, so they're always going to be run as hard as possible. I cannot speak to the longevity of these GPUs. I actually have no knowledge about that, and I think because they're relatively new, we don't really know.
Guy Daniels, TelecomTV (23:31):
No, absolutely. And thanks very much for those points, Beth. That's great. And Kerem, did you want to come back in on this one?
Kerem Arsal, Omdia (23:38):
Yeah, just a little bit to add on how the approach to GPUs within the context of RAN has changed quite a lot within the last two years. And this is also, I think, how NVIDIA's approach has been too, because when they first began the journey, the ARC-1 compute was the main flagship server there, and that compute's promise was basically: I can do a lot of GPU work for you, but at the same time I can work as GPU as a service for third parties, so you can monetize that. But I think because of these high prices, and because of the inability to deploy that sort of compute platform in certain RAN architectures, they also turned their attention more towards the RAN-specific use cases, or what the AI-RAN Alliance would call AI for RAN. So now the developments that are coming from the NVIDIA side that apply to RAN are becoming increasingly more realistic, with ARC-Compact and the more recent ARC-Pro. They're actually becoming more AI-for-RAN hardware rather than GPU as a service. So I just wanted to make sure that the audience is aware that GPU as a service on the RAN equipment can happen, but we are still a long way away from that. Not necessarily technically, but commercially for sure.
Guy Daniels, TelecomTV (25:17):
Yeah, absolutely. Thanks very much for pointing that out. That's good to add. Well, before we move on to another question, it is time now to check in on our audience poll for the NextGen Digital Infra Summit. And the question we are asking this week is: what are the most important investment areas to create a future-proof telco infrastructure? And there you go. You can see the real-time votes appearing to my right there. Well, very reassuring to see support for AI factories, and AI edge compute getting a lot of attention. That's a relief. If you have yet to vote, then please do so. These figures may well change, and we'll take a final look at the voting during tomorrow's live Q&A show. Right, we still have time for a few more questions. Oh, this is the question: there's been a lot of interest recently from telcos in timing-optimized LLMs. What does this mean in terms of infrastructure and what changes to today's timing protocols or standards may be required? This is a tricky one, and we certainly saw this emerge at our Dusseldorf event last month. Very briefly, Beth, can I come across to you first on this one, and maybe try and get a good understanding of the issues involved here with LLMs?
Beth Cohen, Verizon (26:47):
Yeah, I'm going to interpret this as telco use of AI to solve latency issues and to optimize networks. And I'm not necessarily sure that LLMs are the right answer. Certainly the telcos have been relying more on machine learning for optimizing the performance of the traffic to reduce latency, because obviously customers are always looking to reduce latency. And of course many of the use cases rely on reduced latency as well, particularly drones and, to beat a dead horse, self-driving cars and other types of applications that are latency-sensitive. So that's my interpretation, and I don't think LLMs are particularly the right technology to apply to that particular use case.
Guy Daniels, TelecomTV (27:55):
No. Well, thanks very much for those comments and observations, Beth. It's an area that I think we're going to have to wait and follow, and get more industry development in the months ahead, and see how it all pans out. But we will be watching this area with some interest. Well, here's another question that we received today. And Andy, it's one specifically addressed to you. The question asks: you recently outlined a core, edge, access, cloud, endpoint vision for connectivity. How would you reconcile multi-vendor ecosystems with the traditional stability, performance and security needs of telcos?
Andy Linham, Vodafone Group (28:37):
So this is obviously someone that had the absolute pleasure of watching me give a presentation previously. We talk around the future of connectivity quite a lot within Vodafone, and we kind of break it out into those five key areas: what's going to happen around core, around edge, access, cloud and endpoint. So to come to the question, if you think back a decade ago, we had effectively three vendors to worry about in the fixed network: Cisco, Juniper, Nokia. On the RAN side we had Ericsson and Huawei. It's a really, really small ecosystem of technology partners that we had to work with, and because of that, you build incredibly strong relationships, with teams of people on either side working together to perform development tasks, to perform deployment, to produce architectural blueprints, this type of stuff. We're now, ever since SD-WAN kind of entered the enterprise networking space, in a much more diverse supplier environment.
(29:36):
So the three Vs of SD-WAN, Viptela, Versa and VeloCloud, entered the market with technologies that were very different to what Cisco and Juniper had in particular. So yeah, we had to diversify at that point, but that was almost 10 years ago now. So we're pretty good at managing lots of different partners and lots of different suppliers, and working across them to get the best for our customers. In terms of security, I'm assuming this is linked to things like patches and maintenance upgrades, and making sure you've got homogeneous policies applied across all of your different types of devices, because they're all different. And yes, there is a small element of risk there, because you're managing multiple different types of configuration. But one of the things we've invested quite a lot of time and effort in within Vodafone is to build our own orchestration layer. So we have a specific set of tools that we've built that run across every single SD-WAN vendor that we have on the portfolio.
(30:35):
So it works with Fortinet, with Cisco, with Meraki, with VeloCloud, and it enables us to have a single kind of configuration template that we can push out across any different type of SD-WAN controller. So from that perspective, we tend to de-risk things by using an orchestration layer to abstract that level of differentiation between the different vendors, so that the engineers don't need to see it. In a lot of ways, we shouldn't need to know what the underlying technology is that sits at the very edge of the customer network, because the orchestration layer that we have says: you just tell me what you want to allow and what you want to block, source and destination, these types of things; applications to allow, applications to block, users to allow, users to block, that type of stuff. So I guess my summary would be that we're pretty good at managing multiple vendors now. We've been doing it, like I said, for almost a decade, since SD-WAN reared its head. So I don't think it's a big risk for us right now.
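The orchestration idea Andy outlines, one vendor-neutral template translated into per-vendor configurations, can be sketched roughly as below. The vendor names, field names and adapter logic here are entirely made up for illustration; they do not reflect any real Fortinet, Cisco, Meraki or VeloCloud API, nor Vodafone's actual tooling.

```python
# Hypothetical sketch of a vendor-abstracting orchestration layer:
# one neutral policy template, translated per vendor by small adapters.

POLICY = {
    "allow_apps": ["voice", "crm"],  # applications to allow
    "block_apps": ["p2p"],           # applications to block
}

def to_vendor_config(vendor: str, policy: dict) -> dict:
    """Translate the neutral policy into a made-up vendor-specific shape."""
    if vendor == "vendor_a":
        # This fictional vendor takes flat permit/deny lists.
        return {"permit": policy["allow_apps"], "deny": policy["block_apps"]}
    if vendor == "vendor_b":
        # This fictional vendor takes one rule object per application.
        return {"rules": [{"app": a, "action": "allow"} for a in policy["allow_apps"]]
                       + [{"app": a, "action": "block"} for a in policy["block_apps"]]}
    raise ValueError(f"no adapter for vendor: {vendor}")

# Push the same template to every controller type on the portfolio.
configs = {v: to_vendor_config(v, POLICY) for v in ("vendor_a", "vendor_b")}
```

The point of the design is that the engineer only ever edits the neutral policy; the adapters hide which controller actually sits at the customer edge.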
Guy Daniels, TelecomTV (31:30):
Great. Well, thanks for handling that one for us, Andy. Appreciate it. And there's a follow-up question, a second part to this question, for either of our other panelists who wants to address it. And Beth, I think I might come to you if that's okay. What major trade-offs have you already seen, or do you anticipate, between openness and disaggregation and the assurance of performance, security and operability for infrastructure?
Beth Cohen, Verizon (31:58):
The old open source versus proprietary systems question that always rears its head. I'm a big proponent of open source, and I think many of the telcos have embraced it wholeheartedly in the industry. Everybody's using OpenStack, for example, Kubernetes has been making big inroads to support container-type applications, and of course many of the telcos are supporting the O-RAN initiatives through the O-RAN Alliance. So again, that helps de-risk relying on a single vendor. And as Andy mentioned, with the advent of SD-WAN, even though there's been a fair amount of consolidation, there are still a few big vendors out there that the majority of customers are using. But the telcos are used to having a pretty heterogeneous vendor environment. So yes, our BSS and OSS systems and orchestration systems do cross over those multiple vendors as much as possible, and again, that de-risks our investments in supporting and managing these systems.
Guy Daniels, TelecomTV (33:36):
Great. Thanks very much, Beth. Yeah, it is a question that crops up quite regularly and has done for a number of years, and maybe we as an industry haven't quite answered it sufficiently, or made it sufficiently clear, because it does come up on a regular basis. Well, we've probably just got time for another question. We've just had a question in a few minutes ago actually, and I think this is really a follow-up to question three that we asked earlier, about GPUs, where we got onto the subject of AI-RAN, AI in the RAN and AI for RAN. Anyway, this question asks, and I'll just read it out: RAN as a service, what do you think? Do you think some big tech will finally make this a commodity? Do we think RAN as a service may come to market at some point and be a viable business proposition? Do we have a taker? So Kerem, thank you very much, I was hoping you'd say yes to this one and have a stab at this answer. What do you think?
Kerem Arsal, Omdia (34:44):
I think not in the foreseeable future, but I think it's a reasonable suggestion, because this sort of suggestion also rears its head again and again. We are used to this, where especially things like disaggregation and openness do drive this possibility that somebody who's more software-oriented can actually take care of and orchestrate multiple players. And in fairness, there are examples in private network environments, private 5G environments for instance, where the whole service is managed, right? So it's not like it doesn't exist in any form. But the performance, security and reliability demands from governments, and the RAN's role as critical infrastructure, that just sounds to me too important to have one centralized technology player leasing it out, if this is what we're talking about, of course, this more wholesale approach.
Guy Daniels, TelecomTV (35:56):
Thanks, Kerem, for addressing that one. That's terrific. Yes, I think we first heard this idea a few years ago, but it didn't really get much traction back then. Beth, what are your thoughts on this one?
Beth Cohen, Verizon (36:09):
There are some precedents related to towers, the microwave towers and the cell phone towers, in the US at least, where a number of players manage the towers for the telcos. So the telcos have kind of gotten out of the tower-management business. On the other hand, towers are typically shared resources. If you've ever seen a telco tower, there's usually a ton of antennas hanging off it, and all the different major players are putting their antennas on the same tower, so in that sense it makes more sense. RAN as a service just doesn't make sense, I think, from, as Kerem says, a reliability and management standpoint. I don't even see it as a viable business model, because the RAN is really just part of the infrastructure of the telco. And particularly as telcos get into network slicing, segmentation and the other advanced technologies that are coming down the pike to support new services for our customers, I think the telcos are going to want to hold on to managing the RAN hardware themselves. Obviously we're dependent upon the vendors. I should point out that the O-RAN Alliance came out of the idea that the telcos wanted to move away from a completely proprietary approach to the RAN hardware and toward more software-based and open-source-based models, to eliminate the need to support six different proprietary systems, which is always expensive for the telcos. So yeah, I think in general it's just not a viable business model.
Guy Daniels, TelecomTV (38:19):
Okay, thanks very much, Beth, and thanks for the question. It's a good question, and the sort of thing we love to throw at our guests on these summit programs, so many thanks for sending it in. Well, we are out of time now, so I'd just like to thank all of our guests who returned and joined us for this live program. Do remember to send in your questions for tomorrow's live Q&A show as soon as you can. Don't leave it too late, get them in now, and please do take part in the poll; there is still time for you to have your say. You can find the full agenda for day two of the summit on the TelecomTV website, and it includes a panel discussion on new cloud architectures, private clouds and data lakes. Remember, you can watch that on demand from tomorrow morning. And for our viewers watching live, in case you missed today's earlier panel discussion, we are going to broadcast it in just a few moments, so don't go away. We'll be back tomorrow with our final live Q&A show, same time, same place. Until then, thank you very much for watching and goodbye.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Panel Discussion
The live Q&A show was broadcast at the end of day one of the NextGen Digital Infra Summit. TelecomTV’s Guy Daniels was joined by industry guest panellists for this question and answer session. Among the questions raised by our audience were:
- Is the emerging importance of digital sovereignty proving to be the leading use case for telco edge AI?
- Which new technologies look most promising for keeping energy and cooling costs under control at high-density telco sites?
- Given current GPU concentration, how are operators mitigating single-vendor risk and protecting capital expenditure?
- Why the recent interest in timing-optimised LLMs and what does this mean in terms of infrastructure?
- How do you reconcile multivendor ecosystems with the traditional needs of telcos?
- Is there a business case for offering RAN-as-a-service?
First Broadcast Live: November 2025
Participants
Andy Linham
Principal Strategy Manager, Vodafone Group
Beth Cohen
Product Strategy Consultant, Verizon
Kerem Arsal
Senior Principal Analyst, Omdia