Guy Daniels, TelecomTV:
Hello, you are watching the Next Gen Digital Infra Summit, part of our year-round DSP Leaders coverage. I'm Guy Daniels, and today's discussion looks at how telcos can support AI factories and distributed inference, and how the emergence of AI factories (next-generation data centers) is impacting governments, telcos and industry. And with inference moving closer to the edge, can telcos create AI-optimized transport fabrics and edge nodes that turn connectivity into new service opportunities? Well, I'm delighted to say that joining me on the program today are Andy Linham, who is principal strategy manager at Vodafone Group; Beth Cohen, product strategy consultant for Verizon; and Kerem Arsal, who is senior principal analyst at Omdia. Hello everyone. Good to see you all. Thanks so much for joining us on the program. Lots to talk about today, as always, but first of all: what new traffic patterns are AI workloads creating, and how are they reshaping backbone and access network designs? Beth, can we come across and get your views on this one first?
Beth Cohen, Verizon (01:49):
Yes, sure, I'd be happy to talk about it. Of course, I'm coming at it from the telco perspective, where AI is still definitely new. Not everybody's using it, but it is definitely starting to impact the traffic patterns. What's important to understand about AI is where the data is and where the data needs to be processed. So when the data is centralized in the data center (let's say a big website does analysis of its customer data), it's going to be centralized within the core. However, what we're seeing is more distributed workloads, where the data is not necessarily in the core. It's originating, let's say, on an oil platform in the middle of the ocean, or wherever. So the data needs to be either processed locally, at least partially, and then centralized, or it's increasingly being processed locally to get those low-latency feedback loops.
(03:18):
I'll give an example of robotic surgeries and using AI to augment them: obviously you need a very low-latency feedback loop, and you want to handle that data locally, but you also want to take at least some of that data and pull it back into your central location. We're also seeing customers increasingly using multi-cloud solutions (this has been true for a while), and so they want to transfer data between those clouds. So we're seeing requirements for very, very large pipes between data centers as well. Those two trends are sort of happening at the same time.
Guy Daniels, TelecomTV (04:08):
Great. Thanks very much, Beth. And as you say, everything we're talking about here is from the telco perspective, which might not necessarily be the same for other sectors. And just coming back to you, Beth: these two trends you mentioned, are they more an evolution of what we've been seeing? Is there nothing really radical and transformative here?
Beth Cohen, Verizon (04:30):
I would say for the most part it's more of an evolution than a revolution. Certainly there's the need for more and more giant pipes, more dark fiber. One thing I am seeing is the need and requests for dark fiber installations in areas that don't traditionally have fiber, and those are obviously quite expensive installations; anytime you run fiber it's expensive. So that's a new trend. The telcos have traditionally run their networks along pipelines, railroad rights of way and highways, those types of routes, and now we're seeing requests for very, very large pipes between data centers and colo spaces. So that is a new trend. I'm not sure whether I would call it revolutionary or evolutionary; it's sort of a mix of both, if you will.
Guy Daniels, TelecomTV (05:38):
Great, thanks very much, Beth. And Andy, what are you seeing from your perspective over at Vodafone Group?
Andy Linham, Vodafone (05:45):
So I think what you've got is a mix of trends. Some are linked to the underlays, the actual circuits that carry the data, and as Beth said, that's all about growth and capacity for the big data center locations. The other one we need to be aware of is the overlay and how we're actually having AI services talk to each other. It's very early stages right now, but if you think about the evolution of things like agentic AI and how the agents will start talking between themselves without needing human intervention, at that point we will have to start thinking quite strongly about how we architect the network to support really ultra-low latency, because a human being is happy to wait a couple of hundred milliseconds whereas an agent might not be. So making sure you've got that core capacity in terms of the circuits, and then linking that into how the overlay can prioritize and adapt to the different needs of agents and whatever agentic AI will become in the future, is going to be really important. There's been great work done by Cisco, now taken over by the Linux Foundation, called AGNTCY, and that really helps define how agents can discover each other and the requirements for interacting. That's going to start filtering down into the core networks, where you start to get a much more responsive core rather than the quite static ones that we've had previously.
Guy Daniels, TelecomTV (06:57):
Yeah, absolutely. That's some very interesting work being done there and, as you say, now taken over by the Linux Foundation. Thanks very much, Andy. Let's move on and develop this a little bit, because I'm interested in where we might draw the line, in terms of economics and technology, between pushing inference to the edge and concentrating processing and GPUs in central AI factory-type campuses, because it appears that these might be two very different strategies for telcos. Kerem, can we come across to you and find out what you are seeing out there?
Kerem Arsal, Omdia (07:35):
Yes, of course. So first of all, I think there is definitely an understanding, almost a consensus, that training is the hard work done by GPUs in centralized locations, and inferencing is often a lighter load and therefore requires a different type of computing environment, perhaps a simpler one. Now, while this is true, and our research also shows that telcos themselves believe this to be true, it is also important to note two things. One of them is the distinction around the simple inferencing of very small models, or even some of the large models that don't necessarily require a lot of heavy compute work. Those are very edge-conducive. So for a telecom operator, for instance, there can appear to be an opportunity to use distributed computing to run inference workloads, but at the same time, end-user devices are also getting better at doing this.
(08:44):
So the telecom operators that we survey and talk to recognize that bit as well. For telecom operators, therefore, the danger is actually: will inferencing skip their network nodes and go straight to the devices, or will they be able to capture that opportunity while the window is still open? The second thing that I wanted to mention is that not all inferencing is equal. If you're talking about relatively simple inferencing models, simple inferencing processing, then yes, this sort of categorization is valid. But if you think about inferencing for reasoning-based models, for instance, then actually there's a lot of heavy load to be carried, and again, GPU environments can work well for those as well. So if I were to simplify the big picture, it's important to recognize that when we say that training is going to be done in centralized locations where there are a lot of GPUs, we should not dismiss those locations as places where inferencing is not going to happen.
(09:59):
So inferencing can indeed also happen in those locations, which are megawatts in scale, and some of them gigawatts, so there's going to be a lot of capacity to do both training and inferencing. The second bit, of course, about whether inferencing will indeed move to the telco edge is something we're still going to have to wait and see. Right now there isn't much of it going on. There are attempts, there are plans to do that, but in my opinion, if telecom operators are smart enough to grow the footprint of their data centers, even if in centralized locations, I think they'll be doing the right thing and capture some of the AI processing opportunities.
Guy Daniels, TelecomTV (10:44):
Thanks, Kerem, that's really interesting, because there's always a danger, isn't there, of generalizing and just saying 'inference' as if it's one thing we can group together. And also this aspect of the edge: are we talking the telco edge, the device edge, the application edge? We've got to be careful. Thanks very much for those observations. And Beth, I'd like to come over to you to pick up on this.
Beth Cohen, Verizon (11:06):
Yeah, I want to pick up on what Kerem said, which has a lot of validity. But another thing is that I think companies are beginning to realize that AI is not static: you need to have the RAG, the retrieval-augmented generation, to go out and get up-to-date information. And so where is that going to get pulled in? Is it going to be pulled in at the device edge, or is it going to be pulled in at the core or the telco edge? That's an architecture consideration that obviously affects the network and affects the traffic. I think that's going to increasingly be an important aspect of any of these AI applications and AI tools.
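For readers unfamiliar with the pattern Beth mentions, here is a minimal sketch of a RAG loop; the corpus, keyword match and stand-in "model" below are toy placeholders, not any operator's system. The point is simply that the retrieval step is a network round trip, and where the store sits (device, telco edge or core) drives the latency and traffic patterns she describes:

```python
# Minimal sketch of a RAG (retrieval-augmented generation) loop.
# Everything here is an illustrative placeholder: the architectural
# question is *where* the retrieval store lives, since each retrieval
# is a network hop between the application and that store.

CORPUS = {  # stands in for a vector store at the edge or in the core
    "outage": "Fiber cut on route A repaired at 09:40 UTC.",
    "latency": "Edge node EU-W1 p99 latency is 12 ms this week.",
}

def retrieve(query: str) -> list[str]:
    # Toy keyword match; a real system would query a vector database,
    # and the placement of that database is the design decision.
    return [doc for key, doc in CORPUS.items() if key in query.lower()]

def generate(query: str, context: list[str]) -> str:
    # Stands in for an LLM call that grounds its answer in freshly
    # retrieved context rather than stale training data.
    return f"Q: {query}\nGrounded on: {' | '.join(context) or 'nothing fresh'}"

print(generate("What is the current latency?", retrieve("current latency")))
```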
Guy Daniels, TelecomTV (11:57):
Great. Thanks very much for that additional insight, Beth, and we'll go over to Andy as well.
Andy Linham, Vodafone (12:03):
Yeah, so one thing that we've seen from, I guess, a global perspective is that there are different ways of treating the data, where it resides and where you do the inferencing, depending on where you are in the world. Within Europe, we've got quite a strong sovereignty movement happening right now, where there's a push to get as much data as possible stored and managed locally. We've got customers out in the United States who are quite happy to run as much as possible in public cloud, because it's typically a US-owned hyperscaler that they're working with. We've also got customers out in Asia Pacific who are looking more to the telco to help them, because they're kind of in between the European and American perspectives, and telcos have a really strong right to compete in those sorts of markets. One thing we need to bear in mind when thinking about where to run a lot of the inferencing is where the data is stored and where the people are, so sovereignty already has a definite role to play depending on where you are in the world.
Guy Daniels, TelecomTV (12:52):
Yeah, thanks very much. It's good to mention that; we're certainly going to hear more about digital sovereignty at our event in December, and it's important to incorporate that in this particular discussion. Well, let's look at the question of power and energy, should we, because this cropped up several times during last month's AI-Native Telco Forum. We're reading at the moment that the five-megawatt GPU block is currently being regarded as the Goldilocks unit of today's AI era: it's big enough to matter and make a difference, but small enough to fit into existing sites and be portable. Is it viable for telcos, who already own this power-dense real estate, to turn it into high-margin, low-latency AI zones rather than just maybe slightly dusty old static retail locations? Andy, I'm putting you on the spot here, but any thoughts on the viability of getting more from these assets?
Andy Linham, Vodafone (13:58):
Yeah, of course. Although I take issue with the dusty part; I think we do have quite a well-maintained data center estate. But you're right, there has been a real step change in terms of the power requirements that are coming from the new GPU providers. When we engineered our data centers maybe 20 years ago, we put in around six kilowatts per rack at a lot of the different sites. We're now looking at between 50 and 60 kilowatts per rack to get even a basic GPU cluster running. So five megawatts for some sites is aspirational at best, but you are going to have those bigger nodes, those kind of aggregation nodes, where it absolutely becomes a viable option. One of the things that we need to balance, though, is that as an industry, telcos have been moving more and more towards hosting their points of presence in places like colo centers: Equinix, Digital Realty, those types of providers.
(14:47):
If we do that, you have an abundance of power, but you are effectively reselling someone else's capacity, so the margins are much thinner. In terms of where the data centers get built now, it used to be that they would follow the people or the network, but now they follow the power. So we have lots of our local markets looking at where to put their investments, whether they talk to some of the neocloud providers like Nscale and CoreWeave or whether they go and do it themselves. Like I say, a lot of it is based upon power. The five-megawatt Goldilocks range is probably a valid estimate, I think, in terms of how you get that trade-off of meeting both training and inferencing workloads. If you're happy to go for inferencing only, you can get there with a much less dense power feed, but if you want to go after those bigger, more compute-intensive training workflows, five megawatts is kind of your minimum viable power supply, I think.
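As a rough sanity check on those figures, the arithmetic below uses Andy's 6 kW and 50-60 kW per-rack numbers; the PUE overhead factor is an illustrative assumption, not a Vodafone figure:

```python
# Back-of-the-envelope sizing for a 5 MW "Goldilocks" GPU block.
# Rack densities are the figures quoted above; the PUE is an
# illustrative assumption for cooling and other facility overhead.

SITE_POWER_MW = 5.0          # total facility power budget
PUE = 1.3                    # assumed power usage effectiveness
LEGACY_KW_PER_RACK = 6.0     # ~20-year-old telco data center design
GPU_KW_PER_RACK = 55.0       # midpoint of the 50-60 kW quoted above

it_power_kw = SITE_POWER_MW * 1000 / PUE  # power left for IT load

print(f"IT power available: {it_power_kw:.0f} kW")
print(f"Legacy racks supported: {it_power_kw / LEGACY_KW_PER_RACK:.0f}")
print(f"GPU racks supported:    {it_power_kw / GPU_KW_PER_RACK:.0f}")
# => roughly 640 legacy racks vs. ~70 GPU racks from the same 5 MW
```

On these assumptions, the same five-megawatt site holds nearly an order of magnitude fewer racks at AI densities, which is why a legacy six-kilowatt estate struggles to host even a basic GPU cluster.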
Guy Daniels, TelecomTV (15:39):
Great. Thanks very much. Great insights there, Andy, thank you for that. We're going to come to all our guests now, but Beth, we'll come to you first.
Beth Cohen, Verizon (15:46):
So I want to pick up on two points that Andy brings up. One is that the telcos' data centers are in use, and not all of them have excess capacity to give over to these GPUs. That's one question; some of them do, some of them don't. The other issue is that some of the PoPs the network uses to strengthen the fiber signals are typically in locations that literally don't have any spare power. This is particularly true in the United States, and I suspect it's true in other parts of the world: in the flyover states, along railroad rights of way, these PoPs, if you will, are pretty low-power, and they literally cannot bring in new power to add capacity to put in GPUs and other equipment with high power-density requirements. I have heard rumors that there's work on reducing the power requirements for AI workloads, so that may well be at least part of the solution to this significant problem: the power is not necessarily where you need it.
Guy Daniels, TelecomTV (17:31):
Absolutely right. We've heard that before, Beth, and as I say, at the AI-Native Telco Forum one of the arguments was that telcos need to get real about sourcing power if they're going to be playing in this game seriously, and follow the power, as you say. Kerem, let's come across to you; you've been waiting patiently. What are your views on the power situation?
Kerem Arsal, Omdia (17:52):
Thanks, Guy. So what I'm thinking is, especially in the case of telcos, anchor tenants are going to be critical. Just because you have the capacity, deploying a few racks of GPUs, or even five megawatts, which is, I don't know, maybe a few hundred servers, a few thousand GPUs... first of all, that is not cheap. And secondly, in the old, non-AI era version of edge computing, we saw that telecom operators found it extremely difficult to create any sort of demand, and they also didn't necessarily take enough risks, perhaps, in building out the infrastructure. And now that risk is much higher in the world of AI, because you're talking about, say, five megawatts as the Goldilocks unit. I don't know exactly how much that costs; probably we're looking at potentially nine digits of investment for a node if we were to do it from scratch.
(18:52):
So doing that and then trying to sell, let's say, GPU as a service by the telco to the end customers can be tricky. And in order to manage that risk, one thing that's very important for them is to find these anchor tenants. Those anchor tenants can be neoclouds like CoreWeave or Vultr or Lambda or Nscale, all of which are actually in search of capacity, or it could be a large model developer like OpenAI that's ready to come in and use the capacity that's already there. So that's just one word of caution I wanted to add: yes, if you build it, who's going to come? That is the question telcos need to be careful in answering. And right now there are candidates for those anchor tenants, so right now maybe there's an opportunity for that.
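Kerem's estimate can be sanity-checked with similarly rough arithmetic; the per-server power and cost figures below are illustrative assumptions, not Omdia data, but they land in the same ballpark:

```python
# Sanity check on "5 MW ~ a few hundred servers, a few thousand GPUs,
# nine digits of investment". All unit figures here are assumptions.

SITE_POWER_KW = 5000
KW_PER_GPU_SERVER = 10.0        # assumed 8-GPU server, ~10 kW loaded
GPUS_PER_SERVER = 8
COST_PER_SERVER_USD = 300_000   # assumed hardware-only ballpark

servers = SITE_POWER_KW / KW_PER_GPU_SERVER
gpus = servers * GPUS_PER_SERVER
hardware_cost = servers * COST_PER_SERVER_USD

print(f"Servers: ~{servers:.0f}, GPUs: ~{gpus:.0f}")
print(f"Hardware alone: ~${hardware_cost / 1e6:.0f}M")
# => ~500 servers, ~4,000 GPUs, ~$150M before land, power and cooling,
#    consistent with the "nine digits" figure mentioned above.
```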
Guy Daniels, TelecomTV (19:48):
Great. Well said, because quite right, there has to be someone who pays for this. This is some serious investment mounting up, additional investment on top of everything else that telcos are spending left, right and center. Let's move on to connectivity, shall we: fiber and the connectivity for this next-generation infrastructure. What do telcos need from vendors and their partners to make the transition from 400G to 800G, and who knows what beyond that in the future, in a predictable, affordable, strategic and brownfield-friendly way? Beth, have you got thoughts on this?
Beth Cohen, Verizon (20:30):
It's not like we haven't done major upgrades and overhauls over the years, so I think it's pretty much the same answer as in the past. When we rolled out 5G, we worked with the vendors and we made sure that there were upgrade paths, to make it as easy as possible and not need to pull the network down; you can't do that. So I think that's what needs to be done. We will need the capital, we'll need the vendors on board with us, and we'll need a clear migration path to upgrade all the equipment. Definitely the demand is out there; our customers are looking for those giant pipes, and absolutely the telcos are going to provide the giant pipes that are needed.
Guy Daniels, TelecomTV (21:32):
Okay, great. Thanks very much, Beth. Well, aside from selling raw bandwidth, are there services or commercial models that can let telcos capture more value from AI-optimized infrastructure? Andy, are you planning anything, or looking at possibilities for new types of services?
Andy Linham, Vodafone (21:57):
Yeah, I mean, if you look at the connectivity market as a whole, it is essentially flat. So if we as a telco want to grow our revenues, adjacencies and cross-selling services are a natural way to go. I think for us it's about simplifying the experience for customers, making it a better experience to take multiple services, and AI is a great way that we can do that. It can help us to optimize the customer experience we pass on to the end users, it can help us to optimize our services, and it can help us to optimize our capex and opex. So all these things we can use to help shape how we deliver services internally. From an external monetization perspective, that's still very much a work in progress. There's a whole raft of use cases, literally thousands that we've looked at, that have different aspects in terms of whether you have a right to compete or not as a telecommunications provider. A lot of it depends on the individual country, the individual market that you're operating in, because as I mentioned before, the perceptions of telcos in different geographies are different.
(22:55):
So if I were selling services in APAC versus selling them in Europe, I'd have a different set of messages I'd want to give my salespeople to then land with the customers. This is basically a very long-winded way of saying we don't really know yet. There's a lot of optimism. We believe there are definitely use cases, and we've got a lot of work going on to test them out in proofs of concept and minimum viable products. Right now I don't have a firm 'yes, it's going to be this one, this one and this one' that I can tell you just yet, though.
Guy Daniels, TelecomTV (23:25):
No, absolutely, and it is early days. I've referenced our AI-Native Telco Forum event a couple of times already in this show, but one of the interesting pickups from that was that day one was really about AI for the network, which was interesting and got a lot of interest, but day two was the network for AI, if you like, which got more buzz and excitement, although possibly a lot fewer firm answers. So there's certainly interest there, but what comes of it, we're going to have to wait and see. Beth, I'll come across to you next. What are your thoughts on services or models that telcos can adopt?
Beth Cohen, Verizon (24:02):
I want to build on what Andy was saying. There are really two threads here. There are the telcos that are using AI internally, and absolutely the telcos are using AI internally, and I think that will grow over time. Although one can argue that most of what we're using it for is machine learning rather than AI, but that's neither here nor there. The more important thing is: who's going to be adopting AI? Right now it's kind of the big tech firms, the obvious ones, the usual suspects. But I think we're going to see other industries coming in. Right now we're in the early-adopter stage, I'll say, and some of these early adopters have a lot of money to spend on this AI, but unless they start seeing significant benefits from it in the next year or two, I think they're potentially going to pull back.
(25:13):
But I suspect we're going to start seeing some of the, not the laggards, but the majority come in: sort of the bread-and-butter companies starting to use AI in clever ways. But I think they're still kind of feeling their way. I mean, the majority of companies that we talk to are sandboxing it at this point. Verizon has AI Connect, and we're talking to customers about it, and what I've been saying to them is, well, it's not necessarily AI workloads exclusively. There's a whole lot of workloads and use cases out there that require the movement of significant amounts of distributed data. And I think if you view it from that perspective, and sort of pull back from the 'well, AI' because that's the big buzzword, and really focus on the different types of workloads that require these larger pipes and more complex architectures, I think the telcos can really monetize not only AI but these other workloads that are emerging at the same time.
Guy Daniels, TelecomTV (26:33):
That is interesting, Beth; thanks very much for those insights. And Kerem, let's come across to you for your thoughts.
Kerem Arsal, Omdia (26:41):
I was actually going to say something very similar to what Beth said, and I also fully agree that it's not just about AI. Actually, AI has probably been a great reminder of some of the basics that needed to be done: thinking about the network in general and how to improve it in serving things like multi-cloud. I mean, who's your customer, right? Your customer is probably an organization that has applications running on very different cloud environments. Now, with AI, data sits in one place and then another place and then another. So it's all about these disparate places: distributed computing, multi-cloud environments. In a world like this, of course, providing connectivity services is not going to be as easy as simply selling raw bandwidth, and there the concept of network as a service and its potential promises come to the fore. So one needs to provide new consumption patterns, one needs to provide simple-to-use interfaces. Quoting cannot take months before getting back to clients; selling should be really easy. So if we are indeed talking about distributed multi-cloud environments, then there's a lot more to be done than just selling raw bandwidth.
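As a hedged illustration of the 'quoting cannot take months' point, here is a minimal sketch of what a programmatic network-as-a-service quoting interface could look like; the endpoint names, fields and rate card are hypothetical, not any operator's actual API:

```python
# Hypothetical sketch of a NaaS quoting call: the consumption pattern
# Kerem describes, where a quote for capacity between two cloud
# on-ramps comes back in seconds rather than months. All names,
# fields and pricing here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class QuoteRequest:
    a_end: str          # e.g. a colo or cloud on-ramp identifier
    z_end: str
    bandwidth_gbps: int
    term_months: int

def quote(req: QuoteRequest) -> dict:
    # A real implementation would query the operator's inventory and
    # pricing systems; a flat rate card stands in for both here.
    rate_per_gbps_month = 25.0  # assumed illustrative price
    return {
        "path": f"{req.a_end} <-> {req.z_end}",
        "monthly_usd": req.bandwidth_gbps * rate_per_gbps_month,
        "deliverable": req.bandwidth_gbps <= 800,  # 800G ceiling, per above
    }

print(quote(QuoteRequest("AWS-eu-west-1", "Equinix-LD5", 400, 12)))
```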
Guy Daniels, TelecomTV (28:04):
Thanks so much, Kerem. Great points there to end the program, because we must leave it there. I'm sure, though, we will continue this debate during our live Q&A show later. For now, thank you all for taking part in our discussion. If you are watching this on day one of our Next Gen Digital Infra Summit, then please send us your questions and we'll answer as many of them as we can in our live Q&A show, which starts at 4pm UK time. The full schedule of programs and speakers can be found on the TelecomTV website, which is where you'll also find the Q&A form and, of course, our poll question. For now, though, thank you for watching and goodbye.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Panel Discussion
GenAI is scaling faster than any previous workload. From 5 megawatt “GPU blocks” to giga-campuses, AI factories are popping up everywhere – but inference must also move closer to end users to beat latency budgets and comply with a patchwork of data-sovereignty laws. Telecom operators already own the world’s most ubiquitous edge real estate, long-haul fibre and resilient power footprints. This discussion explores how telcos, colocation players and equipment vendors can collaborate to create AI-optimised transport fabrics, edge nodes and cooling-constrained points of presence (POPs) that turn connectivity into a new service opportunity.
Featuring:
- Andy Linham, Principal Strategy Manager, Vodafone Group
- Beth Cohen, Product Strategy Consultant, Verizon
- Kerem Arsal, Senior Principal Analyst, Omdia
Recorded October 2025