Clarence Reynolds, TelecomTV (00:01):
I'm Clarence Reynolds at MWC26. AI is ramping up across the telecom industry, driving huge new training and inference workloads and putting increased pressure on networks. Steve Daigle, global head of Telco Systems Engineers at HPE, joins us to explore how telcos can capture the upside of this AI wave. Steve, thank you for being with us today.
Steve Daigle, HPE (00:22):
Thanks for having us.
Clarence Reynolds, TelecomTV (00:24):
So AI is exploding in the telecom industry, but is that a good thing? And how can we work on monetization?
Steve Daigle, HPE (00:32):
Oh yeah, two good questions. First of all, I think it's a good thing, because there are many ways the dynamics of the traffic growth are going to be captured by the service providers. One thing we're seeing already, and picking up from customers this week, is that we really feel the flows are going to become more symmetrical. In the past, flows were generally asymmetrical; if you think about digital video and downloads of that nature, we've been dealing with that for a while. But ever since work from home, traffic has been a little more symmetrical, with video from Teams calls, uploads, people streaming to TikTok. That's going to really take off with AI, as multimodal uploads for inferencing coming from homes and from enterprises drive a lot more symmetric traffic.
(01:31):
So customers are going to have to adapt their networks in that case. But in order to monetize all of that growth, we feel service providers are going to have to build intelligence and automation into their networks. Build the network to handle the increase in traffic, build it to handle the increase in asymmetry, and create automation and self-healing that provides a baseline for enterprise customers to take advantage of. Then there are service level agreements around things like latency. Latency is becoming much more important than it has been in the past for AI workloads. So traffic is not only becoming more symmetrical, but latency is becoming more and more important as well. And service providers are creating service level agreements around that latency, so they can go to an enterprise customer and say, "We will guarantee you bandwidth with this latency and it will be deterministic." And that is a way to monetize the AI boom.
(02:33):
This is a way to monetize the AI boom already. I feel like anytime you can put a service level agreement on parts of the network, you can monetize that.
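To make that deterministic-latency idea a little more concrete, here is a minimal sketch, in Python, of how an operator might express and check a per-customer latency SLA against measured samples. It is purely illustrative: the class, field, and customer names are invented for this example and do not describe any HPE or TelecomTV tooling.

from dataclasses import dataclass
from statistics import quantiles

@dataclass
class LatencySLA:
    """Hypothetical per-customer commitment: guaranteed bandwidth with a latency bound."""
    customer: str
    bandwidth_gbps: float
    p99_latency_ms: float  # 99th-percentile latency the operator commits to

def sla_met(sla: LatencySLA, latency_samples_ms: list[float]) -> bool:
    """Check measured latency samples against the committed p99 bound."""
    if len(latency_samples_ms) < 2:
        return False
    # quantiles(n=100) returns 99 cut points; the last one approximates the 99th percentile.
    p99 = quantiles(latency_samples_ms, n=100)[-1]
    return p99 <= sla.p99_latency_ms

# Example: an enterprise inferencing customer sold a 10 Gbps pipe with a 5 ms p99 bound.
sla = LatencySLA(customer="acme-inference", bandwidth_gbps=10.0, p99_latency_ms=5.0)
print(sla_met(sla, [2.1, 2.3, 2.0, 4.8, 2.2, 2.4]))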
Clarence Reynolds, TelecomTV (02:41):
So Steve, this feels like an inflection point with AI, but we've been here before. How do telcos avoid falling into that bit-pipe mindset?
Steve Daigle, HPE (02:52):
So that's a great point, because this has happened to service providers before, where they become disaggregated from the higher-level services and end up only moving the bits. The advantage we see this time is that service providers can come in and build those networks with intelligence and automation built into them. Now, what do I mean by that? What are the constraints around AI that service providers can address? First of all, it's mostly power and space. Power and space are foundational to where AI can happen, where AI training can happen, and where AI inferencing can happen. The next is memory and compute: where is that memory and compute allocated, and how can it be subdivided and divvied up in the most efficient manner? The next is service levels, strict service levels around latency.
(03:55):
And that's another constraint around the network. The ability to address those constraints is what service providers are going to have to tackle. And how do they do that? They've got space, they've got power, they've got fiber, and they've got reach into the networks. They've got the ability to provide services around those constraints of space, power, and fiber. Those are facilities and abilities that they have, plus reach further into the network, closer not only to the customer but to the enterprise, closer to where the traffic is actually generated. Those are assets that service providers have to address the constraints this AI boom is under.
Clarence Reynolds, TelecomTV (04:43):
Now, a lot of service providers are using AI for provisioning and fault management, and they're doing it in silos. Do you think that agentic AI can help break down those silos?
Steve Daigle, HPE (04:55):
Yes, I do. And here's the way I think it's going to transpire. There are silos today for agentic AI for a reason, because the technologies all grew up in silos. If you think about it, there's RAN and 3GPP around RAN, there's transport, there's data center, there's optical, there's IP. All of these evolved differently, which makes sense because there are different vendors and different standards bodies. And organizationally, service providers are organized around these silos. So the agents that address these problems are all going to come up from that perspective. I actually just finished talking to a customer, and they're absolutely focused right now on the IP and transport domain, using agentic AI to solve their problems in that space. And they said the same thing we're saying: at first, there's going to be a human that takes the results from that particular silo and translates them to the other domains.
(05:54):
And then eventually each of those other silos will have agentic AI solve those problems. So we'll have different domains, or different silos, where agentic AI gets implemented. Then, with a human in the loop at first, we'll have an orchestrator AI that works in an agent-to-agent fashion; that orchestration agent manages each agent in each domain. At first there'll be a human in the loop, and then ultimately it can be a self-driving network. Those are the steps we see it taking to go from silo-driven agentic AI to multi-domain agentic AI.
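The progression Steve describes, per-domain agents first and then an orchestrator coordinating them agent-to-agent with a human approval step, could be sketched roughly as below. This is an illustrative pattern only, with invented class and method names, not a description of HPE's or any customer's implementation.

from dataclasses import dataclass

@dataclass
class Finding:
    domain: str        # e.g. "ran", "ip", "transport", "optical"
    summary: str
    proposed_fix: str

class DomainAgent:
    """Hypothetical agent scoped to a single technology silo."""
    def __init__(self, domain: str):
        self.domain = domain

    def diagnose(self, alarm: str) -> Finding:
        # Placeholder for silo-specific reasoning (RAN, IP/transport, optical, ...).
        return Finding(self.domain, f"{self.domain} view of '{alarm}'",
                       f"remediation proposed by the {self.domain} agent")

class Orchestrator:
    """Coordinates domain agents agent-to-agent; a human approves fixes before they are applied."""
    def __init__(self, agents: list[DomainAgent], human_in_loop: bool = True):
        self.agents = agents
        self.human_in_loop = human_in_loop

    def handle(self, alarm: str) -> list[Finding]:
        findings = [agent.diagnose(alarm) for agent in self.agents]
        if self.human_in_loop:
            print("Awaiting operator approval for:", [f.proposed_fix for f in findings])
        else:
            print("Self-driving mode: applying remediations automatically")
        return findings

orchestrator = Orchestrator([DomainAgent("ran"), DomainAgent("ip"), DomainAgent("optical")])
orchestrator.handle("packet loss between cell site and core")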
Clarence Reynolds, TelecomTV (06:36):
Sovereign AI is gaining momentum. Why are telcos uniquely positioned to take advantage of that momentum?
Steve Daigle, HPE (06:44):
For the first point, they've been around for a very long time. They know how to work within the legal fabric of the countries in which they reside. They've built trust over, in some cases, a hundred years of being a critical services provider for the countries in which they operate. They understand the legal ways of operating within these countries. They've got space, power, and fiber within these countries. They have assets that they've built up over many years. And most importantly, as I mentioned, they have the trust of the countries in which they operate. That same trust uniquely positions them to hold the sovereign data and the sovereign AI technologies of the countries in which they operate. I think this is an advantage they've got over some of the hyperscalers, to be honest. These are the capabilities they can bring to bear to be the providers of sovereign AI.
Clarence Reynolds, TelecomTV (07:53):
As AI networking matures, where can service providers create value along that journey? And as for the last mile, how do they really take advantage of that for monetization?
Steve Daigle, HPE (08:09):
So I really like this question, because I'm really passionate about this subject, and we call it edge inferencing. I think this is the place where service providers have all the assets in place. But let me back up a little bit. What we see is large centralized data centers where a lot of the AI training is taking place. It's in the very largest data centers globally, the gigawatt factories, the AI factories that we're seeing in the world. That's where all the training of these models is taking place. That is going to continue, and there's data going between these large data centers and these AI factories, and providing the connectivity between them is one of the reasons HPE purchased Juniper. And we've got the best and the most dense platform in the business with the PTX 12K that we introduced this week.
(09:07):
It gives you connectivity with the lowest-latency deterministic transport between those centralized data centers, which no other vendor can provide. That's number one. Number two, we're also the on-ramp into those data centers with the MX. And we've got logical scale that no one can match, to provide all these virtual on-ramps for all these virtual customers into that data center. Now, once you've got that in place and the models are created by training, the next step is inferencing, and inferencing is going to move from centralized to distributed. This is where the service providers have the space, the power, and the fiber to allow the inferencing workloads to move out closer to the edge, closer to the enterprises, and closer to where the data is actually created. And then that data is transported back over the on-ramp of the MX into a data center.
(10:12):
And if that data center needs to retrain on that data, and maybe needs to go to another data center in order to do that, it rides over the PTX for the inter-data center, inter-AI factory transport. So there's the on-ramp, there's the inter-data center transport, and then there's the inferencing at the edge that I just mentioned. The assets that service providers have, space, power, fiber, and reach, will let them capitalize on this emerging inferencing better, I think, than any hyperscaler or any other provider. And the way they do it, we see in four steps. The first is the infrastructure layer. I mentioned that traffic's becoming more symmetrical: build the network to handle that symmetrical traffic, automate it, and make it self-healing, so you've got a foundational level there. The next one is to create those deterministic flows that I mentioned, SLAs around latency, to provide another layer of the network where customers can be guaranteed transport, because that's what's very important for enterprises: SLAs around latency and around deterministic transport.
(11:27):
Then service providers can begin to partner with GPU-as-a-service providers, because again, they've got the space, the power, and the fiber, and they can let the GPU-as-a-service providers come in and take advantage of that, because they've got the lowest latency. Finally, they can become an AI-as-a-service provider themselves and offer GPU as a service and AI as a service to enterprises directly. That's the way we see it: call it the layers of monetization, the layers of taking advantage of inferencing, which is growing at the edge.
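As a rough illustration of how "space, power, fiber and reach" translate into edge inferencing placement, the sketch below picks the lowest-latency edge site that still has GPU capacity for a workload. The site names, latencies, and GPU counts are invented for the example; a real operator's placement logic would consider far more than this.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EdgeSite:
    name: str
    rtt_ms: float      # round-trip latency from the enterprise to this site
    free_gpus: int

def place_inference(sites: list[EdgeSite], gpus_needed: int,
                    max_rtt_ms: float) -> Optional[EdgeSite]:
    """Pick the lowest-latency site that meets the GPU requirement and the latency budget."""
    candidates = [s for s in sites if s.free_gpus >= gpus_needed and s.rtt_ms <= max_rtt_ms]
    return min(candidates, key=lambda s: s.rtt_ms) if candidates else None

sites = [EdgeSite("metro-central-office", rtt_ms=3.0, free_gpus=2),
         EdgeSite("regional-data-center", rtt_ms=12.0, free_gpus=16),
         EdgeSite("central-ai-factory", rtt_ms=35.0, free_gpus=512)]
print(place_inference(sites, gpus_needed=4, max_rtt_ms=20.0))  # -> regional-data-center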
Clarence Reynolds, TelecomTV (12:06):
What a future we have ahead of us. Steve, thank you so much for your insights today.
Steve Daigle, HPE (12:10):
Thank you, Clarence.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Steve Daigle, Global Head of Telco Systems Engineers, HPE
AI is driving new training and inference workloads across telecom networks. Steve Daigle explains how operators can adapt to shifting traffic patterns, build intelligent and automated networks, and create new revenue opportunities through deterministic connectivity, edge inference, and sovereign AI capabilities.
Recorded March 2026