The AI-native telco: Capturing revenue opportunities in the AI value chain

Guy Daniels, TelecomTV (00:20):
Hello, you are watching TelecomTV and our panel discussion on the AI-native telco: capturing revenue opportunities in the AI value chain. I'm Guy Daniels, and today we're diving into one of the biggest transformations the telecom industry has ever faced: the rise of the AI-native telco. We'll explore how AI is reshaping networks, traffic patterns, investment decisions and business models, and we'll also look at what service providers can do now to capitalize on new opportunities across the AI value chain. In the lead-up to this live show, we have been asking for your questions on the AI-native telco, and we will address those during this show. You just about have time to get further questions to us; there's a submission form on the TelecomTV website, but don't leave it too late. Well, I'm delighted to say that joining me on our live program today are Adaora Okeleke, principal analyst with Analysys Mason; Francis Haysom, principal analyst at Appledore Research; and Steve Daigle, global head of telco systems engineering at HPE. Hello everyone, really good to see you all. Thanks so much for taking part in today's live program. Let me start by addressing the fact that AI adoption is accelerating across every enterprise vertical. It's moving well beyond simple automation and towards massive, data-intensive training and inference workloads. Steve, let me come to you first and ask you: is this ultimately good news for telcos?

Steve Daigle, HPE (02:14):
Yes Guy, I think so, and for a couple of reasons: the increasing demand for enhanced connectivity, which, if they execute well, they can monetize, and the opportunity to participate in delivering AI solutions for enterprises. And in fact, we're already seeing a few of these things happening in the market today. First of all, traffic is growing and shifting from asymmetrical to symmetrical. We saw this transition start during the pandemic with video conferencing and streaming, but AI is accelerating it, as training and inferencing workloads are causing a surge in that traffic. We're also seeing, and it's early, multimodal AI and upstream video for AI analytics and action. Latency expectations are tightening up: apps like real-time inferencing, robotics and automation can't tolerate delay, and again, early days, but we're seeing this emerge. AI-specific SLAs are also coming about. As we move from customer connectivity to workload connectivity, enterprises begin to look for guaranteed bandwidth on demand, deterministic performance, and tight assurances for both AI training and inferencing workloads. Additionally, AI training generates enormous east-west traffic inside, and even between, data centers, and that's where we're already seeing an increase in demand for high-capacity data center interconnect (DCI).

Guy Daniels, TelecomTV (03:44):
Steve, thanks very much for those introductory comments. That's great. And Francis, have you got a perspective on this? Is this ultimately good news for telcos?

Francis Haysom, Appledore Research (03:53):
Yes, I think it is very much good news. It fundamentally grows the need for connectivity, but I think Steve's comment is key. The opportunity we have initially is to build on products that telcos already deliver, whether that's data center interconnect or VPNs for enterprises to connect to the cloud. Those are great opportunities, but I think the other key thing is that telcos have an opportunity to do more, where they can tie the SLAs, the stickiness of what they're delivering in terms of connectivity, to the particular needs of AI workloads. The more they can do there, the more likely they are to add value. So yes, there's a default position of just growing what telcos do today, but they also have an opportunity to grow that stickiness, that relevance to AI, beyond just the connectivity.

Guy Daniels, TelecomTV (04:51):
Great, thanks very much Francis. Well, I've got a follow-up I wouldn't mind asking, and Steve, you alluded to this just a few minutes ago: when we look at different network traffic patterns, are we able to say what near-term impacts we're seeing from these changing traffic patterns and expectations as we move ahead?

Steve Daigle, HPE (05:15):
I mentioned traffic shifting from asymmetrical to symmetrical. That's a big part of where service providers will have to invest to adapt, because we've always built with oversubscription and we've always built around north-south traffic; much of this will move east-west, between AI data centers. An additional change is that these flows will have to become more deterministic, with requirements around low latency. And like Francis said, when you put SLAs around these flows and you guarantee the performance of the network, that provides an opportunity to garner revenue. An SLA means you can actually generate revenue from your network, but it requires some investment to shore it up in order to provide these AI-specific service levels.

Guy Daniels, TelecomTV (06:07):
Great, thanks very much for adding to that Steve. Well look, there are opportunities galore here, but while AI represents a major inflection point for the industry, telcos have been here before; we've had similar inflection points. How do the operators avoid falling back into what we might call a bit-pipe trap? Adaora, let's come across to you and get your thoughts on how they can navigate this.

Adaora Okeleke, Analysys Mason (06:35):
Sure, thanks so much Guy. I think we've already started addressing that question in some shape or form in the discussion, with feedback from Steve and Francis, but I guess the important point is for operators not to fall back to being considered bit-pipe providers. I think it's more about changing their view of the value that they could bring to the overall ecosystem: not just as connectivity providers, but also seeing themselves as part of the overall AI value chain, and not just being part of that value chain but also playing a critical role in assuring the quality of experience or services that specific players within the value chain would derive from delivering these AI services. So there are a number of opportunities that operators have. The first I would call out, as we've spoken about, is providing SLAs and tying SLAs to the specific requirements and specific outcomes that different value chain providers would demand.

(07:44):
So data center providers will have certain requirements; the model providers and the AI application providers will have certain requirements; and operators are in a position to guarantee the quality of service that these different players would experience in delivering their services. Now it's not just about guaranteeing those, but turning those guarantees into products that they can actually offer. This gives them the opportunity to go beyond just offering connectivity and falling into that bit-pipe trap we've been talking about, and instead to target other players within that value chain. I think a second opportunity is being part of that value chain by building AI platforms, or indeed other platforms, that enable delivery of AI services to enterprise customers or indeed end consumers. And we are already seeing some of that happening with operators across the globe: operators like Orange are offering AI platforms to enterprise customers, so they're able to host models on those platforms, but also host applications, either first-party applications that they have developed or third-party AI applications, on these platforms.

(09:13):
Now the benefit of having this platform is that you have a common, single asset that can then feed or address multiple use cases or multiple users. There are also benefits that operators have by being connectivity providers, and that's having deep insights and intelligence that could be a core product, a core value proposition, for different AI value chain players; they can take the position of being providers of real-time spatial or behavioral insights to those players. So telcos can really take advantage of this level of intelligence, these insights they have into the connectivity going through their networks, to bring added value to the different providers. And there are different ways of doing it: there's a lot of discussion right now about network APIs, and these insights can be fed to the different players using those APIs.

(10:26):
There are also opportunities to provide AI-based applications that offer some level of risk scoring to different organizations, for example fraud detection for different players, potentially application users or indeed the application providers. These are just a couple of examples that operators could potentially explore to avoid being considered just connectivity providers. I may add one more area which may be considered a boring topic, but it is vital when you're talking about AI and the entire business around AI, and that's the ability to provide data management services. We have operators today that are already offering enterprises those capabilities. They can become value providers by providing these data management services to enterprises, to enable them to prepare effectively to deliver, consume and maximize whatever investments they're making in delivering AI services. So these are some of the areas where I think operators have the opportunity to keep away from being considered bit-pipe providers.

Guy Daniels, TelecomTV (11:57):
Thanks very much Adaora. Great advice there, and a number of monetization opportunities covered, which is fantastic. And Francis, let's come across to you for your input as well.

Francis Haysom, Appledore Research (12:07):
Yes, I think it's another inflection point, and the CSPs really own the opportunity to make something big of it. I think there are two things we need to be aware of. Appledore recently did an analysis of the 20 top telcos worldwide over the last 10 years and looked at their investment: capex, opex, et cetera. If you look at it from a telco point of view, it's very, very flat; our joke is that if this was a hospital drama, the patient's dead. But in comparison to hyperscalers, or even other utility providers, that level of investment is not one that is going to really grab this and run with it, as it were. The other thing I think is very important is to look at the other inflection points we've been through before: the data-center-plus-telco movement in the early 2000s and, to a greater extent, what happened with MEC and the far edge in the last decade. Again, it was very, very easy for the telco to get disaggregated from the value there: data centers plus telco was very easily unbundled into hyperscalers plus telco delivering connectivity.

(13:36):
And similarly with MEC, to some extent we were running so far ahead of the use cases that the investment cycle never quite caught up with the actual use cases. So when we're looking at AI, the really important thing in grabbing the opportunity is going to be investment, and we are seeing positive signs from CSPs looking at quite substantial investments in AI, which is all very good. But I also come back to that word stickiness. How do you create stickiness? How do you make yourself unbundleable from the problem that the enterprise, or the consumer, is trying to solve with AI? If that stickiness is there, then there's the opportunity for value.

Guy Daniels, TelecomTV (14:28):
Oh yes, great insights. Thanks very much Francis. Steve, would you like to come in with comments on this question?

Steve Daigle, HPE (14:35):
Yeah, I really like the way that Adaora and Francis teed that up. The way that I've been looking at this, from the standpoint of how to learn from MEC for instance, is: what are the constraints on an AI application deployment? First of all, it's power, power and space, so data center operators are distributing their facilities to overcome this. There are compute and memory resource limitations, and the practical sharing of those resources. There are service requirements like latency and reliability, and other requirements like protection of critical data, data sovereignty, compliance, and trade and government tensions. So with a market under these kinds of constraints, I think the service providers, based on the assets they have, can leverage their space, their power, their reach, and especially their trusted role as a national digital infrastructure provider. They can build a business case around these assets to grow up the stack from pure connectivity to even GPU-as-a-service.

Guy Daniels, TelecomTV (15:42):
Absolutely. Thanks for those inputs there Steve, that's great, and thanks everyone for contributing to that question. We'll move on to a different question now; this is another one we received a little earlier. Many operators are already using AI for tasks like provisioning, compliance checks and fault management, but these tools often operate in silos. Can an agentic AI approach help break down domain silos and enable true cross-domain intelligence? Steve, if you don't mind, I'm going to come straight back to you for this one, because agentic AI is certainly flavor of the moment. What are your thoughts? How can you answer this question for us?

Steve Daigle, HPE (16:27):
I think absolutely, but it will take some time. I mean, we've made some great progress in integrating AI and ML for predictive analytics, fault identification and remediation. We've also integrated LLMs with a virtual network assistant to simplify and automate routine tasks and provide those insights to network analytics. We have digital twins that now assess and validate network performance and closed-loop functionality, providing operators with shorter MTTR when issues do arise, and we're seeing impressive single-domain wins. Verizon has announced a RAN controller that optimizes energy; AT&T has a network digital twin that identifies issues proactively; our own Marvis does ML-based anomaly detection for campus and branch. These are all working well within their domains because the data and the workflows are consistent. But currently the reality is that all these AI tools are operating in silos, and that's for a good reason: each domain evolved independently, with different protocols, different vendors and different data models. So I think agentic AI with a platform like MCP, the Model Context Protocol from Anthropic, as one example, is very promising. Actually, we've provided some demos and we're working through multiple proofs of concept around this.

(18:01):
We showed these at our tech jam in Vienna. We can provide true end-to-end visibility, top to bottom, up and down the stack, and deliver an orchestrator agent that can coordinate specialized agents for each domain: the agentic orchestrator handles the diverse domain-specific agents, collaborates between them, and provides proactive remediation across the entire network. I think that's the way it's going to happen. It's a layered approach: A2A, agent-to-agent, for the agent-level routing, and then MCP for tool-level execution.
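[Editor's note] The layered architecture Steve describes can be sketched in a few lines of Python. This is a hypothetical illustration only: the domain names, tools and metrics are invented, and the real A2A routing and MCP tool execution are mocked as plain function calls to show the shape of the design, an orchestrator routing an issue to domain-specific agents, each of which executes its own tools.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Finding:
    domain: str
    detail: str

class DomainAgent:
    """One agent per network domain; its tools stand in for MCP-exposed tools."""
    def __init__(self, domain: str, tools: Dict[str, Callable[[], str]]):
        self.domain = domain
        self.tools = tools

    def investigate(self, issue: str) -> Finding:
        # Tool-level execution: run each of this domain's tools and summarize.
        readings = ", ".join(f"{name}={tool()}" for name, tool in self.tools.items())
        return Finding(self.domain, f"{issue}: {readings}")

class OrchestratorAgent:
    """Agent-level routing: dispatch an issue only to the domains it touches."""
    def __init__(self, agents: List[DomainAgent]):
        self.agents = {a.domain: a for a in agents}

    def handle(self, issue: str, domains: List[str]) -> List[Finding]:
        return [self.agents[d].investigate(issue) for d in domains if d in self.agents]

# Invented example domains and readings, purely for illustration.
ran = DomainAgent("ran", {"cell_load": lambda: "87%"})
core = DomainAgent("core", {"session_errors": lambda: "12/min"})
orchestrator = OrchestratorAgent([ran, core])
findings = orchestrator.handle("degraded throughput", ["ran", "core"])
```

The point of the layering is that the orchestrator never needs to know how each domain's tools work, only which domains an issue touches; the domain agents keep their own protocols and data models, which is how the siloed-by-design reality Steve describes can be bridged without rebuilding every domain.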

Guy Daniels, TelecomTV (18:38):
Great, thanks very much Steve. And you say it's going to take time, and moving from agents to agentic AI is another leap here, but what we've done already and what we're seeing already is absolutely incredible. I'm going to come to Adaora and Francis, but Francis, let's go to you first and then I'll come to Adaora.

Francis Haysom, Appledore Research (18:54):
I think this problem with silos is fundamentally not really about technology; it's about organization within telcos. A lot of this is to do with the fact that it's not only that everybody's cooperating within the telco: quite often domains conflict. Assurance, planning and fulfillment, for example, conflict in terms of what they're doing, and quite a lot of solving this with AI is as much about solving what is the human intelligence we want the organization to work with. So I think an important part of that is allowing different parts of the organization to see what the others are seeing. And Steve mentioned a very key thing in our opinion, which is the digital twin: the digital twin going beyond just the view of what is happening within a particular part of the organization, and growing the understanding across the organization such that data is shared about what has happened in the network and what we predict will happen in the network, is one way in which we can understand both the cooperation that is possible and the conflict that is there. So I'd really emphasize that we need to look at what the organization sees, see that end-to-end view, and understand the conflicts and the prioritization of the decisions we need to make. AI can then help you in doing that, and AI can also help you in resolving and understanding, from a business perspective, what needs to happen within the network.

Guy Daniels, TelecomTV (20:39):
Great, thank you very much Francis. And Adaora, let's come across to you on this question about breaking down the silos with agentic AI.

Adaora Okeleke, Analysys Mason (20:47):
Yes, I think Francis and Steve have started this off well. If I may add, I think it's also about thinking from the data perspective, because you've got different organizations; it's been the culture to run different teams in silos, and that's created a scenario where operations have occurred in that manner, and there are other reasons for that. But it's now about thinking how you bring all of those together, really about unifying the insights and the information from these different organizations. That's one thing. The second thing to consider is how you then correlate, compare or relate the information from one silo with another, and that's where we begin to think about creating a common or normalized data model that cuts across the different silos. Because if we're thinking of agentic AI, one thing is certain and important, and that's the point of context.

(22:01):
If we're going to take the route of agentic AI workflows to address cross-domain closed-loop automation within the network, then it's about making sure that context is in place, and context comes down to the dependencies that typically exist between domains. So having the right data model, and then having the right context, potentially provided by an ontology layer, would to a very large extent enable the end-to-end visibility and automation that we're looking at. One other important point I wanted to raise here: it's easy to think about the silos occurring across domains, but there are also silos occurring within domains. What I mean is, when you're looking at specific domains, you could look at the RAN domain, the transport domain or the core domain, where there's been a lot of transformation happening; in the core, for example, you're beginning to see cloudification of these environments, gradually moving to the radio access network as well. The management of the different layers within these network domains seems to be siloed, and that creates the need for observability, because if you're thinking of end-to-end visibility and operation of the network using agentic AI, it's also about how you can leverage insights across the different layers within these domains to inform the right actions, or to understand the best actions or decisions to take within the network.

(23:45):
And doing that proactively. I think this is where AI plays a very important role, because it has the ability to take insights from the different domains, analyze them, and then come up with the right insights to drive the next best actions that need to occur within the network.
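[Editor's note] The normalized, cross-domain data model Adaora describes can be illustrated with a minimal sketch. All field and record names here are invented for illustration: the idea is simply that per-domain adapters map each silo's native record shape onto one shared schema, after which cross-silo correlation becomes a trivial query.

```python
from dataclasses import dataclass

@dataclass
class NetworkEvent:
    """Common cross-domain schema: every silo's records normalize into this."""
    domain: str
    element: str
    metric: str
    value: float

def from_ran(record: dict) -> NetworkEvent:
    # A RAN tool might report per-cell radio-resource utilization under its own keys.
    return NetworkEvent("ran", record["cell_id"], "prb_util", record["prb"])

def from_core(record: dict) -> NetworkEvent:
    # A core tool might report per-function session failure rates in a different shape.
    return NetworkEvent("core", record["nf"], "session_fail_rate", record["fail_rate"])

# Native records from two silos, normalized into one list.
events = [
    from_ran({"cell_id": "cell-17", "prb": 0.91}),
    from_core({"nf": "smf-2", "fail_rate": 0.04}),
]

# Once normalized, cross-silo correlation is a simple filter over one collection.
overloaded_cells = [e.element for e in events if e.metric == "prb_util" and e.value > 0.9]
```

An ontology layer, as Adaora suggests, would sit on top of a schema like this, encoding the dependencies between domains (which cells a core function serves, for instance) so an agent has the context to relate one silo's events to another's.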

Guy Daniels, TelecomTV (24:04):
Thanks very much Adaora. Well, let's move on to another audience question then, and this one's about sovereign AI. Regulatory pressures and data sovereignty requirements are driving strong momentum behind sovereign AI: keeping data, compute and control within national borders. So why are telcos uniquely positioned to lead here, especially when compared to global cloud providers? Francis, can we come across to you for your thoughts first?

Francis Haysom, Appledore Research (24:37):
So I think sovereign AI, sovereign everything, is actually a great opportunity for CSPs. They are intrinsically tied to national boundaries, they are regulated within national boundaries, and they sit within a very strong framework to be able to deliver sovereignty in all aspects, including AI. So there's a great opportunity there. I would just qualify that one of the challenges of sovereignty is that it's not really just one thing. It's very easy to see it as a wall and moat, or a border force: your data centers are in country, your data center interconnects and connectivity are within the country, and your data is kept within that. But that's quite a limited view. It can be a lot of other things; it can be sovereignty at much higher levels of the network.

(25:34):
It could even be, and a lot of the hyperscale cloud providers are working very heavily in this area, the ability to deliver sovereign AI on the hyperscale cloud. So for the CSPs, rather than just relying on the fact that they are in country, that they have data centers in country and connectivity in country, it's about building on that to give much, much more in that area. The other thing that I think is important is to learn from existing sovereign initiatives, particularly within Europe. We were at the TelecomTV event in D, and one of the key points made there was that people are already building European sovereign clouds, and unfortunately they sit there, not quite as big as the hyperscalers, and not being used enough. So a lot of the focus needs to be on the business case: not just moving workloads that have to be on a sovereign cloud environment, but the CSPs making it as easy as possible for people that may not have a compulsory requirement, but have plenty of business requirements that would make a sovereign cloud preferential, to move to one; make it easy to deliver things on a sovereign cloud even if that sits on top of hyperscale resource.

(27:00):
Things like secure containers: we were recently talking to a company called Contain, a very small German company looking at a very strong problem, which is health records within Germany, and their ability to move those onto hyperscale cloud internationally but secured in a virtual sovereign domain. So just to conclude, CSPs have a great opportunity, but think beyond just the compulsory things that need to go into the cloud; think about making it as easy as possible to adopt sovereignty rather than it being a mandatory thing.

Guy Daniels, TelecomTV (27:35):
Yes, it's an interesting area Francis, thank you very much for that. Steve, we'll come to you next. What are your thoughts on sovereign AI and the role that telcos can, and perhaps should, play?

Steve Daigle, HPE (27:48):
Yeah, thanks Guy. I think that our CSPs bring capabilities, and I'm seeing this predominantly in Europe right now, that are hard to replicate. They're already compliant with the laws and regulations of the local jurisdictions. They possess long-term government relationships as a trusted partner, which is very important when developing highly sensitive sovereign AI infrastructure. Their expertise in architecting, designing, constructing and maintaining large-scale, resilient communication networks can be reused. I believe all of these make them trusted partners for projects involving this sensitive national infrastructure with built-in resiliency. But Francis made a great point: it needs to be easy, and it needs to be attractive beyond just the compulsory workloads that need to be there. That's a very important point, that all of these advantages need to be put together in a package the customer can consume in an easy fashion.

Guy Daniels, TelecomTV (28:59):
Yeah, absolutely. Thank you very much Steve. And Adaora, we'll come across to you for your thoughts on sovereign AI and the telco role.

Adaora Okeleke, Analysys Mason (29:06):
Yeah, so it's obviously an opportunity for operators to explore, but I just wanted to add to the comment around making it easy for the enterprises. The point I'd make there is understanding that, much as sovereign AI is an opportunity that operators can potentially deliver and fulfill for enterprises, it's a scenario where they should understand they will need to fit themselves into the much broader ecosystem. Because when we speak about public cloud providers or the hyperscalers, yes, they have a global remit, but they require the local presence, which the operators do have. So making sure that they're embedded within that much broader ecosystem ensures that the investments they're currently making are exploited as much as possible, particularly in terms of building relationships with public cloud providers, which we're already seeing happening with some operators.

(30:14):
I think the second point is understanding that the meaning of sovereign AI does vary depending on the region you are considering: what sovereign AI means for North America, or the US to some extent, does differ from how sovereign AI is viewed in other regions, such as Europe or Asia-Pacific. And then there's understanding the requirements of the enterprise, and the fact that there are different levels of requirements for sovereign AI; we shouldn't forget that point. So in order to make things easy, our perspective is to make sure that you come to this market with a more modular approach that makes it easier for enterprises, at whatever scale, to pick and choose what is possible. If they have certain requirements for sovereign AI because of the level of sensitivity associated with particular capabilities or functions, then make those available, and then make it possible, through the partnerships they will have with public cloud providers, for these enterprise users to get access to capabilities that could exist on the public cloud but don't have that level of sensitivity compared to what needs to run on a sovereign AI cloud.

(31:45):
So being able to provide that flexibility for enterprise customers to pick and choose what they need will be important. And that drives the need for operators to make sure that, as they pursue sovereign AI opportunities, they do not position themselves in siloed markets but as part of the overall AI market, because it's really going to be more or less a cooperative situation for them to succeed in sovereign AI.

Guy Daniels, TelecomTV (32:14):
Thanks very much Adaora. And yeah, you're absolutely right, sovereign AI means different things in different territories, and make it easy is the common message, the takeaway from that one I think. Well, let's move on to another question area: we've got one on the edge, and the question is, telcos have deep last-mile reach and a growing edge footprint, so how can operators leverage this position to win edge inference services? Steve, can we come across to you? This whole issue of inference, and the role telcos can play, is very pertinent at the moment.

Steve Daigle, HPE (32:52):
Absolutely, and this is one of my favorite topics. A good place for them to start is with edge inferencing services. They're uniquely positioned, and I mentioned this a bit earlier: the telcos have the local last mile, they have a metro footprint, and they can deliver lower-latency services than a centralized cloud, and offer them cost-effectively, with inferencing closer to the user, closer to the source of the data, closer to where the workload is. They've got the local loop and the real estate, the facilities as I like to call them. They've got space, power and fiber, which are three key assets, plus location, where they can be closer to the edge, closer to where the data is actually generated and consumed. That becomes really important for real-time applications like computer vision, automation and industrial IoT. And I know, again, these are emerging, but for example, we have an SP customer who has a video rendering use case for their customer: it's for studio movie production, and the use case includes things like artist feedback loops, real-time pre-visualization, and AI in the render loop. They've been using centralized clouds, but they're moving closer to the data, as low latency is more critical for their viewport rendering, their interactive lighting and camera adjustments, and even just having to make fewer revisions later. The reality is that edge inferencing, with nodes everywhere, is a solid low-latency metro access play for our telcos.

Guy Daniels, TelecomTV (34:36):
Great, thanks for that example, Steve. Francis, have you got any examples or thoughts on this?

Francis Haysom, Appledore Research (34:42):
Yeah, very much. I think telcos have a great opportunity here. It's the opportunity to ride what I would term the data gravity wave: the massive data that sits at the edge is a great opportunity for them. But I would slightly caution on that one: it's very easy to see this as a choice between the telco's edge and the hyperscale or centralized cloud. There is another alternative. There's always the opportunity to put something on an end consumer device, and there's always the opportunity for the enterprise to build its own compute facilities, GPU facilities, whatever else. In that situation, the telco gets reduced to connectivity between the enterprise's own compute. Approximately six years ago we did an analysis of the edge use case, as it was then, in telco, and one of the things we think is really important is that low latency matters because it means you are close to the edge, but actually most use cases for enterprises or for consumers are not sat there saying, I need latency.

(36:01):
What they're saying is, I need it to do something in a certain time. And for telcos the point to realize is that low latency can be delivered by something on the device, or by something in the enterprise. It's your ability to take away the other problems of doing that. An enterprise has to run compute at its edge; a supermarket has to put compute into its stores. That has a cost, it has support costs, it has a whole load of other issues. The telco has an opportunity to take that away, using low latency to make inference within their business, inference at the edge, easier to consume: make it easier, make it cheaper, not just in terms of consumption models but in terms of support models, et cetera, et cetera. That's the opportunity for telcos here.
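
Francis's point, that enterprises ask for "do something in a certain time" rather than for raw latency, can be sketched as a simple placement check against an end-to-end response-time budget. All the figures and placement names below are illustrative assumptions, not measured values:

```python
# Illustrative sketch: pick inference placements that meet an application's
# end-to-end response-time budget. Per Francis's point, what matters is
# network round trip PLUS compute time, not network latency alone.
# All numbers are assumed for illustration, not measurements.

PLACEMENTS = {
    # name: (network round-trip ms, inference compute ms)
    "on_device":     (0,  80),   # no network hop, but a slower local model
    "telco_edge":    (10, 30),   # metro edge node close to the data
    "central_cloud": (60, 25),   # big GPUs, but a long haul each way
}

def placements_meeting_budget(budget_ms: float) -> list[str]:
    """Return placements whose total response time fits the budget."""
    return [
        name for name, (rtt, compute) in PLACEMENTS.items()
        if rtt + compute <= budget_ms
    ]

# A tight 50 ms interactive loop rules out both the slower on-device model
# and the distant centralized cloud; a relaxed 120 ms budget admits all three.
print(placements_meeting_budget(50))   # only the telco edge qualifies here
print(placements_meeting_budget(120))
```

The design point matches the discussion: under these assumed numbers the telco edge wins only when the budget is tight, which is why the panelists stress ease of use as the differentiator for everything else.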

Guy Daniels, TelecomTV (37:01):
Great, thanks very much Francis. Yeah, big opportunity as well. Now we still have more audience questions. We've actually received quite a lot of questions whilst we've been talking on this live program. So let's take a look at the next one we've got, let's try and get through as many of these as we possibly can. And so I'm going to read out the next one to you. If we look at the longer term, how do we think that AI is fundamentally going to change the role and value of telco networks in the digital economy? So a longer term question here, anybody want to take this one? I mean Steve, can I come down to you for any quick thoughts on will AI fundamentally change the role of networks?

Steve Daigle, HPE (37:47):
Sure, sure Guy. And at the risk of being a little repetitive about my favorite subject, I think it's inferencing at the edge, although Francis has made some fantastic points about how you can reduce that latency with the CPE, the end device. But the telcos have that distributed space, power and fiber, which allows them to capitalize on those assets and deliver secure, low-latency services to where the data resides. He had an excellent point, though: it has to be easy, because it'll take the cost off of the enterprise or off of the end user, and that's, I think, their value prop. That, plus their footprint, the tangible assets, and they're not easily replicated. So the key is to baseline that connectivity but work up the stack, up that value chain, and then ultimately, with an edge platform, you can partner with hyperscalers or the neoclouds, because you've got the space, the power and the fiber, or even offer GPU-as-a-service yourselves. But I think, and again, not to put too fine a point on it, the key is to make it easy and simple for customers to consume, because not only is the telco competing with the hyperscalers, they're competing with a smart edge device, or competing with the enterprise themselves.

Guy Daniels, TelecomTV (39:11):
Oh yes, well put. Thank you very much, Steve. Francis, have you got thoughts on this one?

Francis Haysom, Appledore Research (39:15):
Yes, Guy, like all good questions, the answer is it depends, but I will put an extra caveat on that one. It's about trying to get stickiness beyond just the connectivity, whether that's stickiness in terms of compute, GPUs, service contracts, ease of use, scaling. All those sorts of things are means by which you can go beyond the dumb-pipe connectivity. But the 'depends' bit is that those require investment: in operational processes, in operational autonomy, a whole load of things, maybe even investment in compute at the site, maybe directly and maybe with partners. So the 'it depends' comes down to this: telcos have a great opportunity, but they need to build in that stickiness, they need to build in that ease of use, and they need to make the investment to make that happen.

Guy Daniels, TelecomTV (40:14):
Thanks very much Francis and Adaora, we'll come to you as well.

Adaora Okeleke, Analysys Mason (40:17):
Yeah, just a point. Yes, AI will fundamentally change the role of operators, but I think it will also emphasize their role, because as much as we've been talking about other opportunities that operators could build on with the AI boom, connectivity still remains critical. That's why we are seeing such a high level of demand from data center providers and public cloud providers for the data center interconnect services that operators provide. So being able to guarantee connectivity services to a very high level, to fulfill the low-latency requirements and all the other performance requirements, will become important. And I think that's where the investment in autonomy, in building high levels of autonomy into the network, becomes valuable and really, really important to meet the high demands, because the use cases for the network continue to grow, AI continues to extend them, and with that grows the need for high-quality service levels delivered to customers of all types.

Guy Daniels, TelecomTV (41:41):
Thanks very much Adaora. That's great. Thanks everyone for commenting on that. There's a question here about more inferencing services. Now, we've covered inferencing already, certainly with the video production, film industry case study, but there's a question here asking: what AI-based services are operators seeing that actually demand inferencing at the edge or at the far edge? What clear examples exist? Has anybody got more examples of what these potential services and use cases might be? We've had the film production one, but are we seeing any others at this moment in time? Real examples. Francis, are you seeing or hearing anything where telcos are absolutely, fundamentally important yet?

Francis Haysom, Appledore Research (42:32):
I think the honest answer is there are a lot of opportunities to drive things based on huge data volumes. Video is the obvious case, but the caveat is that if you look at most of those, they are already being done. If I need to do video analysis in a factory, I'm putting compute into that factory to deliver it, possibly on a private network. If I'm in a store and I need to do massive data analysis on my stock, I'm putting compute into the store. So those sorts of large data problems are really very, very strong examples of where inference is needed. But it comes back to this challenge: where is the best place to put that inference? Is it at the far edge? Is it in the enterprise, as it were? Or is it in the hyperscaler? And the opportunity for telcos, again, comes back to ease of use. How easy can I make it to put workloads at the far edge? If you can make it as easy as possible, and not just capture the workloads that can't be done in the hyperscaler's centralized data center or can't be done by the enterprise, the more you can maximize that, that's the opportunity here.

Guy Daniels, TelecomTV (44:02):
Fantastic. Thanks so much for coming in on this one, Francis, I do appreciate it. Another new question we've had in the past five or 10 minutes, and Steve, I think this could be an ideal one for you. Let me just read it out. As telco service providers move towards becoming AI native, how will they securely authenticate and govern all the individual AI agents that are required, ensuring clear visibility into what each agent is doing, where it operates and why? Is this something you can shed some insight on for us, Steve?

Steve Daigle, HPE (44:35):
Yeah, I think that's a complicated one that I've actually seen before. The idea is you have to apply segmentation to keep the agents from talking with each other or sharing information that shouldn't be shared. My thought is it's very similar to cloud native in general, where you have to avoid noisy neighbors, you have to keep your workloads segmented, and you have to make sure you have a security appliance that continues to monitor at the lowest levels. We've got different solutions that we provide across our telco cloud capabilities, but it's a question that continues to arise, because security, securing the agents, where and how they can operate, and how they communicate with each other, is going to be one that we continue to chase, I believe, as we continue to evolve.
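
Steve's segmentation idea, keeping agents from talking to each other unless explicitly allowed, can be sketched as a deny-by-default policy gate with an audit trail. The agent names and the policy shape here are hypothetical illustrations, not any vendor's actual API:

```python
# Illustrative sketch of deny-by-default segmentation between AI agents:
# a message between two agents is only delivered if the (sender, receiver)
# pair appears on an explicit allowlist, and every attempt is recorded,
# giving operators the monitoring visibility Steve describes.
# Agent names and the policy shape are assumptions for illustration.

from dataclasses import dataclass, field

def deliver(receiver: str, message: str) -> None:
    print(f"-> {receiver}: {message}")

@dataclass
class AgentSegmentationGate:
    # Allowed (sender, receiver) pairs; everything else is blocked.
    allowlist: set
    audit_log: list = field(default_factory=list)

    def send(self, sender: str, receiver: str, message: str) -> bool:
        allowed = (sender, receiver) in self.allowlist
        # Record who tried to talk to whom and whether it was permitted.
        self.audit_log.append((sender, receiver, allowed))
        if allowed:
            deliver(receiver, message)
        return allowed

gate = AgentSegmentationGate(allowlist={("assurance_agent", "capacity_agent")})
gate.send("assurance_agent", "capacity_agent", "cell 12 congested")  # delivered
gate.send("billing_agent", "capacity_agent", "raise quota")          # blocked
```

In a real deployment this gate would sit in the messaging fabric or service mesh rather than in application code, but the principle is the same: segmentation is the default and every cross-agent interaction leaves an auditable record.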

Guy Daniels, TelecomTV (45:38):
Yeah, absolutely right Steve, thanks for those comments, I appreciate those. And we'll get comments from Adaora and Francis. Adaora, have you got thoughts on how telcos can best authenticate and monitor these agents?

Adaora Okeleke, Analysys Mason (45:53):
It's an interesting question, but when you look into the agentic AI platform market, we begin to see some of these solutions coming from public cloud providers, and indeed other players within this market. I think it's all about making sure that the operators have a common platform that enables them to manage the lifecycle of agents regardless of where an agent was developed. Once an agent comes into their ecosystem, they've got the tools and the capabilities, offered through a common platform with a common set of features, to secure, govern and manage the activities of that agent. Obviously this platform should also be able to guarantee the behavior of these agents, to the point that there is good visibility into whatever those agents are doing as well.

(47:06):
So there are tool sets, there are platforms out there in the market, that have been offered to different operators to take on, to use, to support how they lifecycle-manage their agents. But I think it's also about operators themselves thinking through the functions that agents will perform within the network, then defining the procedures by which they would govern and manage those agents, and then going out and picking up the tools or capabilities that would enable them to execute on those policies or procedures. So it's going to be a bit of a difficult and challenging situation, but the capabilities are there; there are also responsibilities on operators to make sure they have a good understanding and a good grip on how to manage the security and governance of agents.
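
Adaora's "common platform" idea, one place to register, govern and observe agents regardless of where they were developed, could be sketched as a minimal registry. The field names, action names and governance levers below are assumptions for illustration, not a description of any real product:

```python
# Illustrative sketch of a common agent-governance registry: every agent,
# wherever it was developed, is registered with an accountable owner, a
# scope of permitted actions, and a status, so operators have one place
# to see and control what runs in their network. Field names are assumed.

from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    owner: str                 # team accountable for the agent
    permitted_actions: set
    status: str = "active"     # "active" or "suspended"

class AgentRegistry:
    def __init__(self) -> None:
        self._agents = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def is_permitted(self, name: str, action: str) -> bool:
        """Deny anything from unregistered, suspended, or out-of-scope agents."""
        rec = self._agents.get(name)
        return (rec is not None
                and rec.status == "active"
                and action in rec.permitted_actions)

    def suspend(self, name: str) -> None:
        """Governance lever: take a misbehaving agent out of service."""
        if name in self._agents:
            self._agents[name].status = "suspended"

registry = AgentRegistry()
registry.register(AgentRecord("fault_triage", "ops", {"read_alarms", "open_ticket"}))
print(registry.is_permitted("fault_triage", "open_ticket"))    # True
print(registry.is_permitted("fault_triage", "change_routing")) # False: out of scope
registry.suspend("fault_triage")
print(registry.is_permitted("fault_triage", "open_ticket"))    # False: suspended
```

The value of the common platform in this sketch is exactly what Adaora describes: the same registration, scoping and suspension controls apply to every agent, whoever built it.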

Guy Daniels, TelecomTV (48:03):
Yeah, absolutely. Thanks so much Adaora for that. Francis, I'll come to you for opinions on how to do this.

Francis Haysom, Appledore Research (48:10):
I think one of the interesting things about agents is that, to a degree, we don't really look at what we already do. There are lots of human beings in telcos making decisions all the time, and quite often we're not really conscious of the decisions they're making. We're really asking agents to evaluate things and make decisions in the network. I think a lot of this comes down to realizing that no decision making is perfect, and the data on which decisions are based is never perfect, whether it's a human being or an AI agent. A lot more care needs to be taken here, and at Appledore we're looking at this in terms of the management of data associated with decision making, which is a much stronger way of looking at it: what kinds of decisions are we asking these agents to make, how often do they get it right, how often do they get it wrong?

(49:12):
What are the consequences of them getting it wrong? Sometimes it doesn't matter, but sometimes it really does. If you're going to change the BGP routing tables in a core network, getting it wrong matters, because it takes the network down. If I make a decision to provision a new SIM card, it doesn't matter quite as much. So a lot of this, with agents, is actually being very, very clear about what decisions they're making, tracking whether they're making the right decisions, and responding when they're making too many bad decisions versus the alternative: then you need to make a change. And again, agents, just like human beings, do not exist in a vacuum. What are those interplays? When do I make a trade-off? I can solve most of the problems in assurance by putting overcapacity into the network, but that doesn't play to a telco's capex model. So how do I find those balance points? And Steve made a good point: you need that communication between the agents to say, this is what my model is saying, this is what your model is saying, and we need to make a mutual decision. Where is the optimum point between our conflicting needs? So, final point: decisions are important, and tracking decisions, and what the results of those decisions are, is important to making this work.
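
Francis's closing point, track what decisions agents make, how often they get them wrong, and escalate when the error rate is too high for the blast radius, can be sketched like this. The decision types echo his BGP-versus-SIM contrast; the thresholds are illustrative assumptions:

```python
# Illustrative sketch of decision tracking for AI agents: record each
# decision and its outcome, then flag a decision type for human review
# when its observed error rate exceeds a budget scaled to the consequence
# of getting it wrong (Francis's BGP-change vs SIM-provisioning contrast).
# The error budgets below are assumed values for illustration.

from collections import defaultdict

# Tolerable error rate per decision type: high-impact actions get far
# less slack than routine, cheap-to-redo ones.
ERROR_BUDGET = {
    "bgp_route_change": 0.01,   # taking the core network down is costly
    "sim_provisioning": 0.10,   # a bad SIM order is cheap to redo
}

class DecisionTracker:
    def __init__(self) -> None:
        self._outcomes = defaultdict(list)  # decision type -> [bool, ...]

    def record(self, decision_type: str, correct: bool) -> None:
        self._outcomes[decision_type].append(correct)

    def needs_review(self, decision_type: str) -> bool:
        """True when the observed error rate exceeds the type's budget."""
        outcomes = self._outcomes[decision_type]
        if not outcomes:
            return False
        error_rate = outcomes.count(False) / len(outcomes)
        return error_rate > ERROR_BUDGET[decision_type]

tracker = DecisionTracker()
for ok in [True] * 18 + [False] * 2:          # 10% errors: within budget
    tracker.record("sim_provisioning", ok)
tracker.record("bgp_route_change", True)
tracker.record("bgp_route_change", False)     # 50% errors on a risky action
print(tracker.needs_review("sim_provisioning"))  # False
print(tracker.needs_review("bgp_route_change"))  # True: escalate to a human
```

The same machinery applies whether the decision-maker is an agent or a person, which is Francis's underlying argument: what matters is that decisions and their outcomes are recorded and acted on.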

Guy Daniels, TelecomTV (50:44):
Great. Thanks very much indeed, Francis, for that. We're just about out of time, but I do want to squeeze in one final question, because we've had so many great questions come in before and during this program. It's difficult to select just one, but here it goes, and maybe we can get some concise, summarizing answers for this final question. How does this journey play out? Is there room for operators to create value at multiple points along the way, or is success an all-or-nothing proposition? Multiple points, or all or nothing? So let me come to all of you in turn. Francis, let's start with you.

Francis Haysom, Appledore Research (51:25):
It's multiple points. Even the hyperscalers don't play every part of the value chain. If you can work out the points where you can create value, start creating that value. Don't wait for everything to line up in a perfect scenario; start investing. And I would also say, rather than designing the perfect architecture, start trying this with people. Start trying this with enterprises. Don't try to do it for your whole network. Do it in, I don't know, one city or whatever. Try things out. Think about minimum viable products in this area. That's the way you're going to attach your value: get that stickiness, understand what that stickiness is.

Guy Daniels, TelecomTV (52:05):
MVPs and stickiness. Francis, thank you very much. Adaora, let's come across to you. Is it multiple points or all or nothing?

Adaora Okeleke, Analysys Mason (52:13):
I'd say it's multiple points. I'll put it at multiple points, and it's really about, like Francis said, understanding your strengths and exploiting those strengths, starting early, but more importantly, being ready to fail fast. If something doesn't work, be ready to fail fast and try something else. But don't wait, don't just wait. Keep experimenting. Be ready to learn through the process, and where there are failures, be quick to correct them and then move on. That would be my word for operators looking into this area.

Guy Daniels, TelecomTV (52:51):
Keep experimenting, prepare to fail fast. Adaora, thank you very much indeed for those comments. And Steve, we'll leave the last word for you. Multiple points for operators or all or nothing?

Steve Daigle, HPE (53:02):
I think we'll make it three for three. I'm going to go multiple points, but at different layers of the network. I think there's a connectivity play: you can build networks to handle the growing, more symmetrical traffic that we're seeing. You create a secure layer to the network with intelligence and automation, or on-demand scaling for deterministic traffic patterns, with traffic engineering, compute-aware routing, and even the AI-aware networking capabilities that we're beginning to offer and see on the market. Then, beyond connectivity, you offer space, power and fiber closer to the customer, closer to the workload: partner with the hyperscalers and neoclouds to offer a secure, assured network, and move up the value chain that way by hosting the hyperscalers' or neoclouds' own GPU or GPU-as-a-service offerings for them. And last, the service provider can offer their own GPU-as-a-service to the market directly.

(53:58):
And we see examples of that in Singtel and SoftBank. But the reality is, thinking across the different questions: you've got edge inferencing nodes you can place everywhere, which is a low-latency metro access play for our service providers. There's the DCI for centralized training hubs in the power-abundant regions, like Texas, where I am, and Virginia. And then you become the trusted partner for regional sovereign AI clouds, which is what we discussed earlier. Each of these has different SLAs, different margins, different competitive dynamics, but I think the operators who win will be the ones who can deliver all three and make them work together seamlessly.

Guy Daniels, TelecomTV (54:42):
Fantastic. And Steve, we'll take three for three any day. Thank you very much indeed. That is all we have time for though today. So thank you all very much indeed for taking part in this live program today. And you can find further information on the subjects covered in today's program by visiting hpe.com. And thank you for sending in all of your questions. We received a lot of questions, some really great questions both before and during the program and it's always difficult just to select a limited number. For now though, thank you very much for watching and goodbye.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Panel Discussion

AI adoption is accelerating across industries, with the generative AI market projected to reach $467bn by 2030. This creates a significant opportunity for service providers to move beyond basic connectivity and monetise their value as strategic AI enablers for business and residential customers.

In this webinar, Steve Daigle, global head of telco systems engineers at HPE, Adaora Okeleke, principal analyst at Analysys Mason and Francis Haysom, principal analyst at Appledore Research, joined TelecomTV’s Guy Daniels to explore how AI strategies are enabling network service providers to:

  • Simplify operations and enhance service delivery with AI-native automation that reduces costs while improving customer experiences.
  • Build AI-ready infrastructure to support edge inference services, ultra-high performance datacentre interconnects and distributed AI workloads.
  • Launch new revenue streams through sovereign AI clouds and AI-as-a-service offerings that differentiate beyond commodity connectivity.

Watch to discover how embracing AI-native strategies can accelerate your network transformation, unlock high-margin revenue opportunities and position your organisation as an AI-native telco in the rapidly evolving digital economy.

For more information please visit hpe.com.

Featuring

Steve Daigle

Global Head of Telco Systems Engineers, HPE

Adaora Okeleke

Principal Analyst, Analysys Mason

Francis Haysom

Principal Analyst, Appledore Research