The AI-Native Telco: Leveraging AI throughout the network

Ray Le Maistre, TelecomTV (00:27):
Our next session is focused on the AI-native telco. Obviously we've heard a bit about this already today, and the focus here is how to leverage AI throughout the network. Now, AI, and in particular generative AI, is having a profound impact on all industries, telecoms included, of course. And currently telcos are taking what might be deemed a bit of a cautious approach to deploying AI solutions within their networks. But how does a telco move from trialing AI in specific areas to becoming truly AI native? How does an AI-native strategy align with future network strategy? And of course with cloud native, as we've just been hearing. So let's meet our guest panelists. I'm going to ask them to quickly introduce themselves, starting at the far end with Lee.

Lee Myall, Neos Networks (01:20):
Thank you, Ray. Good morning everybody. Great to be here and thank you. So Lee Myall, CEO of Neos Networks. We are a UK national owner and operator of a fiber network.

Ray Le Maistre, TelecomTV (01:32):
Okay, thank you, Lee. John?

Dr. John Naylon, AWS (01:35):
Yeah. Hi folks. My name is John Naylon. I work at Amazon Web Services, AWS. Longtime telco guy; I used to work for AT&T and for startups in the telco space. I look after solutions architecture for some of our larger Western European telcos.

Udayan Mukherjee, Intel Corporation (01:50):
Okay, good morning. My name is Udayan Mukherjee. I'm an Intel senior fellow. My responsibility is product design for Intel, targeting the radio access network, the packet core, as well as the edge. And these products are not just silicon, but also the software and systems that go into the virtual RAN or the cloud RAN.

Ray Le Maistre, TelecomTV (02:11):
Okay, thank you Danielle.

Danielle Rios, TelcoDR & Totogi (02:13):
Hi, I'm Danielle Rios, known as DR, or TelcoDR. That's my Twitter handle; feel free to follow me. I am the founder and CEO of a thought leadership group called TelcoDR, but I also have a startup named Totogi that's trying to rewrite all the software that runs the telco industry, on the premise that if you were to do it over again with the modern tools we have today, like public cloud and AI, you would do it a lot differently. And so that's our mission and we're really excited about that.

Gabriela Styf Sjöman, BT Group (02:43):
Exciting, yeah. Gabriela Styf Sjöman, and I'm the managing director of research and network strategy at BT Group.

Ray Le Maistre, TelecomTV (02:51):
Okay, thanks everybody. Now Gabriela is our co-host for this discussion and we'll start this session with a special DSP leaders address from the podium. So Gabriela, if you would like to make your way over.

Gabriela Styf Sjöman, BT Group (03:05):
Yes, thank you.

(03:11):
Well, first of all, thank you Ray, and thank you to the TelecomTV crew who invited me up here on stage to share a little bit with you about the important, central role that AI plays as part of our network as well as our business strategy. And as I shared earlier this morning, we try to approach any technology from two lenses: profitability, saving me money, or revenue, making me money. And from that lens, we also approach our strategic options. So on the one hand we see AI for networks: the role of AI, or what I do to adopt AI as an organization. And I'll talk about the strategic options that we have and how we're executing on them. But equally important, of course, is the role that we as a communication service provider can play in the value chain to help our customers in their adoption of AI, which is what we call networks for AI.

(04:28):
So once again, two lenses: AI for networks, networks for AI, where the first one is much more about the network per se, and networks for AI is both about networks and about our business strategy. Now, when it comes to AI for networks, it's really not something new; we have for decades been researching, innovating and adopting. Before it was called AI, it was really about automation. And I'm proud to say that BT Group has the largest patent portfolio of any network operator in EMEA today. So we're doing a lot of invention, a lot of innovation and adoption of AI for networks, and predominantly, like many others, we're looking at use cases such as network planning, monitoring and optimization. Really, those are the basic use cases. We can do much more, but we've done plenty in that space. Moving forward, more mid-term, we have other themes.

(05:29):
So the first one, of course, is what we call intelligent operations. Another one, which is very much on the rise and where we're also doing quite a lot of invention and innovation, is what we call AI for sustainability. And the third one is AI for cybersecurity, because of course trust is at the center of our network strategy as well. When it comes to AI for cybersecurity, we also have an extensive patent portfolio, and we look at the use of AI for detection, for analysis and for response. More mid to long term, we're looking at the use of AI for intent-based networking, closed-loop automation, fraud detection and predictive maintenance. But of course those use cases are slightly more complicated because, as we spoke about earlier this morning, they require a holistic approach. They require very much a redesign of processes, a lot of new skill sets, but above all a different type of data fabric as well.

(06:40):
And central to all of this, when I talk about the data fabric, is the democratization of data; that is also central to our network strategy. How do we make sure that anybody, in a trustworthy manner, can access the data that they need to innovate for new use cases? But I also want to talk about the networks for AI strategy that we have, because I think it is now time for us as an industry to really talk about the role that we as telcos can play in the value chain as AI is increasingly adopted in society. And who knows, right? We've done a lot of research. My research team has been looking at this extensively, because AI is just a big topic, but there are going to be multiple use cases, multiple use cases that use different types of language models and that will require different types of performance.

(07:43):
And we really don't know when some use cases will be adopted. But what we do know is that we have to evolve from our legacy of building networks for the big mass to building networks which are actually much more about tailor-made performance. Differentiation will no longer be only about speed. Speed will matter, but it will not be the differentiator. It will not be about bundles; it will be about delivering the outcomes of the specific AI that our customers need. And you heard my colleague Laura speak about customer centricity. We have to be much more customer centered. We have to be much closer to our customers to try to understand how they will use AI, what the ecosystems will look like, when different AI use cases will actually materialize, and what type of network performance and networks will be needed at that time.

(08:46):
So the build and operation of networks for AI is going to require a very different mindset. And then the role of standardization, of course, is going to be key. We need to accelerate standards. Not everything will require standards, but we know that in our industry standardization is key for scalability, and this is something that we all need to focus on. But all in all, despite all the challenges that we see, of course, the skill sets, once again the data fabric, the maturity of processes and people, I think we have tremendously exciting times. I think that AI is going to be very disruptive, much more than cloud per se, but if we really want to monetize the opportunities that AI will bring, we have to rethink. And I would like all of us to talk slightly less about the technology and talk much more about what are the new products and services that we can offer as society begins to adopt AI in a true sense, what's the role that we can play in that value chain, and how do we win? And of course the foundation of this is going to be cloud native; the foundation of this is going to be automated networks. But we have to talk much more about the product that we're going to be selling. What is the demand, what is the product, and how do we build it? And that should be driving a roadmap towards an AI-native telco. So, challenges, but more exciting times ahead. And with that, I hand back to you, Ray.

Ray Le Maistre, TelecomTV (10:29):
Okay, thank you very much, Gabriela. Round of applause for Gabriela, please, for setting the scene here. And I like the theme we're seeing here: we're really focusing in much more on business cases and monetization rather than technology, which is a good thing. And I really look forward to the time when speed is not the differentiator. I hope that gets through to the marketing teams; that would be good. So let's kick off here and broaden this conversation out to our panel. And let's start off by talking about how operators can prioritize AI use cases that are actually going to deliver rapid return on investment while still laying the foundations for the AI-native architecture that's going to be needed in the future. So Danielle, I'm going to come to you first here. I know you've got a very strong view here and I'm expecting a good lively panel. We've got a great mix of people from different parts of the industry, and I know that nobody here is a shrinking violet. So let's get stuck in, Danielle.

Danielle Rios, TelcoDR & Totogi (11:41):
Well, coming off of the previous panel, I think there's something really interesting that the industry needs to realize, which is that cloud and cloud native technologies have been around for years. They're very well established. There's a lot of tech talent that you can access, there's AWS, Azure, and the universities are teaching it. That is not the case with AI. It is super experimental. I mean, in terms of accessibility, really, three years ago, who thought that putting a chat interface on top of AI was the breakthrough we needed? But it's made it massively accessible: to my mom, who's 80, to kids at school, whom the universities are trying to block from using it, and now it's out in the wild with your customers and enterprises and things like that. So it's super experimental. So when you're starting to think about your AI strategies, what I see telcos doing is kind of overthinking it.

(12:40):
You're kind of doing it the way you've done other technologies that are very well established: blueprinting it, talking about it, strategizing about it, maybe a proof of concept. And I advocate that you've got to just deploy it straight to production. Literally start with something small, a real business value case. If the business is yelling at you about it, it's valuable. If there are escalations about it, if you're doing daily meetings about it, it's valuable. Start with that, scope it down, get it into production, hook it into an app. It's actually really easy to do; Totogi can go up against any vendor and we can have impact. Just scaffolding that little piece up and getting users using it is super valuable. And then from there, iterate and expand, right? You'll see new models being announced almost every other week, it feels like, new technologies, and you're going to rip out whatever you put in your first iteration; you're going to replace it the next day, try a different model. So, I mean, by the time you do your POC and deploy it, you're going to throw it away. So you just got to go straight to production, and that is not your culture. And I don't hear enough leaders talking about the human, HR impact. HR should absolutely have a seat at the table talking about how you're transforming the people side of this, so they're ready for rapid experimentation, for things not working. You just got to go. This is a rocket ship and we're off. There you go.

Ray Le Maistre, TelecomTV (14:13):
And John, AWS works with a lot of telcos on their AI strategies. Are you seeing this approach? Are you helping them to overcome this overthinking and helping them to focus on some specific use cases?

Dr. John Naylon, AWS (14:33):
Yeah, I mean, I'd say so, because if you think about cloud services, one of the fundamental principles is that they're self-service, right? So if you want to go and consume an IT resource, whether that's foundational compute, storage, databases, or whether it's something newer like LLMs or whatever, the idea is you can just go and do that without needing an intermediary. So I kind of interpret the question as being: should telcos be looking at the business imperative, something that the business actually wants to do, or should they be thinking about building a platform to enable things? And I would say much more on the do-what-the-business-wants-to-do side. And you can do that in a self-service way using the primitives that cloud companies and others offer already, without necessarily needing a large intermediary layer. You may need some kind of guardrails that you want to configure, that implement some standard safeguards across the business.

(15:22):
Okay, fair enough. But most things that you will use will offer those as features, so you don't need to build this kind of layer. And I'd say there is a risk in pausing and building a layer, because actually that can very easily become the legacy that makes you think, oh my god, why did we implement this five years ago, or five months ago in the case of AI, because it is moving extremely quickly. Which means that things go from being a useful platform to being a legacy pile of junk that you wish you didn't have a lot more quickly. So this is a risk, I think.

Ray Le Maistre, TelecomTV (15:55):
Okay. And Lee, obviously Neos is not a consumer play, but the AI potential and possibilities are just as relevant to you as to any other operator. What's been your approach here?

Lee Myall, Neos Networks (16:12):
So I think one thread that's been running here is get on and get going, don't overthink it; 100%. And there's another theme that's come through here, so I think from Mani in the first session and then Laura in the last, and that's the why, which should inform things. You need a strategy, but you need to get going. And so you've got this sort of high-level thinking here, which for me, with our business, with this fiber network, means we need to be basically vertically integrable and horizontally integrable. Vertically, basically because we've got customers that then incorporate us into solutions, so we need to be as integral as possible for them. Horizontally integrable means that we have a network that goes a lot of places, but it doesn't go everywhere, and so we need to be easily integrable with other networks for ease of deploying a solution.

(17:07):
And I think where those two interlock, for me, is sort of a beacon of customer value and customer centricity. At the strategic level we call that making connectivity work in our business, and at the strategic level AI is all about enabling that. Then the getting going is bite-sized stuff that basically shows the organization it's a great thing. And we've found that, for example, wherever you're handling inbound structured data, something that's coming in from the supply chain, stop manually handling it. I mean, it's very low-hanging fruit, but just get going with things like that. And then of course you've got that thing in the middle, which is the data landscape. And I think for telcos, in IT, on the technology side, digital transformation, as it was called, and then cloud-first, cloud-native, have changed the way that world works enormously. But go into the ops world in telcos and there's a lot of work to do. So as far as I'm concerned, you've got to basically have a tech-ops mentality, not a technology-and-operations mentality: the equivalent of what DevOps did for IT, I guess.

Ray Le Maistre, TelecomTV (18:17):
Okay. And this question was particularly tailored because it mentions how telcos can get into this quickly but at the same time lay the foundations, so they don't just go down a rabbit hole and have to come back out and start again. Are you seeing evidence that the telecom operator community is able to juggle those two elements at the same time?

Udayan Mukherjee, Intel Corporation (18:47):
Yeah, that is already happening. I agree that cloud has been there for a long time, with a lot of teaching and so on, and generative AI is very recent, but AI as a technology has been there, and anomaly detection has been there in the operators for a long time. The first of the three levels they are actually looking into is dynamic resource utilization, whether that's spectrum or whether it's the platform. And platform means that any modern architecture you're running the RAN or packet core on is a multi-core architecture, and every core has multiple different power states, sleep states, as we call them. Traditionally it used to depend on operating system governors to transition those, but a very simple thing is that now we are exposing this to the applications. And this is the first element of AI technology that I know Ericsson is looking into, Samsung is looking into, and we are working closely with them, saying: here is the multi-core silicon architecture, here are all the sleep states, and the load changes at any given time; especially in base stations, the load fluctuates, hardly

(19:58):
30% utilization, 40% utilization. Why are you spending so much power? Consolidate some of these workloads, bring them onto some of these cores, and shut the others down. So a lot of those technologies are actually at the application level now, and this is the first area where we see some of the key TEMs already building on it. The second one is the telco level, not just the RAN. A lot of companies are building there, and we are working with a major operator in the US who wants to build a copilot for operations. He has a huge amount of logs and 35,000 radios that he wants to manage, and they have people trying to find out what the fault is. So what we have done is take an open-source LLM and fine-tune it with telco-specific 3GPP data; by the way, I own the 3GPP team as well in my organization.

(20:53):
So we trained it, and then we essentially put a RAG pipeline in front of it for that operator, and the operator is simply feeding in the logs. The pilot that is going on is really about how quickly I can do a root cause analysis. That's another application of AI that people are already using. So those are two. The third one, where most of the challenges are, is AI native, where we are talking about channel estimation, MIMO beam management; 3GPP already talks about location services. This is where a lot more experimentation is going on: ROI, and am I improving the wireless performance? Am I getting 2 dB more by using AI-based channel estimation instead of MMSE or MLD? There are some startup companies looking into it, and a lot of companies like Ericsson, and Intel research as well, are looking into those.
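The log-triage copilot Udayan describes, an LLM fine-tuned on telco material with a retrieval pipeline feeding it operator logs, can be sketched in miniature. Everything below is an illustrative toy, not the actual deployment: the knowledge-base entries, function names and prompt format are invented, and a simple word-overlap ranking stands in for real vector retrieval.

```python
from collections import Counter

# Toy corpus standing in for indexed runbook / 3GPP-derived notes.
KNOWLEDGE_BASE = [
    "RACH failures often indicate PRACH misconfiguration or uplink interference.",
    "Handover drops between cells can stem from neighbor-list gaps.",
    "High PDCP retransmissions usually point to transport-network packet loss.",
]

def tokenize(text: str) -> list[str]:
    return [t.strip(".,").lower() for t in text.split()]

def retrieve(log_line: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank corpus entries by word overlap with the log line (stand-in for vector search)."""
    query = Counter(tokenize(log_line))
    scored = sorted(
        corpus,
        key=lambda doc: sum(query[w] for w in tokenize(doc)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(log_line: str) -> str:
    """Assemble the context-augmented prompt that would go to the fine-tuned LLM."""
    context = "\n".join(retrieve(log_line, KNOWLEDGE_BASE))
    return f"Context:\n{context}\n\nLog:\n{log_line}\n\nQuestion: what is the likely root cause?"

prompt = build_prompt("cell 4711: RACH failures spiking, uplink interference suspected")
```

In a production pipeline the retrieval step would query an embedding index over the full log and documentation corpus, and the assembled prompt would be sent to the fine-tuned model; the shape of the loop is the same.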

Ray Le Maistre, TelecomTV (21:48):
Okay. And Gabriela, is there a danger that certain teams within a network operator might bite the bullet and start pursuing a development path that the rest of the organization isn't aware of? You might start to create almost silos of knowledge and silos of development. Is it easy to manage things while still encouraging innovation?

Gabriela Styf Sjöman, BT Group (22:23):
Very much so. And I think there's always this trick, this balance, and by the way, I fully support the experimentation part, and I want to come back to what needs to be true for that to happen. But to your question, Ray: absolutely. And I think it's always a trick: how much bottom-up innovation do you want and encourage versus top-down? Of course you want bottom-up; you want teams to feel empowered to innovate and to solve the problems or build solutions where the problem arises. At the same time, you also need that top-down business driver to prioritize, but above all for scalability, because scalability and reusability will not happen if you only leave it bottom-up. What I mean is that you have lots of teams developing their own solutions, treating data as their data. By the way, they say it's my data.

(23:21):
I say it's not your data, it's the company's data. But also that modularity, the reusability of different modules like Lego pieces: if you allow everybody to build their own solutions, you'll not be able to reuse those Lego pieces. So that is happening as we speak, and I think not only in BT; it's happening everywhere. And the question is how do you strike that balance? But I think this also goes to that experimentation. Well, you want experimentation, and there's a lot of experimentation going on at what I call the domain level. So that is what you say, the silos; it's per domain. And that needs to happen initially, by the way, because that's how people learn. But the material impact of AI is going to be holistic: that north star of intent-driven networking, et cetera. And to do that requires a totally different way of exploration, but the risks also become higher.

(24:24):
And just listening to Danielle and some others, I was thinking, okay, that is true, but what needs to be true? I always ask my teams what needs to be true for something to happen. So, okay, what needs to be true for telcos to have the courage to explore faster and launch faster? Because the truth is that we are so regulated, and we're such critical infrastructure, that if something happens with the network, we're in deep trouble; compared to many over-the-top players, for whom, if something happens, it's not the same trouble. So how do you isolate? I'm thinking as we speak: what needs to be true in terms of architecture for resilience? Recovery needs to be faster. How do we automate recovery so that when something goes wrong, you can recover fast? How do you isolate problems better? What types of architectures can you modularize and architect so that we can explore faster? Because today, you're absolutely right, we're doing all these explorations and it takes six months up to a year, and by the time you launch

(25:34):
it, it's obsolete. It's obsolete.

Ray Le Maistre, TelecomTV (25:36):
So what's the low-hanging fruit, then, that can be addressed now without scaring the management level or creating problems?

Danielle Rios, TelcoDR & Totogi (25:48):
Well, probably the biggest use case of AI has been in customer support, right? Structured data; you have huge homogenous groups of experts at different levels. You can start with your customer support, where the stats are really high, like 60% billing questions, and start there: get something deployed, take your most common support ticket and make AI solve it completely on its own. And the way that we've done this with our customers is we put a human in the loop and have the AI write the answer, and we have our AI engineer literally sitting next to the person, watching them work, and they see, oh, this wasn't right. So the AI engineer goes and fixes it, deploys it instantly, like that minute, and then the worker is like, oh wow, you fixed it. So you start to build a little bit of a flywheel of success, where the human is starting to trust the AI: hey, it is working.

(26:49):
And the AI engineer is seeing where those exception points are. Maybe it starts at 75% correct, then 80, then you're at 95%. And then you get to the point where we're like, we don't need humans in the loop; and I think that was on the very first keynote panel, you start to take the human out of the loop. So that's now running: what percentage of your support tickets could be a hundred percent solved by AI with no humans? And that number, your goal, should be 100%. It should be 100%. And so, back to your point of we're either making money or saving money, this is a huge saving-money idea. It's valuable, it's measurable. You can say, did a human touch it or not, and start to track that on a daily or weekly basis. And that is impact. And it starts to build that flywheel of success and excitement in the organization that this works.

(27:39):
And before you can start to give it to your customers and actually help your customers, you have to be able to sit down in front of them and say: we use this ourselves, we know how to get this going. And so, with a vendor, that's like my number one question. I'm looking at who you guys are trusting on your AI stuff. Are you asking them: how do you use AI every day? Are you using it? Show it to me, right? At Totogi, everyone uses AI. We give them a measure every Friday: what percentage of your work was done with AI? We have sniffers on everyone's laptop, and that gives them a goal: 75% of your day should be spent in AI. The only way you're going to learn it is to use it. And now we have a chat, which I could open up right now, where everyone's literally saying, did someone test this new model? Did you try this? And there's excitement around it instead of fear that this is coming to take my job. And that's today's blog: if you managed to read TelcoDR's blog today, it was about speaking very openly about the fact that it is going to impact jobs, but it's also going to create a bunch of jobs, and we've got to get our people ready for that.
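The human-in-the-loop flow Danielle describes, where the AI drafts every answer, a human approves or corrects it, and the team tracks what share shipped untouched, is straightforward to instrument. This is a minimal sketch with invented names and a deliberately crude "untouched draft" check; real tooling would also log ticket categories, timestamps and reviewer identity.

```python
from dataclasses import dataclass

@dataclass
class DeflectionTracker:
    """Tracks what share of support tickets the AI resolved with no human edits."""
    total: int = 0
    auto_resolved: int = 0

    def record(self, ai_answer: str, human_final: str) -> None:
        # A ticket counts as auto-resolved only if the human shipped the AI draft untouched.
        self.total += 1
        if ai_answer.strip() == human_final.strip():
            self.auto_resolved += 1

    @property
    def automation_rate(self) -> float:
        return self.auto_resolved / self.total if self.total else 0.0

tracker = DeflectionTracker()
tracker.record("Your bill is high due to roaming charges.",
               "Your bill is high due to roaming charges.")   # shipped as-is
tracker.record("Please restart your router.",
               "Please restart your ONT, then the router.")   # human corrected it
```

Reporting `automation_rate` daily or weekly gives exactly the "did a human touch it or not" metric described above, and a rising trend is the flywheel made visible.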

Udayan Mukherjee, Intel Corporation (28:47):
Absolutely, that's true, because the copilot for operations that I talked about is exactly that: how many of these tickets can you automate

Danielle Rios, TelcoDR & Totogi (28:54):
Correct

Udayan Mukherjee, Intel Corporation (28:55):
And fix them, currently with people, but slowly but surely try to

Danielle Rios, TelcoDR & Totogi (29:00):
Absolutely

Udayan Mukherjee, Intel Corporation (29:00):
Improve the cost structure.

Ray Le Maistre, TelecomTV (29:03):
So, I mean, return on investment has got to be front and center for a lot of the strategies and a lot of the thinking, but there are lots of ways to achieve that and lots of ways to get there. So we're advocating here finding some key areas to start with, some proof points that aren't going to knock anything else over or create any regulatory issues, because that's a great point: there are a lot of rules that need to be aligned with. If we can move on now and take a look at whether there are particular parts or layers of the network that are going to benefit first from gen AI models. There's already a lot of talk about how this can impact certain network optimization tools, for example, but are there particular parts of the network that can benefit first, and what does that mean for new data governance hurdles? Do those crop up as we start to introduce gen AI? So John, if I could start with you here and get your perspective, obviously from a company that's been at this for a little bit longer and is looking at it from a slightly different angle.

Dr. John Naylon, AWS (30:33):
Sure. Yeah. So I would say it's not so much layers of the network, but more the locus. And it's really where the real world meets the machine world, or where humans meet the machine. So one classic example is customer service, because there you've got real customers, subscribers, dialing in with a problem that you need to diagnose. That's one area. But telcos actually have quite a rich set of internal customers as well. So you might have a field force of people installing fiber or climbing towers or things like that. Those are internal customers, and it's those scenarios where you have a complex, messy real world that AI systems now are actually quite capable of describing in human-understandable terms. And you can compare that description in human-understandable terms with how you've configured the system and how it's supposed to be, and you can look for anomalies there.

(31:24):
So that's actually quite a rich seam, particularly in the field of legacy as well, actually. So you can look at: okay, what the configuration management database thinks is supposed to be installed in this location is this set of stuff, but there seems to be some extra thing here with a sign on it that says do not switch off; we'd better understand what that is before we switch it off. So that's a real-world example of something we're doing with one of our customers in Western Europe. So yeah, it's not so much individual layers, I would say; it's where you have the machine world meeting the human world and the cracks that can appear there. That's a really rich seam.

Ray Le Maistre, TelecomTV (32:01):
Okay, Lee?

Lee Myall, Neos Networks (32:02):
Completely agree with John on that one. And you touched on structured data. We are sort of being gifted structured data inbound, as well as, hopefully, working with structured data within our organization. So change management of a certain kind was an early win for us: structured data coming in. We have a lot of partners, as does literally any network provider, and you need to work to understand what the nature of that change is, what services it's going to impact, when it's going to happen, et cetera, et cetera. And AI and automation enable us to do that in just a fraction of the time. So, a few things on that. You were picking up on the point about this being critical infrastructure, et cetera; we obviously have to be very risk aware, but we also have to be not averse to ambition and progress.

(33:00):
So these are things that actually are quite containable. You can gain a lot of advantage, you can learn how to work with AI effectively, before you move on to things where maybe you'd be exposing yourself a lot more to third parties and having sort of AI-to-AI interaction. So things like that have really helped us improve turnaround and manage change. I think the other two areas, which also speak to customer service and experience, would be capacity management and, obviously, inbound first-line interaction with customers; those are the quick and easy and obvious wins I think everybody's focusing on. But I certainly agree: we tend to think in a very layered way in telco, for obvious reasons, but actually it's more about points of interaction and how they can be made much more efficient, so you can save talent for the stuff up here. Typically in telco, that's third line, of course, and engineer-to-engineer resolution.

Ray Le Maistre, TelecomTV (34:04):
And, incredibly, I need to keep an eye on the time here, because we're rapidly approaching lunchtime as well. But I do want to spend some time on investment models and what AI means for the way that network operators are going to spend their money. There's a lot of compute resource required here to enable AI-native strategies and AI-native operations, and there are lots of different ways to approach this. Do you build your own? Do you partner? Are hyperscalers the best collaborators? And what does this mean not only on the compute side, but for the network in general? Because AI and cloud, as we know, are kind of useless unless everything is really well connected. So Lee, if I can just start with you before we get stuck into the compute side of the investments, and ask what kind of impact you expect AI to have on the demands on networks and demands on capacity. Because this is a topic that can often be missed out a little bit, but we're seeing some business models, like Lumen and Zayo in the US, who are now all in on delivering AI connectivity, basically data center interconnect as it was known. So what kind of impact is this having on networks?

Lee Myall, Neos Networks (35:45):
So it's reasonably early days, but the landscape is changing. I'm fortunate to come from the position of having run a DC business that had some instances of AI sitting in there, and of watching how those things mushroom and grow and consume connectivity as well as kilowatts and megawatts of power. So everybody's chasing power, mostly. And if you're going to build a DC to host these things, it's a lot more expensive to try and drag power to a DC than connectivity, every time, frankly, no matter how remote. What's happening with that is a dispersal, so it's not all about West London anymore. And with AI, if you're training a huge large language model, you don't necessarily need to be that close to the end user. It's the inference piece that needs proximity, and that comes to the edge.

(36:44):
So there's this distribution, and there's a large pipeline, speaking for the UK now, of megawatts of planned DC deployment. If 50% of that comes to be, that's going to change the landscape. And if edge becomes what it's forecast to be, then you have to be careful; there's a local authority term for this, but think of tier-one cities. If you go from London down to Manchester in terms of population, then from Manchester down to a hundred thousand, you've got 85 cities that in theory are going to be edge locations. That's going to require data centers of an edge size, if you like, and it's going to require a level of connectivity, of capacity and quality, to drive it. So it's changing, and I think it will change rapidly as an accelerator over the next few years.

(37:36):
Going back to the other piece, though, which I think we're touching on: build or buy. I've had a little bit of exposure to that. I think it's got to be a very large organization for build to actually work for them. And it's not just about buying a boatload of Nvidia; there's a lot of complexity way beyond that, and then you've got the financials around it. I've interacted with a very large organization spending 40 million a year on GPU as a service that thought, hey, let's buy, and got into some conversations with them. Even at that level, it literally was not that easy to come up with a model that would actually be cost effective for them. There's a lot of value add in there, not just the hardware. So that debate will rage, no doubt.

Ray Le Maistre, TelecomTV (38:30):
Okay. But we are starting to see this happen. We're starting to see telecom operators investing in their own AI compute infrastructure. They see not only a chance to use it themselves, but a chance to develop revenue-generating business opportunities from it. So how is this going to weigh up? Is it just going to be different for everybody? Is there a scale threshold? I sense some pretty strong views.

Danielle Rios, TelcoDR & Totogi (39:00):
I think it's a no-brainer: go the hyperscaler partnership route, right? I mean, they have the chips. It's not just Nvidia: AWS has Trainium and Inferentia, Google has its TPUs, Azure has Maia. I mean, where are you in the line to get Nvidia servers? Are you behind Elon Musk? Are you in front of him? That guy's just swallowing them whole. There's a lot of complexity; it's not just power, it's the chips too. And so if you are going up against another telco that goes the hyperscaler route, now they're building all the applications, and that's really where the value layer is here. It's not at the LLM. I see people saying, oh, let's go build an LLM. That's a commodity, and we're seeing that happen. I think prices on LLM APIs are down like a hundred x. It's a race to the bottom where you have to double just to stay in the race. This is terrible for the LLM providers, and telcos are talking about building LLMs? I'm like, dude, hard pass, right? But you should be building those applications. So don't build up all that infrastructure. Focus on the apps. That's fricking hard, dude. Lots of hard work.

Udayan Mukherjee, Intel Corporation (40:14):
But that requires real-time inferencing. There's a difference here. Training, yes, cloud. But I have operators saying: I don't even want my prompts going out to the cloud, I want something secure in my infrastructure, and I have so much underutilized infrastructure, so what are you doing with it? So for every roadmap that we're talking about, we are making sure that it's AI-inferencing capable. Real-time inferencing: if you're doing channel estimation on massive MIMO, on an AI-native RAN, you can't do that in the freaking cloud. Your base station is running at that far edge; you have to do it there. Now, if you're doing model training, yes, you do that in a hyperscaler cloud, fine, but there is a lot more inference.
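The latency argument here can be made concrete with a back-of-the-envelope budget check. The sketch below is purely illustrative: the function name and all figures (a 0.5 ms 5G NR slot, an assumed 0.2 ms model inference time, an assumed 30 ms cloud round trip) are hypothetical assumptions for this example, not measurements from any vendor.

```python
# Hypothetical latency-budget check: can a per-slot RAN inference task
# (e.g. channel estimation) tolerate a round trip to a remote cloud?
# All numbers are illustrative assumptions.

SLOT_MS = 0.5  # 5G NR slot duration at 30 kHz subcarrier spacing

def fits_in_slot(inference_ms: float, network_rtt_ms: float) -> bool:
    """True if model inference plus transport fits within one slot."""
    return inference_ms + network_rtt_ms <= SLOT_MS

# At the far edge, transport latency is negligible and the budget holds.
edge_ok = fits_in_slot(inference_ms=0.2, network_rtt_ms=0.0)    # True
# With a ~30 ms round trip to a regional cloud, the budget is blown.
cloud_ok = fits_in_slot(inference_ms=0.2, network_rtt_ms=30.0)  # False

print(edge_ok, cloud_ok)
```

Even under generous assumptions, any transport delay measured in milliseconds dwarfs a sub-millisecond slot budget, which is the essence of the "you can't do that in the cloud" point.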

Danielle Rios, TelcoDR & Totogi (40:54):
But the scale that you need by the time you're done doing all the training is much reduced.

Udayan Mukherjee, Intel Corporation (40:58):
Training is once.

Danielle Rios, TelcoDR & Totogi (40:58):
Can you predict that now? I don't think you can.

Udayan Mukherjee, Intel Corporation (41:01):
Training is offline, often once; fine-tuning maybe a couple of times. But inferencing is constant. Your workload is changing, with massive inferencing going on.
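The asymmetry being described, a one-off training outlay against an inference bill that accrues indefinitely, can be sketched as simple arithmetic. The function and every figure below are hypothetical, chosen only to illustrate the shape of the trade-off.

```python
# Illustrative sketch of the panel's point: training is (roughly) a one-off,
# fine-tuning happens "maybe a couple of times", but inference cost accrues
# every month the workload runs. All figures are invented for illustration.

def cumulative_cost(months: int,
                    training_cost: float = 1000.0,    # one-off training spend
                    finetune_cost: float = 50.0,      # per fine-tuning pass
                    finetunes: int = 2,               # occasional fine-tuning
                    inference_per_month: float = 200.0) -> float:
    """Total spend after `months` of a constant inference workload."""
    return training_cost + finetunes * finetune_cost + months * inference_per_month

# Day one: only the one-off costs have been paid.
print(cumulative_cost(0))   # 1100.0
# After a year, the recurring inference spend dominates the training outlay.
print(cumulative_cost(12))  # 3500.0
```

Under these assumed numbers, inference overtakes the one-off costs within six months, which is why where the inference runs matters more over time than where the training ran.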

Danielle Rios, TelcoDR & Totogi (41:11):
But I wouldn't also make the leap of like, oh, let's turn this into a moneymaking business, right? At the same time,

Udayan Mukherjee, Intel Corporation (41:17):
Moneymaking machine. If they can make it then

Danielle Rios, TelcoDR & Totogi (41:19):
I mean this is like,

Udayan Mukherjee, Intel Corporation (41:20):
Yeah, we will be very happy.

Danielle Rios, TelcoDR & Totogi (41:21):
It'll be swallowing more money. But you're swallowing a whale whole, right? You're eating the entire elephant. I mean, it's still early days in AI, so early days. I think it's a lot to take on.

Ray Le Maistre, TelecomTV (41:36):
Yeah.

Gabriela Styf Sjöman, BT Group (41:37):
I also believe it's about balancing the culture we have of "build it and they will come" with a need to start exploring. I do think it's early days, and I think it will be very challenging to go and build it; some have done "build it and they will come". I believe much more in partnerships: try to understand where the value lies in the value chain,

Danielle Rios, TelcoDR & Totogi (42:02):
What works, what doesn't; you can throw it away. Exactly. You don't have a big

Gabriela Styf Sjöman, BT Group (42:05):
Part. And if the partnerships you have fail, it's not as expensive. Usually, we in telco fail slowly and very expensively. We need to fail fast; we need to fail fast and cheap. So I do think we need to find those models of exploration, and partnerships are going to be key for exploration.

Ray Le Maistre, TelecomTV (42:24):
Now John, I do need to come to you. Obviously you're right in the thick of this, and I'm sure you're having conversations, because one of the drivers for some of the telcos investing in AI factories is the sovereign cloud and sovereign AI opportunities that are now becoming more and more obvious. So how are the conversations with your telecom operator partners going?

Dr. John Naylon, AWS (42:56):
So that is quite a good example of the kind of targeting that I think would be necessary for telecom operators to succeed in this space. Because if what you're really saying is, let's just go and offer a generic GPU-as-a-service offering, the bad news is that's a highly competitive space with some

(43:15):
well-funded companies already doing it. Being a late entrant to a hyper-competitive market, where you actually have a fairly parochial, localized offering, is not a good set of factors for success in general, if it's a generic service. If it's a highly targeted service, one that might be looking at public safety opportunities, the public sector in general, something which is localized to a sovereign location, then because that's so much more targeted, it has a higher probability of success, I would say. And it's also realizing that, because you are targeting it, your customer segment is likely to be able to afford a premium price for the kind of wrap you're putting around it, the sovereign wrap, right? So I would say that is a possible avenue for success. A generic GPU-as-a-service offering, I don't see that being successful

Ray Le Maistre, TelecomTV (44:08):
Personally. Okay. Agree.

Dr. John Naylon, AWS (44:10):
Well, that's obviously from a partisan

Ray Le Maistre, TelecomTV (44:12):
Point of view, no, sure. Yeah, absolutely. Well, I can sense that this is a conversation that's very likely to spill out into the next hour and a half, because we do need to end at this point. We are out of time for this session. For our online audience, stay tuned, because the discussion is going to continue on Extra Shot with Tony Poulos. So please send in your questions for Tony and his guests. And don't forget to take part in today's poll, which you can find on the TelecomTV website. For those here, lunch is now being served just outside, and there's plenty of time, of course, to take part in our charity pinball tournament, which I noticed has a very musical theme this year; I'm going to be stuck in there at some point during the lunch hour. But we're going to be back here on the main stage in about 90 minutes, so we will see you again at 2:00 PM UK time. In the meantime, a round of applause for our panelists, please.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Panel Discussion

This dynamic panel explores how telcos can move beyond cautious experimentation to become truly AI native. Experts from BT, AWS, Intel, Neos Networks and TelcoDR share insights on building AI-ready infrastructure, accelerating deployment and balancing innovation with risk. Topics include using generative AI in network operations, customer support automation, the cloud-native foundation for AI, and why partnerships – not just infrastructure – are key to monetising the AI opportunity.

Broadcast live 3 June 2025


Featuring:

CO-HOST

Gabriela Styf Sjöman

Managing Director Research and Networks Strategy, BT Group

Danielle Rios

CEO, TelcoDR, Acting CEO, Totogi

Dr John Naylon

Senior Solutions Architect Manager, AWS

Lee Myall

Chief Executive Officer, Neos Networks

Udayan Mukherjee

Senior Fellow, Network and Edge Group, Intel Corporation