Telcos and the LLM opportunity

Guy Daniels, TelecomTV (00:24):
Hello, you are watching the AI Native Telco Summit, part of our year-round DSP Leaders coverage. I'm Guy Daniels, and for our second discussion today, day two of the summit, we are going to take a closer look at large language models and the opportunity for telcos. How will the development of LLMs impact telcos? Should they develop their own? And whatever strategy they adopt, how can these telco LLMs be implemented and monetized? Well, I'm delighted to say that joining me on the program today are Michael Clegg, Vice President and General Manager for 5G and Edge at Supermicro; Shujaur Mufti, Senior Manager, Global Partners Solution Architecture, Telecom, Media and Entertainment at Red Hat; Scott Cadzow, Chair of the ETSI Technical Committee on Securing Artificial Intelligence; and Aaron Boasman-Patel, Vice President Innovation with TM Forum. Hello everyone. It's really good to see you all. Thanks so much for taking part in our discussion today. Now, about a year ago or so, we started seeing interest in developing telco-specific large language models, LLMs. So what's the thinking here? What are the potential benefits, and would such a model be used for customer engagement, internal operational processes, or perhaps as a new digital service enabler? Aaron, what are you seeing here? What are your views? Perhaps we could start with you, please.

Aaron Boasman-Patel, TM Forum (02:06):
Absolutely. Thank you so much, and it's a really great question. I think there are two main parts to the answer. The first, of course, is around the customer and really driving better customer engagement. And one of the first things we have to talk about there is those intelligent chatbots and agents, which we use from a customer experience perspective. The real benefit for telcos in developing these specific models is the vast amounts of data they have. The idea of intelligent agents and large language models is to understand and act on customer intent — what does the customer want to do? Generic models struggle when customers come to their service provider with very specific questions; by leveraging that data and creating their own language model, making it much more precise, we're going to get much better answers and much better responses.

(02:53):
And this is all about making people feel like they're talking to a real human being and having those kinds of conversations. So I think that's a really important one to look at, and I suppose it's the one that gets the most exposure.

(03:05):
The side that I'm really interested in, and the one I think is a game changer for the telco industry, is around internal operations, and specifically,

(03:13):
as we start to migrate to what we call autonomous networks, where networks are self-healing and intent driven, that is really where telco-specific large language models are going to be critically important. If you think of use cases such as IP fault management, optical fault management or RAN fault management — all the areas where you want the network to self-organize and fix itself — these elements are going to be critical.

(03:41):
When we think about Level 4, which is where operations are heading for autonomous operations, that is intent driven and uses AI, and the only way to do that is to have these telco-specific models. And they will be used for revenue generation — I think that's really important. So as well as fixing the networks themselves, and having things like copilots to help the field agents fix those networks, think of the new types of services that can be set up in real time to drive new revenues. So these large language models are really, really important in the telco context.

Guy Daniels, TelecomTV (04:18):
Great, thanks very much, Aaron. A real positive call there for telco-specific LLMs. Shujaur, let's come across to you. What are your views on having telco-specific LLMs?

Shujaur Mufti, Red Hat (04:31):
Yeah, so Aaron stated the two aspects well: one was customer engagement and the second was internal network operations. I wanted to add a third point there about enterprise services, because at times you see

(04:45):
there's a big amount of revenue generated by telcos on the enterprise side, and LLMs could actually play a role in enabling as well as creating new enterprise services.

(04:56):
For example, every year there is some sort of new device launch, and based on socioeconomic conditions the service provider has to come up with a related pricing plan that goes along with the new device launch. If you train an LLM on that particular aspect — on historical data, socioeconomic conditions, current market engagement and customer demographics — it can automatically create an intelligent pricing plan that you don't have to remake every single time. So I think LLMs could play in that other aspect as well: monetizing the 5G network, for example, monetizing overall telco enterprise services and different forms of engagement with the customer as well as within the network, plus enabling a new digital service provider experience.

Guy Daniels, TelecomTV (05:52):
Great, thanks for those additional points. Well, we're going to hear from our other two panelists as well. So first of all, Michael, let's get your views.

Michael Clegg, Supermicro (06:00):
For me, in some cases it's almost essential for them to do this. And I like the word in large language model — it's a language, right? If we're familiar with languages, and particularly vertical and enterprise language, there's a lot of jargon associated with it. So we've seen some of these companies already say that

(06:17):
in order to get the best out of their customer service, or even their internal operations, the LLM needs to be able to speak telco, right — speak the telco dialect. So I think developing a vertical-based LLM — and I think we will see this in other industries — in this case a telco LLM that is generic across the industry but speaks telco, with references to telco networks and telco solutions, is something that all the telcos together can benefit from.

(06:44):
So I think it's a little bit less of a desire and more, at some point, a need — something they're going to have to do.

Guy Daniels, TelecomTV (06:52):
Okay, great. Thanks so much Michael. I want to pick up on some of the things you said there in a moment, but first of all, let's come across to Scott.

Scott Cadzow, ETSI (06:58):
I'm just going to completely reinforce what Michael said. You cannot build a model for telecoms that doesn't speak the language of telecoms. So you can't take a model from anywhere else.

(07:08):
So if the telecoms industry is going to do any kind of optimization of services and look towards 6G, where we're doing very advanced services operating at very high speed, very specific to each customer, then AI-based LLMs are essential — I mean, they cannot be anything but essential. And in doing that, they have to have the language of telcos; you cannot import it from anywhere else. So that's basically where we are: we cannot not have a telecom-specific LLM.

(07:41):
I think that's the answer.

Guy Daniels, TelecomTV (07:43):
Great, thanks very much, Scott. Really positive responses there. We hear from a lot of telcos, and the vast majority seem to support this, but there are a few — some high profile — that aren't as keen on a vertical-specific model and think that general LLMs are developing to such an extent that they'll cover the ground. But Michael, I want to come back and pick up on our next talking point, related to what you were just saying. I'd like to ask what attributes a telco has that would warrant the creation of an LLM. We heard from Aaron earlier about the large amount of data it has available. So how can a telco use its large customer base and traffic levels to create such an LLM?

Michael Clegg, Supermicro (08:27):
I think if we look at telco and think a little broader, these are huge organizations, and working with the Telecom AI Alliance that has been set up by the telcos, I think they've quantified it nicely. When I look at AI for telcos, you have three areas. You have AI in the telco: this is where AI is applied to run the network better. Scott mentioned 6G a few minutes ago, and I think we'll see 6G being AI. We've been doing some work with NVIDIA around building AI into the RAN, so the processor that runs the RAN — the DU — is also very AI-capable. In the network there's security, optimization, handover — many, many areas where AI can apply. Then you have what I call AI for the telco: telcos are just large enterprises, and people have picked up on this already.

(09:20):
They have billing systems, they have customer experience systems, they have customer support systems — all of these are AI candidates. Now you might argue that for these you don't need a telco-specific AI, but to pick up on a previous question, I think again, if you teach it the telco language and it gets familiar with telco terms, your customers are going to have a better experience. The other thing about building your own system is that a lot of these operators are multilingual. They have customer databases and operations in multiple countries, where they need to speak different languages to support all the groups in those countries. And then the third element is AI by the telco. What we really see — and this is work we've been doing with NVIDIA around AI factories, or telco AI — is the telco now offering AI as a service, and we start to get into sovereign AI, where telcos are very trusted entities that have traditionally provided government services and government relationships.

(10:16):
So they have the privacy aspect, they have the security aspect, they have the engagement, and they have the network to bring all the data in. So you've got AI in the network, for the telco, and by the telco — they have all three. And as you said, data is the new oil; I really think AI is the refinery. It takes that raw material and turns it into useful information. And telcos have incredible amounts of data about what's going in and out of the network, about the customer relationships, about the connectivity of the customers. So there are just hundreds of ways that AI can be applied within a telco operation and a telco business.

Guy Daniels, TelecomTV (10:55):
Thanks very much, Michael. It seems such a compelling move; it's so apparent that telcos do need LLMs. It'll be interesting to see what our viewers think when they send in their questions, which we'll answer in the live show later. So, we've spoken about the many benefits of creating telco LLMs, but what about the associated challenges and risks involved in developing such a solution? Shujaur, let's come across to you. What's the negative side of this, or what should telcos be aware of?

Shujaur Mufti, Red Hat (11:29):
So when it comes to challenges, I will try to address them in three parts. First are the technical challenges, because AI-related inferencing for these LLMs requires a significant amount of processing: you need graphical processing units, GPUs, to process this high volume of model data. So first, processing challenges. Second, energy consumption, because these models can run over weeks, over months, and you need a significant amount of power, especially at MSOs and data centers which may not have been built for this kind of high-density processing — you'll run into some challenges there. And third, model training and the data it requires. Data in telcos has traditionally been deployed in silos. I think Michael was earlier talking about RAN traffic — so there's a silo for the RAN traffic, and there's a silo for the network traffic that comes in over the 5G core or IMS core or messaging core.

(12:38):
There's a silo for the customer traffic and the billing systems — all these systems and networks have been deployed in silos, and all the data they produce is in silos. So stitching some of that data together and correlating it will be the challenge for the LLMs. At the same time, there's the aspect of bias and unfairness, because some data could be based on certain demographics or particular profiles, so that aspect has to be taken out. And there I also wanted to add that there is an additional framework available today, RAG — retrieval augmented generation — which can further refine your model's outputs by bringing in data, maybe from across other telco customers; you could take that data and use it to further validate and verify the results that the LLM has produced.

(13:41):
So there are some challenges, and there will be some risks as well, which we see from a governance and security perspective, because these LLMs have to be secured and also have to follow governing principles — especially regulatory compliance, which is part and parcel of any telco CSP environment. Another thing we have seen is that when the data is not there, or there's a discrepancy between datasets, we may see LLM hallucination come into play: it can give you a false response. To give an example, if you are behind the wheel of a self-driving car and you cross a red light, you would still get ticketed — which means that even if the LLM has come back with a false response, you may still get penalized for it. So these are some of the challenges and risks we have to be aware of, and service providers will have to have an additional approach to address them.
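The retrieval augmented generation (RAG) pattern Shujaur mentions can be sketched very simply: retrieve the documents most relevant to a query, then ground the model's prompt in them so answers stay anchored to real data. Below is a minimal toy illustration; the corpus, the keyword-overlap retriever, and the prompt wording are all illustrative assumptions, and a real system would use embedding-based retrieval and an actual LLM call where this sketch stops at building the prompt.

```python
# Minimal RAG sketch: keyword-overlap retrieval + grounded prompt.
# The corpus contents and function names here are illustrative placeholders.

CORPUS = [
    "5G core AMF handles registration and mobility management.",
    "RAN fault KPIs include RRC setup failure rate and handover drops.",
    "Billing mediation correlates CDRs across charging systems.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase word tokens between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents with the highest keyword overlap."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the prompt in retrieved context to curb hallucination."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What KPIs indicate RAN handover faults?"))
```

The validation idea Shujaur raises maps onto the retrieval step: the context pulled in at query time can come from shared, cross-operator telco data rather than the model's training set.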

Guy Daniels, TelecomTV (14:48):
Great, thanks very much for that Shujaur. There's certainly a lot of areas that telcos need to be aware of and as you mentioned earlier, siloed data is always a problem for telcos. A few more responses to cover off here, Scott. Let's come across to you first. Thank you.

Scott Cadzow, ETSI (15:03):
Yeah, I mean, I think there are a certain number of challenges, because your data suddenly changes from being ephemeral — you throw it away — to being persistent. That's a major challenge. It also introduces challenges around where you gather that data, where you store it, and how you transfer it. So there are challenges in even getting the equipment into the right part of the network, and that's something we need to look at, because we're not yet in a position of having lots of data processing, data storage and capability all across the network. We often have very specialized equipment, and making that suitable for AI, for LLMs, is not a trivial task. So the expectation is, yes, LLMs will be useful, but they need time to generate and time to get the right equipment in the right place, comms in the right place, the algorithms in the right place, and that challenge shouldn't be underestimated. At the same time, you've still got to build the functionality the customer wants — more bandwidth, more personalization, all those kinds of things. LLMs are part of that picture, and we need to build the picture consistently and together. I think that's kind of where we're going.

Guy Daniels, TelecomTV (16:20):
Great. Thanks very much, Scott. As you say, certainly non-trivial. Aaron, let's come across to you for some of the challenges that telcos should be aware of.

Aaron Boasman-Patel, TM Forum (16:30):
Yeah, and I want to move the conversation on a little bit. Really good points were covered on setting these models up, but actually one of the things we need to be aware of once we've set them up is LLMOps. That becomes really, really important right across the life cycle — the steps, actions, design, development, all those things we need to look at — and we should see it as an integral part of a service provider's operations. I think sometimes we only think about setting it up: bias, where the data sits, getting hold of the right data, watching out for hallucinations. But what happens when it's in action? Managing those large language models becomes absolutely critical, and that's why we need a whole new approach. Something we are looking at and developing at TM Forum is LLM operations.

(17:16):
And that framework becomes really, really critical, because if we don't get it right, it's going to fall apart very, very quickly, and some of the dangers you heard about earlier start to creep in. The next question is: who does LLMOps? It doesn't have to be the service provider, and we're talking with people about different ways to do it. Of course the service provider can do it themselves and look after their own operations, or — and I think this could be a very interesting business model — vendors could start offering it as a service. So I think we've got to think a lot more about what happens when these models are in operation.

Guy Daniels, TelecomTV (17:53):
That's interesting, Aaron, thank you very much. We've certainly received a few early questions from our viewers about structure, about teams, and about what skills they're going to need for this, which plays very nicely into that point. Well, thanks everyone for those responses, and let me move on a bit now. Collaboration is always important for telcos, but why is collaboration so important here in advancing AI technologies, including, obviously, the creation and adoption of LLMs? Scott, what are your views on industry-wide collaboration?

Scott Cadzow, ETSI (18:23):
Well, I come from a standards background, and collaboration is meat and potatoes to standards. We've developed a process of what I call competitive collaboration: we're competitors in the marketplace, but we collaborate to make sure we have the same foundations to work from. So that's kind of important. And doing that means we get all sorts of players around the table — vendors, operators, regulators — and they can talk together, build services together, build frameworks together. So collaboration is the natural working environment of telcos, and LLMs and their application in a telco environment shouldn't be any different. We should just encourage our continuing collaboration using standards. I mean, TM Forum here is essentially a standards body; ETSI is a standards body. We will encourage everyone to come around the table, talk, build services and build capability together, and then we can build competitive services on a collaborative, interoperable platform.

Guy Daniels, TelecomTV (19:29):
Great. Thanks very much, Scott. And Michael, what are your thoughts on the importance of collaboration?

Michael Clegg, Supermicro (19:36):
For me — and this picks up a little on the earlier topic of the challenges in doing this, and also our initial question of why do it — it really comes down to scale, cost and expertise. Those are the key areas. Working really actively in 5G, we saw the challenges telcos had in the transition to cloud-native networks. It really was a re-skilling of the way they run networks, from an appliance-based model to a cloud-native model. I think we will see the same in AI: for everybody this is new, but it's going to be an area of investment in terms of skill sets. The other area is scale. We work with a number of suppliers — NVIDIA, Intel, AMD and others — and with some of the telcos on

(20:24):
what it takes to build a large AI factory to do training, and you're talking about a hundred thousand GPUs. For many telcos that's going to be a little bit challenging from a cost point of view. So there's training and inferencing, right? The training side is the expensive side, and that is an area where there can be some good collaboration across telcos. We've already seen the Telecom AI Alliance — a number of operators getting together and creating this telco language model, this telco dialect we spoke about earlier. We could take all the standards that ETSI produces and the work that TM Forum does and feed them into it; there's no reason that needs to be specific to any one telco — it's an industry domain area. So by doing what telcos have done in the past, working together collaboratively, they can get the common denominators done, share that information and learning, maybe share the costs amongst themselves a little bit, and then localize it into their own particular operations. So given the cost and scale of building, and particularly training, an LLM, I think there are going to be a lot of benefits in some cooperation.
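Michael's point about the cost of training can be made concrete with a common back-of-envelope heuristic: training compute is roughly 6 FLOPs per model parameter per training token. The sketch below applies it; the model size, token count, per-GPU throughput and utilization figures are illustrative assumptions, not numbers from the discussion.

```python
# Back-of-envelope LLM training cost via the ~6 * params * tokens
# FLOPs heuristic. All input figures are illustrative assumptions.

def training_gpu_days(params: float, tokens: float,
                      gpu_flops: float = 1e15,      # ~1 PFLOP/s per GPU (assumed)
                      utilization: float = 0.4) -> float:
    """Estimate GPU-days: total training FLOPs / effective FLOPs per GPU-day."""
    total_flops = 6 * params * tokens
    effective_per_day = gpu_flops * utilization * 86_400  # seconds per day
    return total_flops / effective_per_day

# Hypothetical 70B-parameter model trained on 2T tokens:
days = training_gpu_days(70e9, 2e12)
print(f"{days:,.0f} GPU-days")
```

Tens of thousands of GPU-days for a single run is what makes shared training across operators attractive, while per-operator localization (fine-tuning) is orders of magnitude cheaper.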

Guy Daniels, TelecomTV (21:36):
Great. Thanks very much Michael. There's a logical flow here, isn't there? Shujaur, let's come across to you.

Shujaur Mufti, Red Hat (21:43):
Just to add to what Michael said, I'll take Red Hat as an example. Red Hat's DNA is open source: we build, stabilize and operate open source models. One particular aspect is open source collaboration. Telcos who may not want to take a pre-trained LLM may adopt or contribute to an open source LLM and customize it to their needs — especially given the size of the telcos, tier one to tier two or tier three, not everybody can afford the cost of operations; Michael was just citing the figure of thousands of GPUs. So I think there could be an opportunity for collaboration there based on the open source model. The other example I wanted to mention: earlier Michael talked about AI for RAN, AI with RAN, AI on RAN. There's collaboration happening in the industry where Supermicro, us at Red Hat, NVIDIA and the RAN providers are working together to enable this whole stack — virtual RAN on one side and the AI factory on the other — to enable AI at the edge. That could help further collaboration and monetize 5G, enable additional use cases to run inferencing at the edge, utilize idle resources and improve power management on the RAN, and it can expand to network data as well as the customer experience.

(23:23):
So overall, collaboration within the industry is important. Telcos may not be ready from day one — as we have seen with the cloud-native journey — but there could be an opportunity where AI companies bring their LLMs, telcos bring the large and rich data sets we just talked about, and together they develop these LLMs for specific tasks, enabled across enterprise services, internal network operations and customer engagement.

Guy Daniels, TelecomTV (23:55):
Great. There's some fascinating work already underway between several companies, and I'm sure that will continue. Well, I've got a final question for you. We've seen huge progress with LLMs in general, and the rate of innovation we're seeing is absolutely staggering, which does make business-case predictions rather difficult. But if it proves inefficient to run LLMs locally on consumer devices, then might we see increased edge-based hosting? In which case, can telcos benefit here? Shujaur, I'm going to come back to you for this one. The edge has been an interesting case for a few years now, but maybe this brings it back into focus.

Shujaur Mufti, Red Hat (24:43):
Yes. So as we just talked about, LLMs require high compute power, storage and power consumption that may not be available on consumer devices. It may go on CPEs — maybe a small language model for a very domain-specific function. But I think the edge will ultimately become important, because there's a significant amount of opportunity on the IoT side of the house, where IoT management can happen at the edge. These small language models, or even large ones, can be deployed at the edge and open up significant monetization opportunities. There could be a developer opportunity at the edge, using these LLMs to develop and create new services for enterprise use cases. These SLMs or LLMs could even help enable what we're talking about nowadays, Industry 5.0, and could be the baseline for 6G, as Scott mentioned earlier. So at the edge, AI and the RAN together, and the collaboration there, could help enable more and more use cases as well as enterprise services. And since telcos own the edge infrastructure, it could help them further accelerate the monetization of 5G and be ready for 6G as well.
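Shujaur's distinction between what fits on a consumer device and what needs edge or cloud hosting comes down largely to memory: a model's weight footprint is roughly parameter count times bytes per parameter, which is why small language models and aggressive quantization matter on-device. A rough sketch, with illustrative model sizes (it ignores activation and KV-cache memory, which add to the real footprint):

```python
# Rough weight-memory footprint: params * bytes_per_param.
# Model sizes are illustrative; activations and KV cache are ignored.

def weight_gib(params: float, bytes_per_param: float) -> float:
    """Approximate GiB needed just to hold the model weights."""
    return params * bytes_per_param / 2**30

for name, params in [("7B SLM", 7e9), ("70B LLM", 70e9)]:
    for fmt, bpp in [("fp16", 2), ("int4", 0.5)]:
        print(f"{name} @ {fmt}: {weight_gib(params, bpp):.1f} GiB")
```

A 7B model quantized to 4 bits lands in the single-digit GiB range, marginal for a phone; a 70B model at fp16 needs well over 100 GiB, which is squarely edge or data-center territory.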

Guy Daniels, TelecomTV (26:18):
Great, thanks Shujaur. It's certainly an interesting time looking at the different architecture models and plans and how this might develop. Michael, I'm going to come across to you for your thoughts on what we're seeing here.

Michael Clegg, Supermicro (26:30):
I'll pick up a little on what we learned from 5G. There was a big notion that the edge would take off significantly, and at Supermicro we've been very active in that space. The reality is that moving equipment out to the edge from the core is more expensive than running it in the core. So the edge really needs a driver, and the driver tends to be latency. If we go back to consumer devices — the applications that would run on your phone or on your PC — in many of those cases you'd ideally want to run them locally. There's a privacy aspect: you can make it ephemeral and forget the command quickly. But if the processor doesn't have enough power, you always have the cloud, and it's not always clear to me that you need to do it at the edge if it's a human interaction.

(27:16):
But that's a little different from what Shujaur said in terms of IoT and smart cities. What we really see when it comes to smart cities, video processing, edge IoT, maybe private factories, is that latency really kicks in: you want to have something come in, make a decision and react to it fairly quickly. So I think we will see the edge grow, but maybe beyond consumer devices — driven more by the enterprise, the industrial case, or smart-city-type applications. And also, as we said, we've been working with NVIDIA and Red Hat and others to develop these cloud RAN devices that will also be AI-enabled and will have spare cycles, so they can do AI edge processing. The other challenge with moving compute out to the edge is that it increases the cost a little, but if you can essentially get those cycles for free — because you're already putting equipment out there, and at times when the network is not as busy you've got this available compute resource — that changes the equation. So I think we will see a mix: more drive for new edge IoT applications, particularly as we get into low latency on 5G and then coming up on 6G, plus the natural enhancement of what we can do at the edge. I think those will be the bigger drivers in the end.

Guy Daniels, TelecomTV (28:38):
Thank you very much, Michael. Lots of variables for telcos to consider when making these decisions. Well, we must leave it there for now, although I'm sure we will continue this debate during our live Q&A show later. But for now, thank you all so much for taking part in our discussion. If you are watching this on day two of our AI Native Telco Summit, then please do send us your questions and we'll answer them in our live Q&A show, which starts at 4:00 PM UK time. The full schedule of programs and speakers can be found on the TelecomTV website, and that's where you'll also find our Q&A form and, of course, our viewer poll question. For now though, thanks so much for watching, and goodbye.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Panel Discussion

How will the development of AI large language models (LLMs) impact telcos? Last year we heard strong arguments for and against telcos developing their own LLMs. However, these are now starting to emerge. Earlier this year, SKT, Deutsche Telekom, e&, Singtel, and SoftBank Corp. announced the formation of a joint venture to develop specialised LLMs for telcos, operating across different languages and supporting a customer base of 1.3 billion people. So what is the LLM opportunity for telcos? How will they be developed, implemented and monetised?

Recorded October 2024

Aaron Boasman-Patel

Vice President Innovation, TM Forum

Michael Clegg

Vice President and General Manager for 5G and Edge, Supermicro

Scott Cadzow

Chair of ETSI TC Securing Artificial Intelligence (SAI)

Shujaur Mufti

Senior Manager, Global Partners Solution Architecture Telecom, Media, & Entertainment, Red Hat