Hello, you are watching the AI-Native Telco Summit, part of our year-round DSP Leaders coverage. And it's time now for our live Q&A show. I'm Guy Daniels, and this is the second of two Q&A shows. This is your final chance to ask questions on AI and the opportunities for telcos. Now, as part of today's summit, we featured a panel discussion that explored where telcos should focus their gen AI activities. Plus we had an additional discussion around the large language model opportunities for telcos. And if you missed either of the panels, don't worry, because we will rebroadcast them straight after this live Q&A program, or you can watch them anytime you want on demand. Now, we've already received several questions from you, but if you haven't yet sent us one, then please do so now using the Q&A form on the website.
(01:22):
Well, I'm pleased to say that joining me live on the program today are Beth Cohen, SDN Network Product Strategy at Verizon Business Group; Martin Halstead, Senior Distinguished Technologist for Aruba Telco Solutions, Hewlett Packard Enterprise; Warren Bayek, Vice President, Technology, with Wind River; Shujaur Mufti, Senior Manager, Global Partner Solution Architecture, Telecom, Media and Entertainment at Red Hat; and Michael Clegg, Vice President and General Manager for 5G and Edge at Supermicro. Hello everyone. It's good to see you all. Thanks so much for joining us for the live show. Well, let's get straight into our first audience question. We've got a lot of them today. So the first question is: what are some of the pitfalls associated with AI-Native telco strategies? What do telcos need to watch out for? Well, Beth, perhaps we can start by coming to you. Any warning signs for telcos?
Beth Cohen, Verizon (02:33):
Yeah, plenty. So AI is a new technology. It's still definitely untried, and telco networks need to be on all the time, need to work all the time. Uptime is super important to us. We cannot afford to have things go sideways, and if there are any hallucinations being sent into our networks, that is not a good thing. So we need to make sure that when we put AI insights into our live network, they work; we need to trust that they will produce the correct answer. So right now, I think most telcos are not using it in the live network. However, we need to do a whole lot of testing and gathering of data, and making sure that the data is the right data, before we can deploy it into the live network.
Guy Daniels, TelecomTV (03:46):
Thanks very much, Beth. Shujaur, can we come across to you? Any pitfalls or warnings that telcos should be aware of?
Shujaur Mufti, Red Hat (03:56):
Sure. I think, as Beth was stating, telcos have to be careful about the pitfalls of AI. At the same time, if I look across the past 10-plus years, telcos have been using predictive AI pretty much on the network side of the house, and maybe on the customer experience side of the business as well. But at the same time, there are some risks associated with the whole AI momentum that we are seeing with gen AI and others. There are some risks that telcos need to be aware of. I think LLM hallucination, which Beth mentioned: where there's a large set of data and telcos are working off of generic LLMs, it may create improper responses that could go wrong. So I think we need to be careful about that. There may have to be some specific LLMs created for domain-specific functions.
(04:55):
For example, for the network side, or for the customer side of the house, especially if we are building virtual AI agents or if we are looking into monetizing new services. So I think we may have to look into each and every domain, look at the data that is available on the network side, the data labels and attributes, and especially be aware that AI could be another technology that misleads us: AI could also be used for negative purposes, and wrong data can actually fool ourselves. So I think these are some of the pitfalls that telcos need to be aware of. There's a huge amount of data, but at the same time I think the extraction, the abstraction out of that data, needs to be carefully managed and operated.
Guy Daniels, TelecomTV (05:44):
Alright, good advice. Thank you, Shujaur. And Warren, what about you? Any pitfalls that telcos should be aware of?
Warren Bayek, Wind River (05:51):
Yeah, maybe just to piggyback a little on what Shujaur said: the use of AI in telco could span a lot of different use cases, all the way from, as you mentioned, customer interaction out to delivery of service, which, as Beth mentioned, absolutely has to stay at a hundred percent availability. So the telco use cases for AI will vary at each of those different levels, and I think a pitfall may be to try to bite off a little more than you can chew at the beginning, because AI is such a potentially transformative technology that will change the world. But we need to be really careful to start slowly, in the places where hallucinated responses and some of the pitfalls of AI that we are all very well aware of now are a lot less expensive, a lot less destructive, and create a lot fewer problems within the telco environment. As we become more and more aware, and more and more capable of creating AI solutions that make sense, we can bring the AI parts deeper and deeper into the critical parts of the network. But the important thing is to make sure that before any of that happens, we're very comfortable with it at the outer edges, and only then move it closer and closer to the critical delivery of the customer use case at the very edge, where a hundred percent reliability is a requirement.
Guy Daniels, TelecomTV (07:17):
Great. Thanks very much, Warren. Get comfortable with it first. Beth, did you want to come back in and pick up on some of your earlier comments?
Beth Cohen, Verizon (07:26):
I do. So I think Warren and Shujaur bring up some good points, but I think it's appropriate to mention that there's a difference between AI, gen AI, machine learning and automation. And telcos need to, and do in fact, use all of the above, and they're used differently. So automation is certainly being used heavily in the core today for high availability. Gen AI, yes, we see it in operations, more on the customer experience side, in terms of being used for chatbots and other kinds of interactive things, where if you get a weird answer, the customer will just sort of ignore it rather than bringing down a piece of the network. So I think it's important to understand the different tools and how they're used, and when it's appropriate to use one tool over another, or a combination of tools.
Guy Daniels, TelecomTV (08:34):
Great, thank you very much, Beth. Thanks everyone for those perspectives. Right, let's move on to another viewer question here, and this one is a skills-based question. Let me read this one out to you: from a gen AI readiness perspective, how can telcos transform their existing teams and workforce, since upskilling is a difficult task? And let me just add, because last month we had our Cloud Native Telco Summit, and certainly upskilling and reskilling came top of the poll responses as a problem that telcos need to be aware of. So Michael, perhaps we could come to you first on this one. Is there an issue? Are there problems with upskilling for gen AI readiness?
Michael Clegg, Supermicro (09:18):
I think there definitely will be, but the interesting thing is you don't need to do it all at once. AI can apply to so many parts of the telco operation, in terms of their own IT services or customer-facing services, as well as use within the network itself, and then potentially offering it as a service to customers. I think we'll initially see, and we are seeing, all the initial use cases: customer support, customer experience. Telcos are used to mining their billing systems to understand different packages, and those will happen first, and there's a lot of enterprise supplier support to do that. This is standard IT stuff, it's not unique to the telco, but the benefit is that it's really going to help them get familiar with AI, whether that's, as was said earlier on, machine learning or generative AI, which is what a lot of people are moving towards, and then applying that within the network itself.
(10:12):
Again, predictive maintenance and analytics. People are starting to use AI for security and monitoring. Again, there's going to be a lot of industry support, a lot of learning from industries outside the telco. I think the key then is to really make sure you don't bring harm to your network, so you understand what you're doing, and then finally building it into the network itself. Maybe 6G, going down the road, will be more AI-Native, or you offer it as a service. So from the usage point of view, there's a sort of graduation where you can phase this in over time and get more and more familiar with it. Coming back to the 5G experience, which was really a transition to a cloud native network, there we saw some of the underlying skills were maybe missing inside the telco in terms of running a true virtualized, cloud native network, and some skills development had to happen.
(11:05):
If I carry that over to the AI phase, I really think that's going to apply more to data. AI runs on your data, and telcos have a huge amount of data that's not necessarily structured or stored in the same place. So managing those data sets, getting some data scientists, and learning how to bring that data together so it's available to the AI engines, whether structured or unstructured, to get the best use out of it, is going to be a fairly large learning curve, not only for telcos, but for anybody that wants to use AI. So that's the area I think that's going to take more skills development, and might even take some infrastructure development over time. And then finally, we need to be careful that we don't, as with any tool, just take the tool's output at face value, right? Particularly, we spoke about hallucination with generative AI: somebody needs to check the answer. So you still don't want to lose the inherent expertise of your top personnel, who can look at something, maybe if you're using it for predictive maintenance or fault reporting, and say, this doesn't make sense, we shouldn't act on that. So maintaining the skills to be able to monitor that your gen AI is actually putting out valid data is going to be equally important.
Guy Daniels, TelecomTV (12:24):
Great. Thanks very much, Michael. So there are differences between AI and cloud native in terms of skills, and as you say, it doesn't all have to happen at once. Beth, let's come across to you for comments here.
Beth Cohen, Verizon (12:37):
Yeah, I wanted to build on some of the things that Michael mentioned, particularly around data scientists. I mean, there are certainly gen AI data scientists; most of them are still, at this point, academic, or they might be working for some of the cloud companies or starting their own companies. But the expertise of the senior network engineers takes literally decades to develop. And their skill sets don't necessarily match the skill sets of the AI data scientists. So that's where I think we need to have the data scientists work with the senior network engineers, or, over time, we will probably develop a few people who marry those two skills together. Currently, I think the number of people who have those skills in combination could probably be counted on two hands, but over time that's going to change. And of course, yes, you still need the senior network engineers to say, yeah, this is a hallucination, this doesn't make any sense.
Guy Daniels, TelecomTV (14:05):
Great, thanks so much, Beth. Thanks for those answers. Great insights. Thank you. Now, the next question we have received from our audience. Let me read this one out to you: AI is not a product or solution, but it is expected to supercharge existing products or processes, just like digitalization is not a product in itself. Therefore, how can AI help evolve the telco from a utility company into something different? Okay. Martin, have you got any thoughts on this one? Can you help out our viewer here?
Martin Halstead, HPE (14:45):
Yeah, sure. So I completely agree that AI is not a product, and the implementations that we've seen so far for AI in telco kind of factor into how they are looking to deploy it today. What I mean by that is, from our company's perspective,
(15:12):
there's a lot of opportunity that we are seeing on the platform side of AI, i.e. the infrastructure, the GPUs, the MLOps environment to go and build the applications, which is what the telcos are focusing on. And
(15:29):
then separately the application side of it, i.e. the products that would run on top of that, which typically fall into two distinct parts: you have product sets that make the network more efficient, and those would require AI capabilities that run on the platform. And then the other side is how you generate new revenue for the telco, and what those product sets look like.
(15:53):
And so those typically split into product sets where the telco is trying to get more revenue from the end user, by looking at existing behaviors in the data that they have and upselling additional packages, additional service offerings. But also,
(16:15):
if you have that AI platform, you then have the capability to offer it as a service offering as well. So it's almost like the telco becomes a tenant of the platform for their own internal use, but also offers it as, for example, 'GPU as a Service' to their end users. So there's a lot of uses for it,
(16:36):
but it's almost, in a way, how NFV or network function virtualization was 10 years ago, whereby the application developers deliver machine learning or deep learning through large language models, but in a vertically integrated stack. And that really needs to change, so that you can have a consistent platform approach for the telcos and then overlay the applications across that platform. So the separation of the application from the platform is going to become more and more important in order for AI to go mainstream in the operators.
Guy Daniels, TelecomTV (17:15):
Great. Thanks so much, Martin, for explaining the split there and where the focus is. Fascinating. We'll come to Beth in a moment, but Shujaur, let's come across to you for your thoughts.
Shujaur Mufti, Red Hat (17:26):
Yeah, just to build on what Martin said: what AI has brought to the table is a redefinition of the cloud native architecture. I mean, especially with 5G, we saw there was a challenge in monetizing 5G and enabling new services. What AI brings to the table is a relook at the infrastructure, especially with GPUs and GPU as a service, which Martin elaborated on; a relook at the cloud layer, the operating system layer, where we could enable additional capabilities to drive those GPUs, and the monitoring at the infrastructure level, specifically for sustainability use cases; and a look at the cloud operating system, for example the Red Hat operating system that runs in the middle, with the platform that actually enables the AI capabilities for model serving, model development, model pipelines. Some of those things were not part of the cloud native environment, but I think AI has brought this relook at the architecture, all the way from the application to the infrastructure, to redefine what additional technologies can be enabled to further extend the network, as well as the new services that can be enabled on top of the 5G network for monetization and new use cases. One more example I can give is AI-RAN. This was not being discussed in the past few years, but now AI has combined with the RAN, along with concepts of disaggregation with Open RAN or virtual RAN, and that can bring some of those capabilities to the edge, closer to the user. The more AI that goes close to the user at the RAN, the more monetization opportunities there are for inferencing, as well as recent use cases where spare GPU capacity in the RAN can actually be leveraged for AI functionality.
(19:34):
So I agree that AI has not been a product, but at the same time it has actually helped across every front: from the network side (I think we talked about digital transformation a bit before) to the enterprise side, where it has helped transform industries on 5G. And 5G, ultimately, will help industries with the Industry 5.0 evolution. So it has pretty much redefined the whole ecosystem around cloud native and 5G.
Guy Daniels, TelecomTV (20:07):
Thank you very much indeed. As our viewer says, it's supercharging existing products and processes. We've got comments from all of you on this one. So first of all, Beth, let's come across to you next.
Beth Cohen, Verizon (20:22):
So I think there are two opportunities to monetize: one of them is directly customer facing and the other is internal. So let me talk about the internal first. Obviously telcos have enormous amounts of information about our customers and our customers' use of the network, and as an industry we can use AI to drive additional insights into our customers and their usage, and then build products that address how our customers are using our services, and extend that. We can also use AI to test scenarios, to see whether our customers could potentially use our products. So that's the internal focus, obviously. But the other area, which I think we're struggling to monetize because customers are expecting it, is the direct services that we expose to our customers using AI: insights into their own networks, for example. We offer those services today, and it's hard to monetize them, because often customers say, well, we just expect that, so we need to add value in some way.
(22:00):
So one approach that is often used is that we provide the AI insights to our customers, and then we provide a service that allows us to interpret the results and give them actionable insights, where they can take the information and, using us as subject matter experts, digest it and then apply it to improve the efficiency of their networks. But that's pretty tricky, because some companies are open to that, and some companies say, we'll just do it ourselves. And it's a complicated component, because it's not just the network that needs to be incorporated into these insights. It also needs to be how the customers are using the network for the delivery of their applications, whether it be IoT data or whatever kinds of applications they're using, and how they use it with their customers and with their end users.
Guy Daniels, TelecomTV (23:11):
Great, thanks very much, Beth. And certainly on the consumer side as well, I think we've somehow created this expectation that we are going to be delivering AI products and services, and so far that hasn't gone down terribly well. So this question is all about how AI can help evolve the telco from being utility based into something different, perhaps the DSP. Warren, let's come across to you, then we'll go to Michael. But Warren, what are your thoughts?
Warren Bayek, Wind River (23:38):
I think it's important to differentiate between the different parts of how telcos will realize revenue benefits, especially from AI. There's the back end, and we won't talk much about that: making their operations more efficient, making their purchases more efficient. Frankly, as a provider to telcos, we make our ability to deliver the products they need more efficient. I want to take this in a little different direction, with services. As a consumer of a 5G service, it's very difficult for a telco to charge me more money for something that, as everyone I think has mentioned, I kind of expect, right? I expect that you're going to give me a better network, a faster connection. So your ability to charge me more money per month for something that I sort of expect is difficult. So I think what we're seeing, and we've worked with several industries, and right now the automotive industry is one that's pretty compelling, is this.
(24:37):
Telcos are starting to look at partnerships with other industries, and in those industries they can use the telco environment, with AI-infused knowledge and AI-infused products, to enhance their customers, their customers' deliveries, and their customers' potential product purchases. So using other industries as partners, the telcos can provide the data, provide the insights, provide the end-user cases, where the other industries can get money from their customers and provide interesting new services. And the telcos can take a piece of that, because they're the ones supplying the data, they're the ones supplying the users. That industry has a way to go. But as a lot of people mentioned, this has happened in the core, right? The NFV space, other places. Frankly, we don't know all the use cases that are going to unfold, but we do know that we now have a far-edge, AI-capable society. It's going to be very different in five or ten years. I can guarantee you our lives will be very different, and a lot of things that we can't imagine happening today will be our reality in five years. So the ability of telcos to partner with those industries, who will see those significant use cases that people will be spending money on, I think that's a big part of what telcos are starting to look at as a way to capitalize on this intelligent edge society that's coming upon us.
Guy Daniels, TelecomTV (26:06):
Food for thought there, Warren, thank you very much indeed. And Michael, let's come across to you for your comments on this question.
Michael Clegg, Supermicro (26:13):
Yeah, I think of this in two ways. In terms of pure services, and we discussed this earlier on in our LLM session, one of the areas we've been focused on is sovereign AI, and this is where telcos actually run a training engine and build specific LLMs that are unique to a country or a government, so really enabling governments or enterprises to offer different services. This is very much an infrastructure play. It's starting out today with multiple telcos doing GPU as a service, but then you can layer onto that a full training service. And as we discussed before, today a lot of these LLMs are trained in English, and to be applicable around the world they need to be localized, they need to be trained in local languages. And then with country-specific data, for example, if a government wants to offer government services that are AI GPT-enabled, you really need to bring all the data that that country has into its network, and then you need to overlay privacy and security on that.
(27:16):
So telcos are going to develop these skill sets internally for their own use. And that leads onto the second one, which Beth covered a little bit: it's not so much a direct AI sell to the customer, but taking the skill sets or competencies that telcos have developed in their own network and mapping them over. One obvious one is security, which we spoke about. If I think of 5G, when slicing finally arrives, or SD-WAN, you really have these virtual service networks, and then you've got to bring traffic monitoring, traffic patterns, optimization of the network, and security of the network into play. A lot of that is going to be developed within the telco, by themselves, for their own network purposes. And at the same time, if they're the one providing that network service, they already have better access to all the data needed to do that.
(28:05):
So it may not be sold as a service, but it could be bundled into a service. The other one is very large private networks: the expertise that telcos are going to develop in managing their own 5G networks can be mapped over, so you can maybe turn up a large private network quicker, or make it more effective in its optimization. So I think that to offer native AI services you need to be fairly big in terms of what you're doing, but being able to remap some of your competencies, either integrated into a service or offered as a service, will be a second opportunity. And I do think the market will be more enterprise focused in terms of these business opportunities.
Guy Daniels, TelecomTV (28:46):
Yeah, agreed. Thank you, Michael, and great insights everybody. That's terrific. Well, before we take our next question, it's time to check in on our audience poll for the AI-Native Telco Summit. And the question we are asking you this week is: how can telcos best leverage AI innovation to improve operational efficiency and develop profitable new services? The real-time votes have just appeared on my right here, and again, like yesterday, no surprise really that working with vendors and cloud partners is showing quite strongly. So is the edge, but it also looks like the development of a telco LLM is getting some serious consideration as well. Now, if you have yet to vote, then you are running out of time. We will keep the polls open until the end of today, and then we'll analyze the results and reveal the final figures during next week's Extra Shot program.
(29:49):
So don't miss that. Right, back to the viewer questions then. For our next one I'm going to combine a couple on a related theme, and they follow on quite nicely from the poll voting as well. So, when we talk about a telco LLM, is it a specific LLM, or a generic LLM that's trained for telco scenarios, or perhaps multiple versions, each designed for a specific telco use case? And supplementary to this: why wait for a common LLM? Because surely that just forces everybody to go at the speed of the slowest, and therefore kills the opportunity to differentiate. So, some strong viewer opinions on the LLM issue. Shujaur, perhaps I could come to you for comments first, and then we'll go to a few of our other guests.
Shujaur Mufti, Red Hat (30:50):
I think in general, in this forum, we have talked about how telcos have a few verticals. Customers is one big vertical; the network, with its signaling and traffic data, is another vertical; and then we have the enterprise, where addressing and enabling new services for enterprise use cases is another vertical. Having a generic LLM can be helpful, but at the same time that generic telco LLM would have the data from all three different domains combined into one, and, as we've talked a lot about hallucination, it may create some fake data, some false data that is not relevant for the response. So I think we should first create some of those domain-specific LLMs at the higher level: for example, one is a customer-experience-domain-specific telco LLM. A second could go on the network side of the house, because that is processing all the structured and unstructured data being produced in the network.
(31:50):
And the third LLM could focus on the enterprise use cases that go across industries. I think we talked about automotive, healthcare, financial services: different industry innovations, as well as their requirements, and then you can build some specific models, as well as the plans and the services around those enterprises. So while there could be one big generic LLM that comes in and processes all the data, I think it has to be further fine-tuned into domain-specific LLMs, and within the domain-specific LLMs we could drill down further into SLMs; we could also be satisfied with a small language model. Take the network as an example: it is producing different sorts of data at the radio level, at the 5G core level, at the network level, at the OSS/BSS, the messaging side of the house, the data side of the house.
(32:52):
So there are sub-verticals within the big vertical. If we wanted to optimize just the RAN piece, then you could create a small language model that focuses only on the RAN data being produced at the air interface, communicating across the cell site with the fronthaul. We could just focus on that data, and that SLM can focus just on RAN optimization, a kind of AI for RAN. And similarly, another SLM could go into the network data in the 5G core, and just focus on the data traffic, helping to identify the different data patterns: user-specific, geo-specific, language-specific. Some of that data can actually be used for identifying new opportunities, as well as optimizing the bandwidth and the data usage, and can go into planning. So I think the big generic LLM can be further divided, further fine-tuned and trained on telco-specific data, and can then go into small language models that are domain specific.
(34:04):
The smaller the domain, the better the accuracy of the responses will be. A further enhancement to the architecture and the design could be through RAG, where RAG can actually help process this proprietary data. So, say, the customer PCI data: I think we talked about security and privacy, and that should be top of mind for telcos when processing this data. Some of those LLMs can be further fine-tuned through RAG with the specific proprietary data, and then can generate the accurate response. So having one big generic telco LLM that is applicable across a global environment, I think that will definitely slow things down. There could be some learning that can be applied from there, but I think the further fine-tuning may have to define several verticals, and may have to process several RAGs for geo-specific, language-specific data on the customer side of the house. Plus, I think we talked about some network examples, and similarly for enterprise: each enterprise could have a different set of requirements for the edge, for IoT, for the in-vehicle operating system and the communication there. So each vertical could have some focused, specific requirements that further divide and fine-tune these LLMs into small LLMs, particularly trained and extracted for that particular industry.
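The domain-routing-plus-RAG pattern Shujaur describes can be sketched in miniature. The sketch below is purely illustrative, not any vendor's product: the domain names, sample documents, keyword retriever, and prompt format are all invented stand-ins for a real vector store, real telco data, and a real model call.

```python
# Illustrative sketch: route a query to a domain-specific corpus,
# retrieve proprietary context, and ground the prompt in it (RAG).
# All data and helper names here are hypothetical examples.

DOMAIN_DOCS = {
    "ran": ["Cell site 42 fronthaul latency exceeded 100us at 09:00."],
    "core": ["UPF session setup failures spiked in region east-1."],
    "customer": ["Plan 'Unlimited Plus' includes 5G access and hotspot data."],
}

def route_domain(query: str) -> str:
    """Pick which domain-specific model/corpus a query should go to."""
    q = query.lower()
    if any(w in q for w in ("cell", "radio", "fronthaul", "ran")):
        return "ran"
    if any(w in q for w in ("core", "upf", "session")):
        return "core"
    return "customer"

def retrieve(query: str, domain: str, k: int = 1) -> list[str]:
    """Toy keyword retriever standing in for a real vector search."""
    terms = set(query.lower().split())
    docs = DOMAIN_DOCS[domain]
    # Rank documents by how many query terms they share.
    scored = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved proprietary context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using ONLY this context:\n{ctx}\n\nQuestion: {query}"

query = "Why is fronthaul latency high at cell site 42?"
domain = route_domain(query)
prompt = build_prompt(query, retrieve(query, domain))
```

The point of the sketch is the separation of concerns: the router picks the small, domain-specific corpus (or SLM), and RAG injects only the relevant proprietary data into the prompt, rather than training one giant generic LLM on everything.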
Guy Daniels, TelecomTV (35:40):
Great, thanks very much for those thoughts, Shujaur. We've got several other thoughts from our panelists. So first of all, Michael, let's come across to you next.
Michael Clegg, Supermicro (35:50):
Yeah, one of the things we spoke about a little earlier on is the separation between the trained model and the foundational model. It's not necessarily clear that we need unique foundational models, and it's quite interesting what Orange announced recently: they're going to develop an abstraction layer so they can run their trained models on top of a number of different foundational models, in order not to get locked in to a particular model. I found that quite interesting when I saw it. But I think, really, on the trained model we already see this. Why do we need to optimize it for a specific domain? Well, we are already using fine-tuning, as one way to take a generic model and try to make it a little more domain specific, and we can use retrieval-augmented generation (oops, sorry, my energy-saving thing went in there), again taking a generic model and basically trying to augment it with data that's specific.
(36:51):
The far end of that is to train it specifically on your data. There's a lot of jargon in telcos. Different telcos, different enterprises have their own wording that they use internally. So even if you're trying to do customer support and you've structured your packages in certain ways and given them certain names, the model needs to know that, it needs to understand that. Same in your network: how you characterize data inside your network needs to go into the model. Some of this will be generic to a telco, so telcos can benefit all together. For example, if you fed the 3GPP standards into an LLM, it's going to become 3GPP-aware, so we can see the benefits of doing that. So it's a combination of eliminating extraneous data that's not relevant to the domain you're addressing, as well as making sure that the model has all the information relevant to the domain.
(37:41):
And I like what we said before in terms of these small language models, because training is the expensive piece, it's really where the money goes, and being able to have smaller language models is going to enable you to do inference much closer to the edge. And just one final comment: training is not a one-off. I think people don't always realize this, but once you've trained your LLM, new data is coming in all the time, so you have to constantly retrain the LLM, and today LLMs are a little bit batch-process trained. So that is an ongoing process that needs to happen all the time, and that's going to be a little easier if the telco has its own LLM and is narrowing that training to only the new data that's relevant to operations.
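The retrieval-augmented generation approach described above, augmenting a generic model with proprietary data at query time rather than retraining it, can be sketched roughly as follows. The documents, the word-overlap scoring and all the names here are illustrative assumptions for this sketch only, not any panelist's actual implementation:

```python
import re

# Hypothetical proprietary documents a telco might index for RAG.
DOCS = [
    "Package FastHome-500 is the 500 Mbps residential fibre tier.",
    "'Widget' is the internal name for the order-provisioning workflow.",
    "Customer PCI data must never leave the regional data store.",
]

def tokenize(text):
    return set(re.findall(r"[a-z0-9-]+", text.lower()))

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query. A real system
    would use embedding similarity; overlap keeps the sketch self-contained."""
    return sorted(docs,
                  key=lambda d: len(tokenize(d) & tokenize(query)),
                  reverse=True)[:k]

def build_prompt(query, docs):
    """Prepend retrieved context so an unchanged foundation model can
    answer in the operator's own jargon, without retraining."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What speed is FastHome-500?", DOCS))
```

The actual generation step (sending the assembled prompt to an LLM) is deliberately omitted; the point the panel makes is that the proprietary knowledge lives in the retrieval store, not in the model weights.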
Guy Daniels, TelecomTV (38:25):
Great, thanks for that, Michael. Very interesting. And yes, it was fascinating, the approach that Orange has announced it's taking; I was quite surprised by that. Right, we'll come to Martin in a second, but Warren, I'll come to you next.
Warren Bayek, Wind River (38:38):
Yeah, I think the panelists have really covered a lot of the major points that I was going to cover, and just to add to it, I like the separation of the large language models and the SLM versions, given Wind River's place sitting on the far edge of the network. As has been mentioned, training is really not something you do way out there at the far, far edge, but inferencing would be. So the ability to create large language models to go through large amounts of data, and I'm going to go back to the services delivery here, but then to be able to push out SLMs to the very far edges where the data can actually be inferenced and used in real time, that's going to be the critical piece, to me, to enable services. As I mentioned, some of these automotive services where we can do passive safety, and/or insurance companies may be interested in things that are happening in real time out at the far edge.
(39:31):
We can't do LLM-based training out there, but we can do inferencing at a very high level at the far edge, so I think the separation of that is very important. Also, it is interesting to note that one of the powers of the LLM is that it brings in data and learnings from other industries and other parts of intelligence, if you will; we may not see connections that the LLMs can create. So while I agree that it's important to create vertically specific models for this training, there'll always be a need to do some of the LLM training on all of the data, to make sure that all of the models are being updated at the highest resolution they can be and that we get the best possible scenarios baked into every vertical. One thing we've learned is that there's a lot of consistency across models that we may not notice as humans.
(40:34):
We may not see the inferencing and the ability to make those connections, but the training done in the LLMs will create better SLMs to be inferenced at the far edge. So I agree it's going to be a combination. I think it's important that we do the language translation; I thought it was interesting when it was mentioned that LLMs are typically done in English and they obviously need to be globalized. Well, telco has its own very specific language that many people would say isn't even English. So it's about having language models that understand telco-specific, network-specific terms, and each telco has a different language; we have a different language than some of our other vendors. So it's important that we continue to train the models on all of our specific lingos, if you will, and create models that can satisfy the entire industry in that way.
Guy Daniels, TelecomTV (41:28):
Absolutely. Thank you very much Warren and Martin, let's come across to you next for your views.
Martin Halstead, HPE (41:34):
Yeah, sure. So just, I guess, a relatively short point, which is that this industry is evolving. And what I mean by that is, if you look at the application vendors, the OSS/BSS vendors, their use of AI is currently more as feature sets of their existing products. So they have AI and machine-learning-based features for processing data for a particular domain of the telco's network, to do assurance, for example. In order for them to do that, that vendor has already selected the model types that they're going to have for their applications. As the industry evolves and the telcos start looking at having a common AI platform, you then need to start looking at just how far up the stack the telcos want to go. Do the telcos actually want to select the models and then expect the vendors, and the development community around that, to make use of those models that they have selected?
(42:53):
Or would the opposite be that the telco supplies the platform, they supply the development environment, but the application developer can select their own models and provide that as a vertically integrated application stack for OSS/BSS that is applicable to a particular domain of the telco? In doing that, the vendors then have the onus on which models they would use, be they LLMs, SLMs, whatever, depending on what the actual application set is. So I think it's very early days for the industry, but I would say for the telecoms operators themselves, having a view as to how far up the stack they want to go in selecting the AI capabilities for the network is going to become more and more key.
Guy Daniels, TelecomTV (43:51):
Great. Thanks, Martin. Some interesting choices there for telcos to make. And Beth, let's come across to you for your thoughts on this question, all around telco LLMs.
Beth Cohen, Verizon (44:03):
Well, I'll open with a comment, which is: the nice thing about standards is there are so many to choose from, and I think that applies here as well. So there is some work happening at the open-source and standards bodies on creating some standardized data sets, and underneath, of course, we have the IP packets and all that, which have been standardized for decades. But I think the problem that many of the telcos are struggling with is that they've been doing their standalone network stuff, if you will, and analysis and machine learning and automation, and all of that's been happening in kind of a silo. And so each one of the telcos has its own answer and its own BSS and OSS systems that have been fine-tuned over literally decades. So I think they're all struggling to apply AI to this sort of morass of data that's out there, and to apply it across all the different telcos is, I think, very difficult. And I think the vendors have good insight into that because, of course, many of the vendors, including people on this panel, work with multiple telcos and know that the data sets are in fact different across the different telcos.
(45:46):
So maybe that's an area where the vendors can really help to, not necessarily standardize, but at least make the connections: oh, telco A calls this particular process 'widget' and at another telco the same process is called 'wat', but they're really the same process and the same data. So I think there definitely does need to be some data standardization to really take AI to the next level, but I think we have a long way to go before we get there. So I guess, in the end, we need to do a mix of applying AI today to our own systems and, at the same time, working on some standard platform and standard language to allow us to take AI to that next level we need, to optimize across all the networks.
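The 'widget' versus 'wat' point above, different operators naming the same process differently, is essentially a terminology-mapping problem. A minimal sketch of the kind of cross-telco alignment a vendor could maintain might look like the following; the operator names, terms and canonical vocabulary are all invented for illustration:

```python
# Invented mapping from operator-specific terms to a shared canonical
# vocabulary, maintained by a vendor that works with multiple telcos.
CANONICAL_TERMS = {
    ("telco_a", "widget"): "service_activation",
    ("telco_b", "wat"): "service_activation",
    ("telco_a", "blip"): "fault_ticket",
}

def normalize(operator, term):
    """Map an operator-specific term to its canonical name, falling back
    to the raw term when no mapping is known."""
    return CANONICAL_TERMS.get((operator, term.lower()), term.lower())

# Two operators' records now align on the same canonical process name,
# so AI trained on one telco's data can be related to another's.
print(normalize("telco_a", "Widget"))
print(normalize("telco_b", "wat"))
```

The design choice here is that normalization happens before training or analysis, which is one way to get the cross-operator data alignment Beth describes without forcing every telco to rename its internal processes.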
Guy Daniels, TelecomTV (46:55):
Great. Thank you, Beth. Thanks, everybody, for those comments. And if you haven't watched the panel, available on demand today, that is purely focused on telco LLMs, then I really urge you to do so, because there's a lot more information in there, but we hope we've whetted your appetite with this live show question. Right, we have time for perhaps just one more question. We received this one via social media about an hour ago and we wanted to get it into the show. So let me read this one out: how many telcos are actually transforming into AI-native organizations? The theory sounds great, but what's the reality on the ground? What evidence are the panelists seeing? Right, Michael, are we able to come across to you to start this one? Are you seeing evidence of telcos actually making that transition to becoming AI-native?
Michael Clegg, Supermicro (47:55):
Yes, we were speaking a little bit about this before the show started, just amongst ourselves. One of the points we did highlight is how long these industries actually take to develop. This is a fairly long process that they'll be involved in. We've already spoken in today's session about multiple ways that AI can be used in the network, some fairly easy to start with, maybe customer operations, and some a little harder further down the stream. So I think the idea of going AI-native is not a big bang; it's more of an evolution for most of the telcos, and they will take their time to get there. I'll just go by what's been publicly announced. We see Singtel has made quite a few announcements around AI, has launched a number of services, and is partnering with other telcos in their region. So it depends on your definition of what a pure AI-native telco is.
(48:45):
For now, I'll just define it as a telco that is clearly taking action and implementing AI within the network, and maybe being public about it. So I'll put Singtel out there. SK Telecom is another one that has been fairly active in terms of pursuing their AI activities and a number of initiatives that they've announced publicly, I think on your show as well. As mentioned over here, Indosat in Indonesia has really taken on the sovereign AI thing, working on behalf of their government, and we've seen Deutsche Telekom announce they have over 50 trials of AI activities internally. We mentioned Orange looking at multiple language models. So we are seeing, publicly, a number of telcos starting, and I probably haven't mentioned them all, but a number of telcos making announcements as to their activities within the standards bodies. We belong to the AI-RAN Alliance, which is starting to look at activities particularly around AI in the RAN, and through the Telecom Infra Project there's a telco AI initiative. So I would definitely say that most telcos obviously recognize this, are using AI already in the IT side of their house, but are definitely going to look to do this internally. And I think 6G is going to be a very AI-native network; that's still half a decade away, but a lot of the learning today is going to be built in, and then it'll be fundamental in the architecture when 6G comes along.
Guy Daniels, TelecomTV (50:16):
Great. Thanks very much, Michael. Thanks for going through those examples as well, and yes, certainly we here are seeing a lot of evidence of telcos at least starting the process of becoming AI-native. As you say, there's often a long way to go. Shujaur, let's come across to you next for your observations.
Shujaur Mufti, Red Hat (50:33):
I think Michael pretty much shared quite a bit of the evidence there. Specifically, maybe I'll add the AI-RAN collaboration recently announced between NVIDIA and T-Mobile, with Ericsson and Nokia part of it, which has an architecture that actually supports RAN and AI in parallel on the AI Enterprise stack. I think one thing that we are seeing in the industry is that a significant amount of interest is developing in open-source LLMs as well, especially among the tier-two and tier-threes who cannot support creating their own telco LLM or investing big money there. I think they're looking at some of those open-source capabilities and models that are being developed. For example, Red Hat and IBM have open-sourced the Granite models, which have garnered quite a bit of interest, because the service providers who are actually engaged in conversation with us are looking into some of those open-source models in order to develop their own custom LLMs. I think we talked quite a bit in this panel about the need for having a generic telco LLM, further fine-tuning and training on the foundational model, and further dividing into small language models, but I think there's another work stream that is slowly starting to kick off, slowly starting to grow, around open-source LLMs. They are a foundational part of the Red Hat ecosystem as well, and these models are actually being adopted and looked at by the service providers in order to create their own very specific, very tailored custom LLMs.
Guy Daniels, TelecomTV (52:12):
Great, thank you, Shujaur. And Warren, did you want to come in on this final question about what evidence we're seeing of telcos actually becoming AI-native?
Warren Bayek, Wind River (52:22):
It's a slow process that's going to take some time. Look, let's face it, AI is a nascent industry, right? We're still all trying to figure out the best way to use it, the best way that it can help all of our industries, all of our products, all of our services. We see telcos starting to dip their toe in, and obviously they've already used it extensively in customer service on the IT side. We see a lot of it starting to happen on the operational side of the house, analyzing network capabilities, network operations, everything that's going on in the network. We see some really interesting work being done in the real-time RIC area of Open RAN, with energy savings and beamforming for radios, so that they can do some nice work there to save money and energy, which is obviously critical to telcos as that becomes a bigger and bigger part of their opex. And we see it on the vendor side.
(53:21):
Frankly, we're part of a lot of vendor work where we're using AI behind the scenes to deliver more virtualization-ready products. As the telcos in the 5G space are working toward this virtualized environment, one of the big problems that everyone's facing is the integration of all these disparate components, and the ability to use AI before delivery to create an ecosystem of partnerships where the products work together in a faster way. That's kind of where we see the telcos right now, dipping their feet in, sort of proving out the technology, that it's something that we can trust and use. I think it's a little while before it's being used in a way that impacts, as Beth said, the actual customer, the bits and bytes that are showing up on your phone, because for that it's more important that we have a model we can trust, AI information that we can trust, in a way that we frankly just can't trust it today.
Guy Daniels, TelecomTV (54:23):
Thanks very much, Warren. In fact, I was just making a note there: we've actually had seven years of the Cloud-Native Telco Summit, but this is only the second year of the AI-Native Telco Summit, so there's a lot still to cover, a long way still to go, and we look forward to being here in subsequent years and seeing how this sector develops. Well, look, thank you all very much indeed. We are out of time now. Thanks to all of you for joining us for this live program, and that's a wrap for this year's AI-Native Telco Summit. Thank you to all of those who submitted questions; we tried to cover as wide a range as possible in the time we had available. And a reminder that you can watch all of the programs from this year's summit on demand from our website, featuring this dazzling array of industry experts.
(55:17):
Thank you to all of our speakers and sponsors and viewers for supporting the DSP Leaders Summit series. Now, we'll be back next week for our extra short review program, when I will be joined by guest presenter Chris Lewis. We'll take one final look at the highlights from the summit, the main talking points, the key takeaways and, of course, the final poll analysis. And for those viewers watching us live, we are going to broadcast today's panel discussions immediately after this program, so stay with us. Our DSP Leaders series returns in November with the Next-Gen Telco Infra Summit. For now, though, thank you for watching and goodbye.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Live Q&A discussion
This Live Q&A Show was broadcast at the end of the second day of the AI-Native Telco Summit. TelecomTV’s Guy Daniels was joined by industry guest panellists for this question and answer session. Among the questions raised by our audience were:
- What are some of the pitfalls with AI-native telco strategies?
- How can telcos transform their teams for generative AI (GenAI), since up-skilling is a difficult task?
- How can AI help evolve a telco from a utility company into something different?
- Is telco LLM a specific LLM or a generic LLM that is trained for telco scenarios?
- How many telcos are actually transforming into AI-native organisations?
First Broadcast Live: October 2024

Beth Cohen
SDN Network Product Strategy, Verizon Business Group

Martin Halstead
Senior Distinguished Technologist, Aruba Telco Solutions, Hewlett Packard Enterprise

Michael Clegg
Vice President and General Manager for 5G and Edge, Supermicro

Shujaur Mufti
Senior Manager, Global Partners Solution Architecture Telecom, Media, & Entertainment, Red Hat

Warren Bayek
Vice President, Technology, Wind River