Charlotte Kan, TelecomTV (00:07):
Well, good morning everyone, and thank you for joining us here at the ZTE stage on day four of MWC for what promises to be a very candid and very practical discussion. Over the past few days here at MWC, AI has been absolutely everywhere, at the centre of all the conversations we have been having. So today we want to explore the hard questions, the ones around deployment, operations, and sovereignty. I'm Charlotte Kan, and I'm delighted to be with you to moderate this panel discussion titled Trilateral Shift: AI, AIOps, and the Sovereign Cloud Imperative. The title is deliberately open because we are witnessing a three-way transformation here, three forces converging in telecoms, changing both architecture and operations. First, the fact that AI is no longer an add-on; it's becoming embedded across networks, operations, and, importantly, also in decision-making. Second, AIOps is evolving really, really fast from automation to increasingly autonomous agent-based systems operating at scale.
(01:31):
And thirdly, this is happening against a backdrop of sovereignty pressures on data, cloud infrastructure, security, and control. And that's something that telcos need to integrate. So we are joined by a stellar panel this morning, bringing together perspectives from across the full telecom value chain from infrastructure to cloud platform, software, and operator experience. So let me introduce you to our wonderful speakers today. Sitting to my left, Dr Volkan Sevindik, who's Chief Technology Officer at StarHub, bringing the essential operator perspective here. Laurence Fejit, Director of Partner Sales, APAC at Red Hat. Quan Wang, who's Vice President at ZTE, Cristina Rodriguez, Vice President, Network and Edge at Intel. And finally, Wu Zhouxi who's Head of Cloud Solutions at Whale Cloud. And can we start this discussion with a round of applause, please, to warm up the room and our speakers here on stage. Thank you very much.
(02:40):
So we're going to start by grounding this discussion in a simple but very critical question around the real value of AI in telecoms and for telecom operators and how this value can be measured. So I'm going to turn to you, Volkan, to start with, how do we address the value of AI in the telco market right now in 2026?
Dr Volkan Sevindik, StarHub (03:03):
Okay, thank you. So the main value for us is at the agentic layer. When I joined StarHub, I started full automation at our network operations centre. So for us, it's more about automating the processes, leading to OpEx reduction. From that perspective, we determine the value in terms of how many processes we can automate using how many agents. Again, that directly correlates with the hardware investment we have to make at that layer. So I can get into more details, but this is mainly what I've been looking at.
Charlotte Kan, TelecomTV (03:46):
Thank you very much. Whale Cloud now, where do you see the value of AI in the telecom sector?
Wu Zhouxi, Whale Cloud (03:52):
All right. Thank you, Charlotte. I actually have a few, I would say, unconventional thoughts on the value of AI to the telco industry. The first is that I think AI is, and will be, an important medium for telcos to exercise their social responsibility. That sounds a bit vague, right? Let me explain. When we think of AI, when we interact with AI on a daily basis, we think of it as a neutral, or dual-use, kind of agent. And when we put AI into the telecom setting, we think about higher efficiency, better experience, all that good stuff. It's all rainbows and butterflies, right? But let me share some information from inside the industry: OpenAI actually had legal trouble with the authorities in 2025, apparently because, during a period of time at least, their models were too warm, too agreeable, too encouraging, whatever the incoming question may be.
(04:52):
That resulted in some very negative impact on a particular person. From a telco perspective, we are carrying the weight of delivering AI, of putting AI into the hands of millions of customers. We simply cannot make that kind of mistake at that scale. So it is our responsibility to make sure the AI capability we incorporate into our services is responsible, secure, and mindful of the social norms and cultures of the markets that we serve. So that is point number one. Point number two: I think mass adoption of AI really pushes us telcos to look at the long-term ROI of AI services. Maybe I can start with my personal experience. When I first started online shopping back in my college days, a long time ago, all I thought about was, oh, this is a great deal.
(05:48):
I can save money. At the end of 2025, when I looked at my NOV for one of the e-commerce sites, I was shocked. How did I get here? How did I spend so much money? On a personal level, we know that if something becomes a trend, we need to look at the long-term costs. I think we see the same thing with cloud computing: long-term, large-scale consumption of cloud resources can be very, very expensive. And we saw that when Mr Elon Musk purchased Twitter, now X, one of the first aggressive cost-cutting measures was moving away from one of the hyperscalers because, apparently, the cost was too high. So we think AI will follow a similar path: because it is so good, because it has so much potential, we will see exponential growth in AI power consumption and AI tokens in the future.
(06:37):
So we really need to look at the long-term TCO, not just the benefits. What I'm trying to say is that the real question is: how can we find the most efficient way of incorporating AI into the telco business, at large scale and over the long term? That is point number two. Point number three: during my stay at Mobile World Congress 2026, I've had at least two occasions where leading telcos came to our booth and asked, can I have some AI-powered B2B application that I can resell? So this gave me some inspiration. We think AI can provide an upgrade path for our existing B2B portfolio. Maybe we can incorporate an agent in our call centre, connect that call centre agent to a public response agent, so we can provide better coordination in terms of providing medical responses, something like that.
(07:34):
In the past, without AI, this kind of integration via coding would be difficult, but with AI, it is possible. And this is somewhere the telco industry has a unique advantage, because all of the foundation model players focus on the capability of the models, not so much on mass adoption into actual applications that impact millions of people. So those are my unconventional thoughts on the topic.
Charlotte Kan, TelecomTV (07:59):
So three key points here. First, you have to ensure that you develop and adopt AI in a very responsible manner. That's the first point. The second one, yes, value, but not at all costs. You have to think about the long term. And thirdly, I think it's around interoperability really and making sure that your full value chain benefits from it. Thank you very much for that. And I'd like to hear Quan Wang, your perspective here at ZTE on the value of AI in the telco sector. Where do you see it and how do you harness it?
Quan Wang, ZTE (08:35):
Okay. At ZTE, I'm in charge of the core network, and the core network is something like the centre of the network, so I want to talk from the core network aspect. I think there are several areas where introducing AI brings value to the telecom system. The first is that the core network provides the services to the customer. When we bring AI into this system, it adds a lot of new features to the telecom services, like AI new calling, AI noise cancelling, AI anti-fraud. So the customer's user experience is greatly improved. Yeah. The second is the AI network, because all the data, and now even tokens, flow through the core network. So how to make use of AI to make the network more efficient is very important. ZTE actually provides network elements embedded with AI, like the NWDAF, and functions like smart DPI, to make the network more efficient.
(09:55):
This helps the operator to reduce the whole TCO, especially the OpEx, I think. The third one is the AIOps you just mentioned, because the core network, especially after containerisation and virtualisation, has become very complex, and it's very hard to do operation and maintenance, while the stability of the core network is very important. So when we use AI agents, and even digital tools, which we have already begun to use, to help the operator operate and maintain the network, these things become much easier and much more accurate. So that's all. Yeah.
Charlotte Kan, TelecomTV (10:50):
So removing all the layers of complexity, thank you very much for that. For my next question now, I'm going to start with you, Laurence, at Red Hat, because in a telco end-to-end solution, different vendors provide hardware, cloud platform applications, and all sorts of related services. So what's the key point here to make it all very smooth and successful?
Laurence Fejit, Red Hat (11:13):
Yeah. So I think that the key to a smooth and successful collaboration across vendors really resides in the definition of an open, common hybrid cloud platform that provides clear architectural boundaries, standardises lifecycle management, and provides some common, shared processes across vendors. And by consistently leveraging Red Hat OpenShift as that common hybrid cloud platform, hardware vendors as well as application software providers and system integrator partners can really integrate but also operate independently, while seamlessly aligning on security, automation, and data operations. So I think the benefits are really manifold, from both a vendor perspective and a customer perspective, because it obviously reduces vendor friction, it minimises the integration effort, and it shifts the whole engagement into a more repeatable, platform-driven collaboration between the different players. From a customer perspective, this is also very beneficial because it allows customers to scale their network much quicker and also onboard innovation such as AI much faster into their network.
Charlotte Kan, TelecomTV (13:07):
Thank you very much for that. Cristina, I'd like to get Intel's perspective on this. How do you make this coordination smooth and successful?
Cristina Rodriguez, Intel Corporation (13:14):
Yeah. Well, first of all, I want to start by saying that we have the technology to deploy AI today: the silicon, the platform, and certainly the AI use cases. And I'm going to talk a little bit about that, about the practicality, where it makes sense to use AI and what we can deploy today. I like to look at it in two groups. The first group is everything related to the radio algorithms. There is a lot that can be done today, with the technology and the models that exist today, to take the radio algorithms, which were already very optimised, to the next level. For example, link adaptation, channel estimation, beamforming, anything that will make the spectrum more efficient. And we're seeing demos, and we're seeing operators looking into that already.
(14:17):
So that's one side. The other group is anything that could make the network, and some of my colleagues have touched on that, more efficient, more optimised. For example, power management: we can use AI today to bring power consumption down, right? That's one case, using the capabilities of the silicon, in the case of Intel of course, but also of the platform and the entire system. We can also use it for things like preemptive, predictive maintenance, and we can use it for self-healing. We actually have a really good demo with Red Hat of agents talking to each other, doing debugging and self-healing of the network. So really, really cool stuff. Bottom line: we have the technology now. We can deploy. We don't have to wait six years, for sure. We can start deploying the AI use cases.
(15:24):
These use cases also use small models. You don't need to add any major components to the network, and you don't need to consume a lot of power in the network. You have small models, and what you're doing is inference, not training. Inference on a small model can be deployed on live networks right now.
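To make concrete what "inference on a small model, on existing CPUs" can look like, here is a toy sketch in Python. Everything in it is hypothetical: the weights, feature names, and threshold are invented for illustration and are not an Intel or operator model. The point is only that a single forward pass of a small model is a trivial CPU workload, with no training step and no GPU involved.

```python
import math

# Hypothetical coefficients for two KPI features: [utilisation, normalised SNR].
WEIGHTS = [1.8, -2.0]
BIAS = -0.2

def anomaly_score(utilisation: float, snr_norm: float) -> float:
    """One forward pass of a tiny logistic-regression 'model':
    a handful of multiply-adds and one exp, comfortably CPU-bound."""
    z = WEIGHTS[0] * utilisation + WEIGHTS[1] * snr_norm + BIAS
    return 1.0 / (1.0 + math.exp(-z))  # probability the cell looks anomalous

# A heavily loaded, low-SNR cell scores high; a healthy cell scores low.
print(anomaly_score(0.95, 0.2) > 0.5)  # True
print(anomaly_score(0.30, 0.9) > 0.5)  # False
```

A real deployment would load learned weights rather than hand-set constants, but the runtime cost per inference stays in this same tiny-forward-pass regime.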
Charlotte Kan, TelecomTV (15:45):
Cristina, I'm very glad you brought up the point about reducing energy consumption, because a few years ago we were having lots of conversations around sustainability here at MWC, and that's not really the case this year, because AI is literally eating not just the world but all the conversations we're having here. So thank you for that. But I'd like to get your point of view at ZTE, and at Whale Cloud as well, on this question of smooth coordination between all the different parts that make up the value chain.
Quan Wang, ZTE (16:15):
Okay. I think the key point is cooperation. It has to come from deep down: you have to genuinely want openness, because it's not about just saying we are open. It's about resource investment. For example, ZTE has open labs worldwide to cooperate with Intel, with Red Hat, with Whale Cloud. Every time we release a version, at our R&D centre we work in deep, close cooperation with Intel and with Red Hat to prove that the version can work together smoothly. And before we release that version for deployment in the operators' networks, we still invest a lot of integration engineers to do pre-testing. So it's not just talk, it's a lot of resource investment, and ZTE is really doing this now. We already have many networks worldwide using Intel CPUs and the Red Hat platform.
(17:33):
And we are continuing to cooperate with each other to bring AI into this network and into this system together. Yeah.
Laurence Fejit, Red Hat (17:41):
Thank you.
Wu Zhouxi, Whale Cloud (17:43):
All right. Again, I have a few unconventional responses, maybe with some explanations. We think mass AI adoption in the telco industry is much more of a marathon than a hundred-metre sprint, so we need to play the long game. I think there are a few guiding principles that all of us in the industry share and abide by. The first is that we need to do the right thing. For example, at Whale Cloud we are in the business of building sovereign cloud and sovereign AI cloud for our customers. Everybody has business targets and revenue targets to hit, but I think we should refrain from chasing after the biggest, fanciest, most expensive thing for our customers. The end result should be driven by the customer's requirements and the value being realised, and we should work towards that. So: do the right thing.
(18:36):
The second point is that I think we should do things the right way. For example, when we deliver a project, instead of choosing the cheapest option, we choose trusted partners. In a project we delivered at StarHub, we chose ZTE hardware, OCP, as well as Intel CPUs. We know that at the very beginning these might not be the most affordable options, but from a mid- to long-term TCO perspective, those are trusted partners: they have a clear roadmap and they certainly have the ability to deliver on time. So that's my second point: do things the right way. And the third point is that we need to find our own niche, our own positioning. In the telco industry, we are maybe not at the very front when it comes to large language models, but we have a natural connection to our customers, a natural connection to our enterprises, and we follow the rules and regulations of the local markets that we serve.
(19:31):
Those might not be factors that OpenAI thinks about, but this is the situation we are in. This is our advantage, the unique capability and information that we have, and we need to leverage that: find our niche, find our position in this AI long game. Thanks.
Charlotte Kan, TelecomTV (19:50):
Some very, very valid points here. Thank you very much for that. Another question now for you, actually, Volkan at StarHub: as AIOps moves from automation to autonomous, agent-based operations, what should we do to make AIOps solutions production-ready at scale?
Dr Volkan Sevindik, StarHub (20:17):
So my priority is more on the guardrails around the LLM, but as a corporation, we are not just using LLMs. We are using small language models and non-language models that run on x86 architecture rather than GPUs. So we are not really going after the hype and making significant GPU investments. We are using nano-models, each doing one thing perfectly, running fully end to end and executed on x86 architecture. Of course, that comes with a huge number of agentic AI nano-models we have to run, but that's more of a communication and integration issue. As a corporation, what we are doing today is leveraging our existing x86 architecture from Intel and various other chip manufacturers, using the current hardware first rather than investing in NPUs and GPUs. And once we reach the limit of our current hardware, of course we might go to more neural-processing and GPU-level augmentation.
(21:32):
But what I believe is that the current x86 infrastructure is more than enough to run a significant number of AI models. So that's the first thing. The second thing is that once you increase the number of agents, the communication between agents becomes an issue. How do you solve that? The way we solve it, together with our partners at Red Hat, is by developing one LLM engine to manage all these nano and small language models. The LLM engine works like an orchestrator at the application layer, managing all these nano-models and small models at scale. That's what we are doing today; that's what is in production. The third part is: how can I create guardrails around the models? Because a model cannot really work independently, it shouldn't make certain decisions independently. We also have a policy engine running on x86 architecture. The policy engine works like a PAM, a policy access management system, enforcing certain policies on the decisions each nano-model should make or can make.
(22:49):
So from that perspective, starting from nano-models, going to small language models, using an LLM for orchestration and another, maybe small-scale, LLM for policy enforcement across these systems, and running them on the current x86 architecture, is what is working today. For tomorrow, I don't know: maybe the complexity will increase on the LLM side and on the SLM side, and we will move to GPU- and NPU-level architecture. But from what I can see today, it is not really needed, and current CPU architectures are advanced enough to handle these tasks at scale.
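The pattern described above, many single-purpose nano-model agents, an orchestrator routing work between them, and a policy engine gating their decisions, can be sketched roughly as follows. This is an illustrative toy, not StarHub's implementation; every agent name, action, and rule here is invented, and the "agents" are stubbed with simple rules rather than real models.

```python
# Each agent is allowed only an explicit set of actions (PAM-style allow-list).
ALLOWED_ACTIONS = {
    "noise_agent": {"flag_cell", "open_ticket"},
    "capacity_agent": {"open_ticket"},
}

class PolicyEngine:
    """Guardrail layer: authorises or blocks each proposed action."""
    def authorise(self, agent_name: str, action: str) -> bool:
        return action in ALLOWED_ACTIONS.get(agent_name, set())

class NanoAgent:
    """A tiny single-purpose model, stubbed here with a rule-based handler."""
    def __init__(self, name, handler):
        self.name, self.handler = name, handler
    def propose(self, task: dict) -> str:
        return self.handler(task)

class Orchestrator:
    """Plays the 'LLM engine' role: routes tasks to agents and enforces policy."""
    def __init__(self, agents, policy):
        self.agents = {a.name: a for a in agents}
        self.policy = policy
    def run(self, agent_name: str, task: dict):
        action = self.agents[agent_name].propose(task)
        if not self.policy.authorise(agent_name, action):
            return ("blocked", action)  # guardrail hit: escalate to a human
        return ("executed", action)

agents = [
    NanoAgent("noise_agent",
              lambda t: "flag_cell" if t["snr_db"] < 5 else "open_ticket"),
    NanoAgent("capacity_agent", lambda t: "reboot_node"),  # not on its allow-list
]
orchestrator = Orchestrator(agents, PolicyEngine())

print(orchestrator.run("noise_agent", {"snr_db": 3}))     # ('executed', 'flag_cell')
print(orchestrator.run("capacity_agent", {"load": 0.9}))  # ('blocked', 'reboot_node')
```

The useful property of this shape is that the guardrail sits outside the agents: even if a model proposes an unsafe action, it is the policy engine, not the model, that decides whether it runs.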
Charlotte Kan, TelecomTV (23:27):
That's the point you made earlier. It's not about investing left, right, and centre, but using existing resources and capabilities to deliver that change.
Cristina Rodriguez, Intel Corporation (23:36):
Yeah. Let me add to that; I totally agree with you. There is obviously the economic part of it: in order for AI to be deployed, it has to make economic sense too. And to do that, you need to figure out the right compute for the right workload. If you have a network today, an infrastructure today, that can do the job and can run the AI workload, use it. You don't need to deploy additional components, you don't need to expand further, you don't need to consume more power. So I think that's an important piece of making sure that we get this going in a way that makes economic sense.
Charlotte Kan, TelecomTV (24:27):
Thank you very much for that. Laurence.
Laurence Fejit, Red Hat (24:29):
Yeah, no, I was going to add onto that as well. I think the question really taps into several dimensions. There is obviously the shift from scripts to agents, but also, I think, the criticality of addressing the trust and transparency gap, because if autonomous networks are not predictable, they will fail at scale. Hence, making AIOps production-ready at scale as it transitions from automation to autonomous, agent-based operation really requires grounding the intelligence in a cloud-native, process-driven control plane, rather than having it distributed and siloed across different AI models. And by running AIOps agents on Red Hat OpenShift and Red Hat OpenShift AI, we help partners such as ZTE and Whale Cloud, but also telco customers, to really scale autonomy across all their domains while maintaining explainability and compliance. And I think this is really important, because that's what will give customers such as StarHub the confidence to move from their current AI experimentation stage to full-fledged, trusted, fully scaled networks.
(26:19):
So to summarise: I think the key is really to leverage the existing cloud-native infrastructure and transform it into an AI-native infrastructure. That's what will allow that shift and that full-scale rollout of AIOps into the network.
Charlotte Kan, TelecomTV (26:51):
So we are hearing it yet again: you have to leverage your existing resources, infrastructure, everything you've got, in order to deliver that change to autonomous operations. We'd love to pursue this conversation, but it's very busy here at MWC; of course, you're running from meeting to meeting. So I have one more question, maybe, to wrap up this conversation, and it's related to our theme here today, this trilateral shift of AI, AIOps, and the sovereign cloud imperative. I'd like to ask you all to come up with a word or a sentence to tell us what you think success will look like in addressing this trilateral shift. What's key here? What's really important? Starting with you, Volkan.
Dr Volkan Sevindik, StarHub (27:44):
For me, what's really important is selecting the right models, to be executed on your current infrastructure, to resolve one issue very well, and after that to scale up, rather than making the hardware investment first and finding the models later. Right now, the industry is going in a direction where the models are selecting the hardware they want to be executed on. Models are so smart right now; there is research going on where models change themselves based on the objective, to run, for instance, even on a very small-scale CPU or microcontroller. So I think the market will only expand if the models can run on the current architecture. That also helps companies to adopt the models rather than making billions of dollars of GPU and hardware investment. So that's what I would say: use your current infrastructure first to solve one problem very well, and after that you can scale up easily once you prove the value.
(28:59):
Thank you.
Charlotte Kan, TelecomTV (29:00):
Thank you. Laurence.
Laurence Fejit, Red Hat (29:03):
So I would say that I see it as a gradual process that requires, I think, operational trust, as well as architectural openness and gradual, incremental autonomy, rather than trying to go from day one to full autonomy. I think it's really a journey rather than anything else. So that's what I would say. Yes. Thank you.
Quan Wang, ZTE (29:43):
Okay. For me, I think the word is open. I think that's the first reason why we all sit here together to cooperate with each other. For operators, open is very important: it means an operator like StarHub can choose different vendors for the hardware, the platform, and the applications. For the platform, it means the platform can onboard different applications and cooperate with core networks such as ZTE's. And a lot of people are talking about open-source models; for the applications, we can choose different models, which is highly efficient. And for the hardware, operators also want more choice: Intel, Nvidia, AMD. That is also very important for openness. So I think the keywords for me are open and cooperation.
Charlotte Kan, TelecomTV (30:53):
Thank you, Quan Wang. Cristina.
Cristina Rodriguez, Intel Corporation (30:56):
Yeah, I'll be brief. I'll say two things. Number one, step one: we need a fully virtualised network, a software-defined network all the way from the cloud to the core, to the RAN, to the edge. I think the edge is going to become very important; we see a lot of momentum and a lot of use cases ready to go right now on the edge. And the second thing is the right compute for the right workload. Don't overdo it: if you have the technology already deployed, or if you have a more cost-effective way to run your workload, do it. The right compute for the right workload. Thank you.
Wu Zhouxi, Whale Cloud (31:36):
All right. I think my keywords are stay consistent: stay consistent with the principles I've just mentioned. It's about staying in your own lane and focusing on the markets you really want to serve. For me personally, it's about providing flexible, tiered solutions to customers who want to build sovereign AI and sovereign cloud at different scales. And the second thing is that we should stay consistent in developing our technology and finding our core competence, so that in the long game we can sustain, build, and serve the society and customers in the markets that we're in. Thank you.
Charlotte Kan, TelecomTV (32:16):
Thank you very much. So we promised you a very pragmatic, practical, and very open discussion. I think we delivered here this morning. A big round of applause, please for all our fantastic speakers here on stage. Lots of really important takeaways and points here, but thank you for sharing your insights with us this morning. And many thanks to you and the audience as well for your attention this morning. Enjoy the rest of your day, day four of MWC. Thank you.
Well, good morning everyone, and thank you for joining us here at the ZTE stage on day four of MWC on what promises to be a very candid and very practical discussion. So over the past few days here at MWC, AI has been absolutely everywhere at the centre of all the conversations we have been having here. So today we want to explore the hard questions, the ones around deployment, operations, and sovereignty. I'm Charlotte Kan. I'm delighted to be with you to moderate this panel discussion titled Trilateral Shift: AI, AIOps, and the Sovereign Cloud Imperative. And the title is deliberately open because we are witnessing a three-way transformation here, three forces converging in telecoms, changing both architecture and operations. So fast, the fact that AI is no longer an add-on, it's becoming embedded across networks, operations, and importantly, also in decision-making. Second, AIOps is evolving really, really fast from automation to increasingly autonomous agent-based systems operating at scale.
(01:31):
And thirdly, this is happening against a backdrop of sovereignty pressures on data, cloud infrastructure, security, and control. And that's something that telcos need to integrate. So we are joined by a stellar panel this morning, bringing together perspectives from across the full telecom value chain from infrastructure to cloud platform, software, and operator experience. So let me introduce you to our wonderful speakers today. Sitting to my left, Dr Volkan Sevindik, who's Chief Technology Officer at StarHub, bringing the essential operator perspective here. Laurence Fejit, Director of Partner Sales, APAC at Red Hat. Quan Wang, who's Vice President at ZTE, Cristina Rodriguez, Vice President, Network and Edge at Intel. And finally, Wu Zhouxi who's Head of Cloud Solutions at Whale Cloud. And can we start this discussion with a round of applause, please, to warm up the room and our speakers here on stage. Thank you very much.
(02:40):
So we're going to start by grounding this discussion in a simple but very critical question around the real value of AI in telecoms and for telecom operators and how this value can be measured. So I'm going to turn to you, Volkan, to start with, how do we address the value of AI in the telco market right now in 2026?
Dr Volkan Sevindik, StarHub (03:03):
Okay, thank you. So the main value for us is at the agentic layer. And when I joined StarHub, I started full automation at our network operation centre. So for us, it's more like automating the processes, leading the OpEx reduction. So from that perspective, we determine the value in terms of how many processes we can automate using how many agents. Again, that directly correlates with the hardware investment we have to make at that layer. So I can get into more details, but this is mainly what I've been looking at.
Charlotte Kan, TelecomTV (03:46):
Thank you very much. Whale Cloud now, where do you see the value of AI in the telecom sector?
Wu Zhouxi, Whale Cloud (03:52):
All right. Thank you, Charlotte. I actually have a few, I would say, unconventional thoughts on AI value to the telco industry. So the first thing I think AI is and will be an important medium for telcos to exercise their social responsibility. It's kind of vague, right? Let me explain. So when we think of AI, when we interact with AI on a daily basis, we think it's a neutral or dual kind of agent. And when we put AI into the telecom setting, we think about higher efficiency, better experience, all that good stuff. It's all about rainbows and butterflies, right? But at least let me tell you some information or having it in the industry is that actually OpenAI have had its legal trouble with the authorities in 2025, apparently because during a period of time at least, their models were actually too warm, too agreeable, too encouraging whatever the question may be coming from.
(04:52):
So that resulted in some very negative impact to a particular person. So from a telco perspective, we are carrying the weight of delivering AI or incorporating AI into the hands of millions of customers. We simply cannot make that kind of mistake at that scale. So it is our responsibility to make sure the AI capability we incorporate into our service is responsible, secure, and be cautious of the social norms and cultures of the markets that we serve in. So this is the point number one. Point number two, I think mass adaptation of AI really pushed us telcos, thinking that we need to look at the long-term ROI of AI services. Maybe we can start my personal experience. When I first started to do online shopping back in my college days, that's a long time ago. All I think about is, oh, this is a great deal.
(05:48):
I can save money. At the end of 2025, when I look at my NOV for one of the e-commerce sites, I was shocked. How did I get here? How did I spend so much money? On a personal level, we know that if something became a trend, we need to look at the long-term costs. I think we see the same thing with cloud computing. Long-term, large-scale consumption of cloud resources can be very, very expensive. And we see when Mr Elon Musk purchased Twitter, now X. One of the first aggressive cost cutting measures is that they moved away from one of the hyperscalers because apparently the cost is too high. So we think the AI will follow a similar path because it is so good, it has so many potentials, we will see an exponential growth of AI power consumption, AI tokens in the future.
(06:37):
So we really need to look at the long-term TCO, not just the benefits. What I'm trying to say is that the real question is: how can we find the most efficient way of incorporating AI into the telco business in a large-scale, long-term pattern? That's point number two. Point number three: during Mobile World Congress 2026, I've had at least two occasions where leading telcos came to our booth and asked, can I have an AI-powered B2B application that I can resell? That gave me an inspiration. We think AI can provide an upgrade path for our existing B2B portfolio. Maybe we can incorporate an agent in our call centre and connect that call-centre agent to a public response agent, so we can provide better coordination in delivering medical responses, something like that.
(07:34):
In the past, without AI, this kind of integration via coding would have been difficult, but with AI it is possible. And this is somewhere the telco industry has a unique advantage, because the foundation model players all focus on the capability of the models, not so much on mass adoption into actual applications that impact millions of people. So those are my unconventional thoughts on the topic.
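The agent-to-agent hand-off described above can be sketched in outline. The following toy Python example is purely illustrative: the class names, the keyword-based classifier, and the dispatch message are assumptions for the sketch (in practice the classifier would be a language model and the partner agent an external service), not any vendor's actual product.

```python
# Hypothetical sketch: a telco call-centre agent classifies an incoming
# request and hands it off to a partner "public response" agent.
from dataclasses import dataclass

@dataclass
class HandoffRequest:
    caller_id: str
    transcript: str
    category: str  # e.g. "medical", "billing"

class PublicResponseAgent:
    """Stand-in for an external emergency/medical response agent."""
    def handle(self, req: HandoffRequest) -> str:
        return f"dispatching medical response for caller {req.caller_id}"

class CallCentreAgent:
    def __init__(self, partners):
        self.partners = partners  # category -> partner agent

    def classify(self, transcript: str) -> str:
        # In production this would be a small language model; a keyword
        # heuristic keeps the sketch self-contained.
        if "chest pain" in transcript or "ambulance" in transcript:
            return "medical"
        return "billing"

    def route(self, caller_id: str, transcript: str) -> str:
        category = self.classify(transcript)
        partner = self.partners.get(category)
        if partner is None:
            return "handled in-house"  # no partner agent for this category
        return partner.handle(HandoffRequest(caller_id, transcript, category))

centre = CallCentreAgent({"medical": PublicResponseAgent()})
print(centre.route("C-42", "my father has chest pain"))
# -> dispatching medical response for caller C-42
```

The point of the pattern is that the telco-side agent only classifies and routes; the domain expertise stays with the partner agent behind a narrow interface.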
Charlotte Kan, TelecomTV (07:59):
So three key points here. First, you have to ensure that you develop and adopt AI in a very responsible manner. That's the first point. The second one, yes, value, but not at all costs. You have to think about the long term. And thirdly, I think it's around interoperability really and making sure that your full value chain benefits from it. Thank you very much for that. And I'd like to hear Quan Wang, your perspective here at ZTE on the value of AI in the telco sector. Where do you see it and how do you harness it?
Quan Wang, ZTE (08:35):
Okay. At ZTE, I'm in charge of the core network, which sits at the centre of the network, so I want to talk from the core network's perspective. I think there are several areas where introducing AI brings value to the telecom system. The first: the core network provides services to the customer, and when we bring AI into the system it adds a lot of new features to telecom services, like AI new calling, AI noise cancelling, and AI anti-fraud, so the user experience is greatly improved. The second is the AI network, because all data, even AI tokens, pass through the core network, so using AI to make the network more efficient is very important. ZTE provides network elements embedded with AI, like the NWDAF and functions like smart DPI, to make the network more efficient.
(09:55):
This has helped operators reduce overall TCO, especially OpEx. The third one is AIOps, which you mentioned. Especially after containerisation and virtualisation, the core network has become very complex, operation and maintenance is very hard, and the stability of the core network is very important. So we use AI agents and digital tools, which we have already begun to deploy, to help operators operate and maintain the network, making these things much easier and much more accurate. So that's all. Yeah.
Charlotte Kan, TelecomTV (10:50):
So removing all the layers of complexity, thank you very much for that. For my next question now, I'm going to start with you, Laurence, at Red Hat, because in a telco end-to-end solution, different vendors provide hardware, cloud platform applications, and all sorts of related services. So what's the key point here to make it all very smooth and successful?
Laurence Fejit, Red Hat (11:13):
Yeah. No, so I think that the key to a smooth and successful collaboration across vendors really resides in the definition of an open common hybrid cloud platform that provides clear architectural boundaries, standardises lifecycle management, as well as provides some common shared processes across vendors. And by leveraging consistently Red Hat OpenShift as that common hybrid cloud platform, hardware vendors as well as application software providers and system integrator partners can really integrate, but also operate independently and while they seamlessly align on security, automation and data operations. So I think that in that respect, the benefits are really multifold, both from a vendor perspective and a customer perspective because it obviously reduces the vendor friction. It also minimises the integration efforts and it also shifts really the whole engagement into a more repeatable and platform-driven collaboration between the different players. From a customer perspective, I think this is also very beneficial because it allows customers to scale their network much quicker and also onboard innovation such as AI much faster into their network.
Charlotte Kan, TelecomTV (13:07):
Thank you very much for that. Cristina, I'd like to get Intel's perspective on this. How do you make this coordination smooth and successful?
Cristina Rodriguez, Intel Corporation (13:14):
Yeah. Well, first of all, I want to start by saying that we have the technology today to deploy AI today: the silicon, the platform, and certainly the AI use cases. And I'm going to talk a little bit about that. The way I look at it, in terms of practicality, where can we use AI and what can we deploy today? I like to look at it in two groups. The first group is everything related to the radio algorithms. There is a lot that can be done today, with the technology and models that exist today, to take the radio algorithms, which were already very optimised, to the next level. For example, link adaptation, channel estimation, beamforming, anything that makes the spectrum more efficient. We're seeing demos, and we're seeing operators looking into that already.
(14:17):
So that's one side. The other group is anything that can make the network, and some of my colleagues have touched on this, more efficient and more optimised. For example, power management: we can use AI today to bring power consumption down, using the capabilities of the silicon, in Intel's case of course, but also of the platform and the entire system. We can also use it for things like predictive maintenance and self-healing. We actually have a really good demo with Red Hat with agents doing debugging and self-healing of the network. Really, really cool stuff. So, bottom line, we have the technology now. We can deploy. We don't have to wait for six years, for sure. We can start deploying the AI use cases.
(15:24):
These use cases also use small models. You don't need to add any major components to the network, and you don't need to consume a lot of power in the network. You have small models, and what you're doing is inference, not training. Inference on a small model can be deployed on live networks right now.
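To make the "small model, inference only" point concrete, here is a minimal sketch of the kind of lightweight detector that could flag anomalous telemetry (say, a power spike on a node) for predictive maintenance. It is an exponentially weighted moving average with a fixed deviation threshold; the threshold, smoothing factor, and readings are all illustrative assumptions, not any Intel or operator implementation.

```python
# Minimal sketch: EWMA-based anomaly flagging on streaming telemetry.
# A few arithmetic operations per sample -- trivially served by the
# existing x86 CPUs already in the network, no GPU required.

class EwmaDetector:
    def __init__(self, alpha: float = 0.3, threshold: float = 2.0):
        self.alpha = alpha          # smoothing factor for the running average
        self.threshold = threshold  # allowed deviation before flagging
        self.avg = None

    def update(self, value: float) -> bool:
        """Feed one telemetry sample; return True if it looks anomalous."""
        if self.avg is None:
            self.avg = value  # first sample seeds the average
            return False
        anomalous = abs(value - self.avg) > self.threshold
        # Update the running average with the new sample.
        self.avg = self.alpha * value + (1 - self.alpha) * self.avg
        return anomalous

det = EwmaDetector()
readings = [10.0, 10.2, 9.9, 10.1, 15.0, 10.0]  # spike at index 4
flags = [det.update(r) for r in readings]
print(flags)  # only the spike is flagged: [False, False, False, False, True, False]
```

A production "small model" would be richer than this, but the shape is the same: cheap per-sample inference over a live stream, with no training loop in the network.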
Charlotte Kan, TelecomTV (15:45):
Cristina, I'm very glad you brought up the point about reducing energy consumption, because a few years ago we were having lots of conversations around sustainability here at MWC, and that's less the case now, because AI is literally eating not just the world but all the conversations we're having here. So thank you for that. But I'd like to get your point of view at ZTE, and at Whale Cloud as well, on this question of smooth coordination between all the different parts that make up the value chain.
Quan Wang, ZTE (16:15):
Okay. I think the key point is cooperation. Openness has to come from deep down; it's not about just saying we are open, it's about resource investment. For example, ZTE has open labs worldwide to cooperate with Intel, with Red Hat, with Whale Cloud. Every version we release, our R&D centre works in deep, close cooperation with Intel and Red Hat to prove that the version works together smoothly. And before we release that version for deployment in operators' networks, we still invest a lot of integration engineers in pre-testing. So it's not just talk, it's a lot of resource investment, and ZTE is really doing this now. We already have many networks worldwide using Intel CPUs and the Red Hat platform.
(17:33):
And we are continuing to cooperate with each other to bring AI into these networks and systems together. Yeah.
Laurence Fejit, Red Hat (17:41):
Thank you.
Wu Zhouxi, Whale Cloud (17:43):
Right. Again, I have a few unconventional responses, maybe with new explanations. We think mass AI adoption in the telco industry is much more of a marathon than a hundred-metre sprint, so we need to play the long game. I think there are a few guiding principles that all of us in the industry share and abide by. The first is that we need to do the right thing. For example, Whale Cloud is in the business of building sovereign cloud and sovereign AI cloud for our customers. Everybody has business targets and revenue targets to hit, but I think we should refrain from pushing the biggest, fanciest, most expensive thing onto our customers. The end result should be driven by the customer's requirements and the value realised, and we should work towards that. So: do the right thing.
(18:36):
The second point is that we should do things the right way. For example, when we deliver a project with our partners, instead of choosing the cheapest option: in a project we delivered at StarHub, we chose ZTE hardware, OCP, as well as Intel CPUs. We knew that at the very beginning these might not be the most affordable options, but from a mid-to-long-term TCO perspective, these are trusted partners; they have clear roadmaps and they certainly have the ability to deliver on time. That's my second point: do things the right way. And the third point is that we need to find our own niche, our own positioning. The telco industry is maybe not at the very front when it comes to large language models, but we have a natural connection to our customers and to enterprises, and we need to follow the rules and regulations of the local markets we serve.
(19:31):
Those might not be factors that OpenAI is thinking about, but this is the situation we are in. This is our advantage, the unique capability and information we have, and we need to leverage it. Find our niche, find our position in this AI long game. Thanks.
Charlotte Kan, TelecomTV (19:50):
Some very, very valid points here. Thank you very much for that. Another question now for you, Volkan at StarHub: as AIOps moves from automation to autonomous, agent-based operations, what should we do to make AIOps solutions production-ready at scale?
Dr Volkan Sevindik, StarHub (20:17):
So my priority is more on the guardrails around the LLM, but as a corporation we are not just using LLMs. We are using small language models and non-language models that run on x86 architecture rather than GPUs. So we are not chasing the hype and making a significant amount of GPU investment. We are using nano-models, each doing one thing perfectly, running fully end to end on x86 architecture. Of course, that comes with a huge number of agentic AI nano-models we have to run, but that's more of a communication and integration issue. As a corporation, what we are doing today is leveraging our x86 architecture from Intel and various other chip manufacturers, using the current hardware first rather than investing in NPUs and GPUs. And once we reach the limit of our current hardware, of course, we might move to neural-processing and GPU-level augmentation.
(21:32):
But what I believe is that the current x86 infrastructure is more than enough to run a significant number of AI models. So that's the first thing. The second thing is that once you increase the number of agents, the communication between agents becomes an issue. How do you solve that? The way we solve it, with our partners at Red Hat, is by developing one LLM engine to manage all these nano and small language models. The LLM engine works like an orchestrator at the application layer, managing all these nano-models and small models at scale. That's what we are doing today; that's what is in production. The third part is: how can I create guardrails around the models? Because a model cannot really work independently; it shouldn't make certain decisions independently. We have a policy engine, also running on x86 architecture, that works like a PAM, a policy access management system, enforcing policies on the decisions each nano-model should or can make.
(22:49):
So from that perspective: starting from nano-models, moving to small language models, using an LLM for orchestration and another LLM, maybe a small-scale one, for policing all these systems, and running them on current x86 architecture, that is what works today. For tomorrow, I don't know; maybe the complexity of the LLMs and SLMs will increase enough for us to move to GPU and NPU-level architecture. But from what I can see today, it's not really needed, and current CPU architectures are advanced enough to handle these tasks at scale.
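The pattern described here, single-purpose nano-models behind an orchestrator, with a PAM-style policy engine vetting each proposed action, can be sketched as follows. All names, the example nano-model, and the policy rules are hypothetical illustrations of the pattern, not StarHub's actual implementation.

```python
# Illustrative sketch: an orchestrator routes tasks to single-purpose
# "nano-models"; a policy engine vets each proposed action before it
# can be executed (the guardrail layer).

class PolicyEngine:
    """Guardrail: decides which actions a given model may take."""
    def __init__(self, allowed):
        self.allowed = allowed  # model name -> set of permitted actions

    def permits(self, model: str, action: str) -> bool:
        return action in self.allowed.get(model, set())

class Orchestrator:
    def __init__(self, policy: PolicyEngine):
        self.policy = policy
        self.models = {}  # name -> callable taking a task dict

    def register(self, name, model):
        self.models[name] = model

    def run(self, name: str, task: dict) -> str:
        proposed = self.models[name](task)           # model proposes an action
        if not self.policy.permits(name, proposed):  # guardrail check
            return f"blocked: {name} may not '{proposed}'"
        return f"executed: {proposed}"

# A nano-model that does exactly one thing: decide whether to restart a cell.
def cell_health_model(task: dict) -> str:
    return "restart_cell" if task["error_rate"] > 0.5 else "log_only"

# Policy: this model may only log; restarts would need separate approval.
policy = PolicyEngine({"cell_health": {"log_only"}})
orch = Orchestrator(policy)
orch.register("cell_health", cell_health_model)

print(orch.run("cell_health", {"error_rate": 0.8}))  # blocked by policy
print(orch.run("cell_health", {"error_rate": 0.1}))  # executed: log_only
```

The key design point is that the guardrail sits outside the model: the nano-model is free to propose anything, but only actions on its allow-list ever execute, which is what makes the behaviour auditable at scale.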
Charlotte Kan, TelecomTV (23:27):
That's the point you made earlier. It's not about investing left, right, and centre, but using existing resources and capabilities to deliver that change.
Cristina Rodriguez, Intel Corporation (23:36):
Yeah. Let me add to that. I totally agree with you. There is obviously the economic part of it: for AI to be deployed, it has to make economic sense too. And to do that, you need to figure out the right compute for the right workload. If you have a network and an infrastructure today that can do the job and run the AI workload, use it. You don't need to deploy additional components, expand further, or consume more power. So I think that's an important piece of making sure we get this going and do it in a way that makes economic sense.
Charlotte Kan, TelecomTV (24:27):
Thank you very much for that. Laurence.
Laurence Fejit, Red Hat (24:29):
Yeah, no, I was going to add onto that as well. I think the question really taps into several dimensions. There is obviously the shift from scripts to agents, but also, critically, the trust and transparency gap: if autonomous networks are not predictable, they will fail at scale. So making AIOps production-ready at scale, as it transitions to autonomous, agent-based operation, really requires grounding the intelligence in a cloud-native, process-driven control plane, rather than having it distributed and siloed across different AI models. By running AIOps agents on Red Hat OpenShift and Red Hat OpenShift AI, we help partners such as ZTE and Whale Cloud, but also telco customers, to scale autonomy across all their domains while maintaining explainability and compliance. And I think this is really important, because that's what will give customers such as StarHub the confidence to move from their current AI experimentation stage to full-fledged, trusted, fully scaled networks.
(26:19):
So to summarise: I think the key is to leverage the existing cloud-native infrastructure and transform it into an AI-native infrastructure. That's what will enable that shift and the full scaling of AIOps into the network.
Charlotte Kan, TelecomTV (26:51):
So we are hearing it yet again. You have to leverage your existing resources, infrastructure, everything you've got in order to deliver that change to autonomous operations. We'd love to pursue this conversation, but it's very busy here at MWC. Of course, you're running from meetings to meetings. So I have one more question maybe to wrap up this conversation. And it's related to our theme here today, this trilateral shift, AI, AIOps, sovereign cloud, imperative, et cetera. I'd like to ask you all maybe to come up with a word, a sentence to tell us what you think success will look like to address this trilateral shift. What's key here? What's really important, starting with you, Volkan.
Dr Volkan Sevindik, StarHub (27:44):
For me, what's really important is selecting the right models to run on your current infrastructure, solving one issue very well, and scaling up after that, rather than making the hardware investment first and finding the models later. Right now the industry is moving in a direction where the models select the hardware they want to run on. Models are so smart now; there is research where models adapt themselves, based on the objective, to run on even a very small CPU or microcontroller. So I think the market will only expand if models can run on current architecture, and that also helps companies adopt the models rather than making billions of dollars of GPU and hardware investment. So that's what I would say: use your current infrastructure first to solve one problem very well, and then you can scale up easily once you've proven the value.
(28:59):
Thank you.
Charlotte Kan, TelecomTV (29:00):
Thank you. Laurence.
Laurence Fejit, Red Hat (29:03):
So I would say I see it as a gradual process, one that requires operational trust as well as architectural openness and incremental autonomy, rather than trying to go from day one to full autonomy. I think it's really a journey more than anything else. So that's what I would say. Thank you.
Quan Wang, ZTE (29:43):
Okay. For me, I think the word is open. The first reason is that openness is why we all sit here together and cooperate with each other. For operators, open is very important: it means an operator like StarHub can choose different vendors for hardware, platform, and application. For the platform, it means it can onboard different applications and work with core networks such as ZTE's. And a lot of people are talking about open-source models; for the applications, we can choose among different models, and that's highly efficient. For the hardware, operators also want more choice, from Intel, Nvidia, AMD. That's also very important for openness. So I think the keywords for me are open and cooperation.
Charlotte Kan, TelecomTV (30:53):
Thank you, Quan Wang. Cristina.
Cristina Rodriguez, Intel Corporation (30:56):
Yeah, I'll be brief. I'll say two things. Number one, step one: we need a fully virtualised, software-defined network, all the way from the cloud to the core, to the RAN, to the edge. I think the edge is going to become very important; we see a lot of momentum and a lot of use cases ready to go right now at the edge. And the second thing is the right compute for the right workload. Don't overdo it: if you have the technology already deployed, or a more cost-effective way to run your workload, use it. The right compute for the right workload. Thank you.
Wu Zhouxi, Whale Cloud (31:36):
All right. I think my keyword is: stay consistent. Stay consistent with the principles I've just mentioned. It's about staying in your own lane and focusing on the markets you really want to serve. For me personally, it's about providing flexible, tiered solutions to customers who want to build sovereign AI and sovereign cloud at different scales. And second, we should stay consistent in developing our technology and finding our core competence, so that in the long game we can sustain, build, and serve society and our customers in the markets we're in. Thank you.
Charlotte Kan, TelecomTV (32:16):
Thank you very much. So we promised you a very pragmatic, practical, and very open discussion. I think we delivered here this morning. A big round of applause, please for all our fantastic speakers here on stage. Lots of really important takeaways and points here, but thank you for sharing your insights with us this morning. And many thanks to you and the audience as well for your attention this morning. Enjoy the rest of your day, day four of MWC. Thank you.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Panel Discussion
At MWC26, experts from StarHub, Red Hat, ZTE, Intel and Whale Cloud share practical strategies for embedding AI in telecom operations, enhancing network efficiency and addressing data sovereignty.
Featuring:
- Cristina Rodriguez, Vice President & GM, Network and Edge, Intel Corporation
- Laurence Feijt, Director Telco Partner Sales, APAC, Red Hat
- Quan Wang, Vice President and Deputy GM, Compute and Core Networks, ZTE Corporation
- Dr. Volkan Sevindik, Chief Technology Officer, StarHub
- Wu Zhouxi, Head of Cloud Solutions, Whale Cloud
Recorded March 2026