SUSE’s open approach to modernising telco networks

Clarence Reynolds, TelecomTV (00:08):
I'm Clarence Reynolds at MWC26. AI-native, cloud-native networks are moving from slides to deployments, and open source is at the heart of that shift. Gary Mackenzie, General Manager, Telco Business Unit at SUSE, joins us to discuss how operators can modernise without lock-in from core to edge. Gary, thanks for being with us today. So Gary, here at MWC, AI-native networks are everywhere. How is SUSE helping operators move from traditional infrastructure to truly AI-ready, cloud-native environments?

Gary Mackenzie, SUSE (00:42):
Well, it's a great question. I think one of the first things that occurs to me is that we're realising, and we're talking to operators about, how AI-native networks aren't just about bolting GPUs onto existing networks. It's a large transition process. It's about how we build those networks from the ground up and how we deliver AI into them in a sustainable fashion. We sometimes talk about the dimensions of "for", "on" and "with" when it comes to AI and networks. And we're doing a lot of different things there; how we enable AI to improve existing functions is a different conversation to how we enable telcos to deliver AI to their users. So there are a lot of different dimensions to the question. A lot of what we're doing reflects SUSE's general mission in telco: to do the boring things well. And that's the classical problem for an infrastructure company.

(01:33):
And we want to extend that to AI at the end of the day. So that means making sure that the infrastructure is there for AI, and that it's seamless and integrated with our other products, so that it's easy to run your new AI-based functions alongside traditional network functions on the same infrastructure. You have the same primitives in terms of management and deployment and such, and the same operational workflows. That's a lot of what we're doing to make AI-native networks real. It also comes with new features and new technology, and with how we build those into the product. So as I said, it's not about bolting it on top. And we do see, in some places, an interim approach where an AI-native network is a shim layer on top of an existing network. We think it's very much about building AI into the functions at the base layer rather than doing it on top.

(02:23):
We think that's the longer-term scalable pattern that we'll see emerging, and that's what we're focused on at SUSE.

Clarence Reynolds, TelecomTV (02:28):
So telcos need to modernise, but they also need to keep costs in check. How does SUSE help them innovate faster without falling into vendor lock-in?

Gary Mackenzie, SUSE (02:38):
It's a traditional problem, really. Revenues remain stubbornly flat while we have all these new challenges. We have VMware modernisation and VMware replacement strategies. We have the need to open up new revenue streams, and indeed AI is a key one of those. How do you do that? The investment required in the AI gigafactories we're seeing is vast. So there are a number of ways we're helping, and a lot of that is about operators getting more from what they have and being more efficient with the base processes. We see that in standardising those workflows. I talked about ensuring we have the same primitives, whether we're deploying a network function or a new AI-based service that might be new to the customers and to the network. Having the same operational paradigms there, so we're not reinventing the wheel and not adding a new cost centre to support it, is a key part of that.

(03:31):
And that's going to be one of the themes I think as we see AI being adopted. The people who are successful in that space are going to be the ones who do it in a way which is scalable and has an economic model aligned with it.

Clarence Reynolds, TelecomTV (03:44):
Gary, it seems that AI is reshaping operations from network performance to customer experience. How are you working with partners like Infosys to develop AI-driven transformation for service providers?

Gary Mackenzie, SUSE (03:56):
Well, it's a fascinating one. When we see AI going into service providers, going into networks, there are still a lot of open questions about who's going to provide that functionality. Will it come from net-new start-ups, new vendors entering the ecosystem? Will it come as functionality in the products from existing vendors? Or, in the case of Infosys, is it going to come from GSIs who are building functionality to plug into the existing ecosystem? We have a great demo on the booth, which shows that in a very specific use case, which is for the RAN. Now, we know about 80% of power utilisation in typical networks goes into the radio network. And Infosys has built this demo, this proof of concept, on top of the SUSE AI product. So we're providing the AI infrastructure underneath. They built this demo, which uses those models and a bunch of inputs from the radio data and the training models to determine how they can shut down sectors and the like at different times of day and under different usage patterns, to reduce that power usage.

(04:55):
It's a really interesting real-world use case. It doesn't require a step change in the entire network. It's something that can be plugged into existing environments, and it's something which is deliverable in a relatively quick timeframe. I think it's a really good example of how AI can improve networks today. And it's also a great example of how we can work with partners to deliver functionality.

(05:19):
I've said many times that the industry is a village, and it does take that village to deliver these things. Working together with our partners is a far more effective approach than one vendor going out on their own, and it makes far more sense to us.

Clarence Reynolds, TelecomTV (05:36):
I'm seeing edge and distributed cloud as being big themes this year. What role does SUSE play in helping operators deploy and manage infrastructure at scale from core to edge?

Gary Mackenzie, SUSE (05:46):
So it feels like edge has been a big topic for at least five years, maybe ten now. I think the difference now is that the concerns are real and the deployments are real. And what that means is we're starting to think about the practicalities of this. It's not what an edge looks like, but how we do this not a hundred times in the lab, but 10,000 times in the real world. And not just how we do it, but how we maintain it. How do we upgrade it over time? How do we keep it operational for what is likely a decade or more in the field? So that's a big part of the puzzle that we're pleased to see being solved. It's the maturity of edge, I think, at this point, and the maturity of the workflows and deployments. And there's also a big connection to automation, which I'm hearing a lot about at this MWC.

(06:36):
It seems to be one of the themes of the week, both automation in a traditional sense, but also what automation looks like in the AI age, if you like, and agentic approaches to automation. That's become a really interesting topic because when you have that sort of scale, you can't be deploying or managing those sites manually. It has to be an automated process. And plugging that in and being able to ensure you can do things in an automated, templated fashion, and you can do that for potentially thousands of nodes at once is the key for us. And SUSE's been a pioneer, I would say, in the edge space for a long time. So I think it gives us a unique advantage being able to do that both for the telco industry, but also for the broader edge, whether we look at industrial, transport, healthcare, there's a lot of different sectors with a lot of overlap in their requirements.

Clarence Reynolds, TelecomTV (07:23):
Sustainability and energy efficiency are now strategic priorities. How is SUSE helping operators build greener, more efficient networks?

Gary Mackenzie, SUSE (07:31):
I think it's important when we say sustainability and energy efficiency that, while it can be easy to link those back to ESG, it's also a margin goal for operators. It affects the bottom line. And in practical terms, there are a number of levers that we can pull there and that we're helping operators to pull. One of the things we see as a big theme is driving up the utilisation levels of the environment. For a long time, networks have been sized for peak demand on one day a year, maybe only for a few hours on that day. And we have networks running at 20% utilisation for months at a time, and that's not a sustainable pattern. So finding ways, using the agility we get from technology like Kubernetes, to drive up the utilisation levels of the environments is a big part of that.

(08:21):
The hardware is going to be there; ensuring we're using it effectively is a big part of that. So that's certainly one lever. We've also done a number of pieces of work to look at energy efficiency and where energy goes in environments. We did some work with Orange and Dell previously, which we've shown at MWC in the past. That highlights that when we talk about sustainability, it's not just the energy that goes into running the servers that makes up their carbon footprint. A lot of it goes into the manufacturing of those machines, and, interestingly, it varies by country. If you look at a country like France, where a lot of the energy is from nuclear power, the energy is very clean. So a lot of the carbon footprint comes from manufacturing the servers. And when you dive into that, you see that the carbon footprint actually comes overwhelmingly from memory.

(09:11):
So you can suddenly say to operators: if you can find an application which uses less memory, you're materially reducing the carbon footprint of that application. Being able to give people those pointers is a big step change in how we can help them. We can't necessarily reduce the footprint ourselves, but we can tell people more accurately where the footprint is and how they might look at reducing it. That's a necessary part of the puzzle, and observability is always key so that we have the data to act on.

Clarence Reynolds, TelecomTV (09:41):
Gary, thank you for your insights today.

Gary Mackenzie, SUSE (09:43):
Thank you very much for having me.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Gary Mackenzie, GM Telco Business Unit, SUSE

Gary Mackenzie of SUSE discusses how operators can modernise networks for AI and cloud-native deployments, focusing on open source, efficiency, and sustainability.

Recorded March 2026
