To embed our video on your website, copy and paste the code below:
<iframe src="https://www.youtube.com/embed/gosrlWvRUrY?modestbranding=1&rel=0" width="970" height="546" frameborder="0" scrolling="auto" allowfullscreen></iframe>
Clarence Reynolds (00:01):
I'm Clarence Reynolds at MWC26. 12 months ago, AI-driven networks and edge intelligence were mostly theory. Today, they're live deployments delivering measurable gains. Cristina Rodriguez from Intel Corporation's wireless access network division and Fernando Castro Cristin from HPE's Telco Infrastructure Business Unit returned to Barcelona to discuss what's changed, where AI now fits across core to edge, and what's next for the 5G core. Welcome back again.
Fernando Castro Cristin, HPE (00:30):
Good to see you.
Cristina Rodriguez, Intel (00:31):
Great to be here.
Clarence Reynolds (00:32):
Thank you both for being with us. So it's been a year since we were last in Barcelona. Has anything happened over the last year?
Fernando Castro Cristin, HPE (00:40):
Many things changed. So last year, and even a little bit before, we were talking about new technologies. We were positioning ourselves with the great partnership that we have with Intel, all the new technologies around the Xeon 6 SoC. There was a plan. What I'm very proud of in our collaboration is that we executed the plan and the promises that we made to the market, with launching systems. We have them available here. They are exactly how we wanted them to be. They address the market. They address the needs. And we are now executing what we promised. And I think in our business, in telco, it's even more important because it's really critical for everybody. We cannot just talk and not execute. Our partnership is a demonstration: we announce, we work together, we execute, it's available. And we have seen a lot of interest in all the platforms.
(01:48):
I think it's right-sized. With Intel, we did all the engineering work to make it happen. It's the right size, the right technology, at the right time. And I'm really proud of what our teams did.
Cristina Rodriguez, Intel (02:01):
I couldn't have said it any better.
Fernando Castro Cristin, HPE (02:02):
Thank you.
Cristina Rodriguez, Intel (02:03):
That is what we do. We do what we say we're going to do. A year ago, actually, we were here, the three of us, exactly a year ago. And we talked about it and we had an idea. We had just released our Xeon 6 SoC first wave, the 42-core version. And we were talking about our plans and what we needed to do and what the industry needed. Fast forward a year, we now have our 72-core version, the next wave. And they're already in your Gen12 servers, ready for the industry. Tremendous traction with our customers. And this is something that is going to make a difference for the operators and the telco industry.
Fernando Castro Cristin, HPE (02:47):
And talking about operators, it's an ecosystem. They are part of the innovation. So I'm really proud that we work together not only at the engineering level, but also in the go-to-market. We onboarded customers into the engineering process of our servers to make sure that their requirements, their inputs, were taken into consideration. When I look at what has been made and where we are today versus one year ago, it's tremendous work. So we used our knowledge, several generations already at the telco edge, several generations of working with customers, all that feedback integrated, a tremendous partnership with Intel, with Cristina's team. And I think we now have the results: as you said, what they need now is now available. So all the transformation that we were talking about and hoping for is happening now. I'm not the only contributor to that. It's a set of key partners working together.
(04:00):
And I think that's one of the changes. And that's a change that we have been seeing in telco for years: more and more people working together to achieve results that enable the transformation without risk.
Cristina Rodriguez, Intel (04:14):
It's a collaboration that you and I have, and our companies have. And then the collaboration that we together have with our customers, with the operators, with the teams, with the ecosystem, that's fantastic. And just to give an example, because you were mentioning the kinds of things that we have accomplished and what we're bringing to the industry: with this Gen12 server, we are solving a problem, a total cost of ownership problem, a performance-per-power requirement. Before this generation, the operators needed to deploy more than one server per site at the telco edge. With this generation, the Gen12, we reduced that to one server per site. Now imagine the benefit that that brings to the operators. That is quite an accomplishment in the industry.
Fernando Castro Cristin, HPE (05:14):
Density is a key element of the innovation. It's not only the processing capabilities with the Xeon 6 SoC, it's also the NIC cards. The throughput that we can extract from very dense systems is impressive. You were mentioning ROI, and density is a key part of it. We have practical examples of moving from three servers to two, and even to one. From three to one, it's extremely important. So density, the throughput, the NIC cards that we have implemented, the processor, of course, it's a key part. I'm really proud of what the engineers have been doing.
Clarence Reynolds (06:00):
Where does AI fit into your core-to-edge strategy, and how does deploying AI at the edge differ from deploying it in a data center?
Fernando Castro Cristin, HPE (06:10):
AI. Everybody has to talk about it. I'm trying to talk about AI. So AI is a little bit like what we were saying about edge before: it means everything and nothing. I'm a practical guy, and AI has always been something that has evolved toward helping operations optimize the network. We have a demo here of optimizing the signal. So the first implementation that we see, that is visible, that is demonstrated here, uses the same technology, no extra overhead, but modules driven by AI applications that can deliver benefits in the current architecture, not building AI factories below the antennas. It takes the benefits of all the AI engineering that has been done, deploys that, and makes sure it's more efficient. You mentioned ROI. Try to get the density of the network addressed, better signal processing, and, in the end, better customer service. The quality has improved thanks to AI.
(07:27):
So AI for the RAN, it's a reality. It's not something we want to do anymore; it's a reality. Very true.
Cristina Rodriguez, Intel (07:35):
And it's exactly something that we can do now, that we can deploy now in live networks. We don't have to wait anymore. We can do that. One of the things that we did, working with our partners and working with our customers: we were very intentional in our roadmap, and every generation of our roadmap has gotten better and better, more able to address all the requirements of the network of the present and the future. But one thing in particular, a feature that we have in our Xeon 6 SoC. Well, let me summarize. In the Xeon 6 SoC we have, as I say, 72 cores. We have integrated Ethernet, which we talked about. We can expand that with external NICs, but we have integrated 200-gig ports. We have all the security that we normally have in our Xeon, all the trust domain extensions and secure platform features.
(08:32):
And we have, and this is very important, we have AI built in. And by having that, we're able to do quite a bit of AI inference in the network. And this is true in telco and at the edge. By the way, the same server can be deployed either at the telco edge or the enterprise edge. When you look at the types of applications and uses of AI that you see in telco and at the edge in general, you're not going to be doing training there. You're going to be running models that are small models. And the type of inference that we can do, we can do nine billion parameters. That's a lot more than the models that you find in the RAN. We can do quite a bit of AI inference without having to use additional components, without having to spend more money on additional components or consume more power.
(09:28):
So it's really giving us what we need today, so that we can deploy and start getting the benefit of the technology, giving us the learning and getting us on our way to the future. It's fantastic.
Fernando Castro Cristin, HPE (09:42):
And I will add one more thing, if you don't mind. So all those modules, they also generate more traffic. And one thing that we achieved since last year is also the integration of Juniper. So the integration is done, all the employees are HPE employees, the technology is integrated, and we are now starting to work on specific workloads on those servers that include routing technology. And the combination, you were asking earlier about what changed, the combination with HPE Networking allows us to go even further into those different inferencing modes that can generate traffic, and we can secure that traffic. So the Juniper part is also accelerating all the capabilities we can deploy.
Clarence Reynolds (10:34):
Indeed. So let's talk about the core data center. Where are we now and where will we be next year?
Fernando Castro Cristin, HPE (10:40):
Wow. So I'll start.
Cristina Rodriguez, Intel (10:43):
Please.
Fernando Castro Cristin, HPE (10:44):
Everybody is very excited about RAN, but without core there's no RAN. And we have to remember all the legacy of the core networks, from IMS to now 5G SA, and the future that will come with 6G. The core is the core; the name itself says it. There's a lot of innovation also in the core. There's a lot of refresh and new capabilities driven by containerized platforms and the evolution of the software that allows the mobility we were talking about earlier to happen also in the core. So there is a massive transformation at the core level that is sustaining the potential we now have with AI and the usage of AI in different locations of the network. So what we can expect is more agility, more dynamism, more redistribution of the different applications within the core and from the core to the edge, and a lot of security that will be implemented.
(12:03):
It was already very secure, but it will be even more secure with all the different assets that we have. So that's what I see: real modernization of the core, supporting more capabilities at the edge, supporting more AI inferencing. And when I say inferencing, I'm not only saying inferencing on GPUs; it's inferencing on CPUs too, like Cristina just said. It's a reality today. So it's a mix. It will be GPUs, it will be CPUs, and we are proud to be able to enable all that.
Cristina Rodriguez, Intel (12:35):
It's the right compute for the right workload. Exactly. Let's match one with the other. And on the core, I'm going to say we're super proud of our Xeon 6 SoC in your servers: high-density computing, lower power, great total cost of ownership. We're reducing the amount of infrastructure that needs to be deployed because of the high performance per rack. Really, really proud of that. And when you think about the future, we're already working on the next one. We have it right here in our booth, the follow-on to this one, the next Xeon 6 series. So we're super, super proud of what we're doing in core.
Fernando Castro Cristin, HPE (13:23):
We had great customer successes with the current E-core platform. Last year it was new. Now it's deployed, successfully deployed, and the performance is impressive. E-cores: the performance is impressive. So the benefits of the density and the lower power consumption did not impact the performance. We were very surprised by how performant it is. So you can have evolution with lower operational cost, but no compromise on performance. That has already generated a lot of very positive successes with customers.
Clarence Reynolds (14:06):
Cristina, Fernando, thank you so much.
Fernando Castro Cristin, HPE (14:09):
Always a pleasure.
Clarence Reynolds (14:10):
Let's do it again next year.
Cristina Rodriguez, Intel (14:11):
Let's do it again, a year from now.
Fernando Castro Cristin, HPE (14:13):
It's written.
Clarence Reynolds (14:14):
That's right.
Cristina Rodriguez, Intel (14:15):
Thank you so much.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Cristina Rodriguez, Intel Corporation & Fernando Castro Cristin, HPE
HPE and Intel discuss how AI is transforming telecom networks from core to edge. HPE ProLiant Gen12 platforms powered by Intel Xeon 6 SoC (system-on-chip) processors enable AI inference at the edge, improve compute density and network performance, and help operators reduce power consumption and infrastructure footprint.
Featuring:
- Cristina Rodriguez, VP and GM, Network & Edge, Intel Corporation
- Fernando Castro Cristin, Vice President and GM Telco Infrastructure BU, HPE
Recorded March 2026