Clarence Reynolds, TelecomTV (00:00):
I'm Clarence Reynolds at MWC. AI is shifting network value from pure connectivity to intelligent fabrics that power distributed GPU clusters and new service models. That means routing architecture has become the battleground for revenue and scale. AE Natarajan, GM and SVP of HPE Networking Routing Infrastructure Solutions, joins us to map the next era of AI-native routing. AE, thank you for being with us today. From your perspective, what is the biggest shift in the AI connectivity landscape?
AE Natarajan, HPE Networking RIS (00:41):
Interesting. AI connectivity is now transforming in more than two dimensions. We always talked about AI where you scale up within a rack, increasing the power of the GPUs and everything else, and where you scale out within an AI cluster data centre. But now you're talking about scale across. And the reason you have to do that is that the GPU clusters that run AI workloads, whether it's ingest for training or it's inferencing, can no longer be held in one place; they have to be geographically distributed because of power, because of space, because of all the other challenges. And that scaling model is interesting because networking and network connectivity become critical and core, like what you mentioned in terms of how AI is transforming things.
Clarence Reynolds, TelecomTV (01:39):
So for service providers, where is the money in AI networking beyond just transport?
AE Natarajan, HPE Networking RIS (01:47):
This is good, because what is happening here with AI is that it has two parts: there is training, which happens in GPU clusters, but there is also inferencing. The service providers cater to their customers, large enterprises. Pick one, whether it's finance, whether it's manufacturing, whether it's any of them: all of them are embarking on this AI journey. What do they need? They need their models trained, or they need inferencing. And in both cases, training of models as well as inferencing, they need things to work consistently, give them the best SLAs and provide high utilisation of the GPUs. And AI causes the network architecture to fundamentally change. Why is that? Because AI sends more symmetric traffic. Previously, you could just send a few bits up to a Netflix server and it would stream a 4K video back at you.
(02:56)
Today in AI, whether for ingest or for inference, you could be sending rich content upstream and receiving rich content downstream, so the links have to be symmetric. The second aspect of it, as we are here in Barcelona, you just talked about jet lag. Guess what? Agentic AI doesn't fall asleep. It doesn't have jet lag, so it's always on. The load is always on and the traffic is always on. And the third part of it, back to my Netflix example: you can no longer cache traffic for AI. With Netflix, you could cache traffic, move it down to the edge and provide all of those facilities when you saw buffering and things like that. That changes. So the network architecture fundamentally changes for AI. And the telcos here do not have a challenge; they actually have a golden opportunity to enable this for their customers, where they can bring in their customers and provide the SLAs and the requirements that facilitate AI in their enterprises, whether it's training or inference.
Clarence Reynolds, TelecomTV (04:08):
You mentioned what hyperscalers need. How does your PTX routing system help them with those needs?
AE Natarajan, HPE Networking RIS (04:16):
So the PTX routing system is built on three fundamental principles. First is sustainability, which we apply not just to PTX but to every one of our elements. What does sustainability mean? The most compact routers, the most power-efficient routers, and routers that last long and give you future-proofing. The PTX routers that we just announced, the 12K routers, are the most compact, highest-density routing systems you can get in the market today, both the chassis-based system that we announced, the PTX 12K, and the fixed-form-factor systems. So they give you the most compact routers, not just for cloud providers but for anybody who wants to build AI clusters. And you talked about distributed GPUs and how to connect them. This gives you that facility, because everywhere you need GPU connectivity, you need the most compact routers, because space is limited.
(05:19)
You need it to be power efficient, because power is always constrained, and you need it to perform so that you can seamlessly transition from 400 gig to 800 gig, or tomorrow from 800 gig to 1.6 terabits. And our PTX router does this in a seamless fashion. That is why what we are doing here at MWC is so exciting.
Clarence Reynolds, TelecomTV (05:40):
So AE, in the next 12 to 24 months, what should operators look for as AI networking matures, and what is HPE's North Star?
AE Natarajan, HPE Networking RIS (05:51):
Interesting, because in the next 12 to 24 months, we've been spending a lot of time on training and training models, but inference is starting to pick up, and inference is much bigger. So this is going to continue to be a great inflection point for the networks. And you said it right: connecting these GPUs, with the network becoming the most important thing, is going to gain a lot of momentum. People are talking about building networks that are 4X, 5X, 10X bigger in the next three to five years. That is amazing. We have the best products. The PTX 12K that we announced here at MWC is a seamless product that gives you 800 gig connectivity in the densest form factors possible, from fixed form factor to chassis-based. And three or four months ago, we announced the MX301 as the edge inference router: a simple one-RU box for the edge, 1.6 terabits running 400 gig, with inline security in both the MX and the PTX.
(06:57)
Remember, with AI, security for your data is going to be critical. We have all of the ingredients, like I said, as a North Star with our products, because we've built them to be future-proof, with sustainability, with low power, with all the features, with inline security, and the ability to seamlessly migrate from 400 gig to 800 gig to 1.6 terabits and beyond. That is amazing. And we have products both at the centre and at the edge.
Clarence Reynolds, TelecomTV (07:27):
It's an exciting future.
AE Natarajan, HPE Networking RIS (07:29):
It's exciting. It's a really exciting future.
Clarence Reynolds, TelecomTV (07:31):
AE, thank you so much for your insights today. Thank you. Thank you.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
AE Natarajan, GM, SVP, HPE Networking RIS
As AI clusters expand beyond single datacentres, the network is becoming a primary determinant of AI performance and a new monetisation opportunity for AI infrastructure builders. AE Natarajan, SVP and general manager of HPE Routing Infrastructure Solutions, shares how telcos, hyperscalers and neo-clouds can design and monetise secure, deterministic AI network fabrics from edge to cloud. He explains how AI traffic patterns and infrastructure economics are evolving, why outcome-based connectivity and AI interconnect SLAs are gaining traction, and what it takes to scale distributed GPU fabrics reliably.
Recorded March 2026