HPE accelerates AI workloads with cutting-edge datacentre solutions

Clarence Reynolds, TelecomTV (00:01):
I'm Clarence Reynolds at MWC26, and as AI workloads scale, the network is no longer just a utility. It has become the critical fabric that determines the performance and efficiency of distributed GPU clusters. Amit Sanyal, Senior Director of Product Marketing for Data Centre Networks at HPE, joins us to discuss how HPE is building the connective tissue for the next generation of AI. Amit, thank you for being with us today. What is the fundamental role that networking plays in determining AI performance and scale?

Amit Sanyal, HPE (00:34):
AI is a highly distributed computing problem in which hundreds or thousands of GPUs work together; they cannot operate on their own. So networking plays a key role, and not only during training but also during inferencing, especially now with reasoning models, agentic AI and mixture-of-experts architectures. Even inferencing needs many GPUs interacting with each other. And the networking characteristics for AI are different from those of traditional workloads: AI needs much more bandwidth, there is far more congestion, and latency and tail drops significantly degrade GPU performance. You do not want to keep GPUs, which are very expensive resources, waiting for the network to finish. Networking is critical to the performance of AI.
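The point about tail behaviour can be illustrated with a short sketch (the numbers are illustrative, not HPE measurements): in a synchronous collective such as an all-reduce, a training step completes only when the slowest GPU's traffic arrives, so a single congested link can idle the entire cluster.

```python
def step_time(comm_times_ms):
    """A synchronous all-reduce finishes only when the slowest
    GPU's transfer lands, so step time is the max, not the mean."""
    return max(comm_times_ms)

def utilisation(compute_ms, comm_times_ms):
    """Fraction of each step the GPUs spend computing rather
    than waiting on the network."""
    return compute_ms / (compute_ms + step_time(comm_times_ms))

# Hypothetical cluster: 1,024 GPUs, 10 ms of compute per step,
# 2 ms per network transfer when the fabric is uncongested.
comm = [2.0] * 1024
print(f"no congestion: {utilisation(10.0, comm):.0%}")   # 83%

# One congested link delays a single GPU's transfer to 20 ms;
# every other GPU now sits idle waiting for it.
comm[0] = 20.0
print(f"one slow link: {utilisation(10.0, comm):.0%}")   # 33%
```

One delayed flow out of a thousand cuts effective utilisation of the whole cluster by more than half, which is why congestion management and tail-latency control matter so much more for AI fabrics than average bandwidth alone.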

Clarence Reynolds, TelecomTV (01:26):
What specific networking solutions does HPE offer to optimise these intensive AI workloads?

Amit Sanyal, HPE (01:32):
We have a purpose-built networking-for-AI solution that is open, AI-optimised for the best performance, and simple to operate. We offer a comprehensive portfolio of switches, routers and security, including the QFX switches, the PTX routers and the SRX firewalls. In fact, we were the first to bring 800G switches and routers to market, and we have just introduced the next generation, which supports 1.6 terabits per second per port of switching. In addition, we have AI-optimised software that takes full advantage of that hardware and delivers congestion management, better load balancing and all the features required to give you the best performance. On top of that, we simplify operations using Apstra, our intent-based networking software, to quickly design and deploy networks, and with Mist we use AIOps to simplify operations even further.

Clarence Reynolds, TelecomTV (02:43):
So what are the key highlights of the AI networking demo that you are showcasing here at MWC?

Amit Sanyal, HPE (02:49):
Let me show you the key things we are showcasing to customers. First, we are demonstrating how the AI fabric operates inside the data centre, with all the innovations we have built in to deliver the best performance. The second use case shows that networking plays a key role even when disparate data centres are talking to each other, or when you are doing inferencing at the edge. And finally, we are showcasing how we manage the fabric and simplify operations, so these data centres can be deployed quickly. We give you visibility, and we use AIOps to help troubleshoot problems, find their root cause and resolve them.

Clarence Reynolds, TelecomTV (03:51):
What can you share about the real-world customer deployments and the impact that your solutions are having?

Amit Sanyal, HPE (03:55):
Yes. We have a range of customers, from the large cloud providers and model builders to neoclouds and now even enterprises. Let me give you some examples. One large model builder is deploying us to power its AI data centres, which contain more than half a million NVIDIA GPUs, all connected through our Juniper networking switches. A second example is a neocloud that is the third largest by number of customers; there we are powering both AMD and NVIDIA GPUs, across both the front-end and the back-end networks. More recently, a customer in Korea has replaced its InfiniBand switches with our Ethernet-based switches and is showing that performance is as good or better over Ethernet, which is the open standard most people prefer to use.

(05:07):
So these are three examples where they are using us to power the AI applications to serve their end customers.

Clarence Reynolds, TelecomTV (05:15):
Amit, thank you for your insights today.

Amit Sanyal, HPE (05:18):
Thank you so much.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Amit Sanyal, Senior Director, Product Marketing for Data Center Networks, HPE

As AI workloads scale across distributed GPU clusters, networking becomes critical to performance. Amit Sanyal explains how HPE delivers purpose-built AI fabrics with high-bandwidth switching, intelligent congestion management, and AIOps-driven operations to optimise AI training and inference across modern datacentres.

Recorded March 2026
