Arrcus CEO on rapid growth and AI inference network fabric

Tony Poulos, TelecomTV (00:08):
Tony Poulos here for TelecomTV at MWC 2026. And it's about time we started to talk about AI networking. And to help me through that, I have Shekar Ayyar who is the CEO of Arrcus with me. Shekar, welcome.

Shekar Ayyar, Arrcus (00:22):
Thank you. Thank you for having me.

Tony Poulos, TelecomTV (00:24):
Look, in just a few years, Arrcus has gone from being basically an emerging network start-up to a central player in AI networking, and that's why I've come to talk to you. What are some of the key reasons for this growth?

Shekar Ayyar, Arrcus (00:36):
Well, first and foremost, you've got people with an insatiable demand for data centres. They're building out like crazy, and these data centres are going up globally. People are grabbing power and land everywhere and setting up data centres. And every time you build a data centre, you need the networking gear that goes into it: everything from top-of-rack switches to leaf and spine switches to the connections between the data centres. All of that is what we provide; Arrcus essentially provides the software fabric for it. Second, you've got telcos. Right here at the show, MWC, you see so many operators. The sad reality is that all these operators have built out their 5G networks, or are building them out, and are now puzzled over how to monetise them. That's where we come in: we bring a networking fabric that sits in their 5G environments and allows them to rapidly monetise new network services on top of that infrastructure.

(01:38):
And finally, of course, AI is the buzzword everybody has been using, but it represents a real change in how infrastructure is built: delivering answers, provisioning inferences, to people using technology that has never been available before. And we are excited about being the infrastructure providers for that.

Tony Poulos, TelecomTV (02:04):
Well, you've just introduced the Arrcus Inference Network Fabric or AINF. And as AI shifts from training to large scale inference, how would customers benefit from AINF?

Shekar Ayyar, Arrcus (02:18):
Yeah. So think about it this way: from a training standpoint, you're taking large models and training them in consolidated data centres. But once you have these trained models, what do you do with them? You have to convert them to results, convert them to inferences. And that conversion actually happens at edge nodes, at distributed points in the network. Now, unlike the web world, where all you had to do was keep a website up, and it was either up or it was not, really a discrete event, in AI inferencing you have to understand the specific result you're trying to deliver. An autonomous vehicle needs to figure out whether it hits a pedestrian or not. An oil rig decides whether it is hot enough to shut down the heating or increase the cooling.

(03:15):
A retail store point-of-sale device has to understand whether something is legitimately being bought or simply being taken past the cash register. These are all very different events. They require different latencies, different throughput capabilities, probably different power requirements as well. And finally, you have geopolitical fencing based on sovereignty, data requirements, consumer privacy and so on. The reason I'm mentioning all this is that in the AI inferencing world you have a richness of policy, and we have come to the conclusion that that policy cannot just live up at the application level. It needs to go all the way down to the networking level. That's where AINF, the Arrcus Inference Network Fabric, comes in, with policy-aware routing that lets you dial these policies individually in each of these instances. And that gives you complete fidelity of results transmission from your training nodes to your inferencing points.
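The policy terms Ayyar lists, latency, throughput, power and sovereignty, can be pictured as a per-workload filter over candidate inference endpoints. A minimal Python sketch of that idea follows; all names and fields here are illustrative assumptions, not Arrcus' ArcOS or AINF APIs.

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass(frozen=True)
class Endpoint:
    """A candidate inference point, as the fabric might see it."""
    name: str
    region: str        # used for sovereignty / geofencing constraints
    latency_ms: float  # measured round-trip latency to this endpoint
    gbps: float        # available throughput

@dataclass(frozen=True)
class Policy:
    """Per-workload requirements, dialled individually per use case."""
    max_latency_ms: float
    min_gbps: float
    allowed_regions: frozenset

def route(policy: Policy, endpoints: List[Endpoint]) -> Optional[Endpoint]:
    """Return the lowest-latency endpoint that satisfies every policy term."""
    eligible = [
        e for e in endpoints
        if e.latency_ms <= policy.max_latency_ms
        and e.gbps >= policy.min_gbps
        and e.region in policy.allowed_regions
    ]
    return min(eligible, key=lambda e: e.latency_ms, default=None)

endpoints = [
    Endpoint("edge-fra", "eu", latency_ms=8.0, gbps=100.0),
    Endpoint("edge-sin", "apac", latency_ms=3.0, gbps=400.0),
    Endpoint("core-iad", "us", latency_ms=40.0, gbps=800.0),
]

# An autonomous-vehicle workload: tight latency, EU data stays in the EU.
av_policy = Policy(max_latency_ms=10.0, min_gbps=50.0,
                   allowed_regions=frozenset({"eu"}))
print(route(av_policy, endpoints).name)  # edge-fra
```

The point of the sketch is that two workloads over the same set of endpoints can resolve to different routes, or to none at all, purely because their policies differ.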

Tony Poulos, TelecomTV (04:26):
Well, that makes a lot of sense, but you're also making several announcements here at MWC with customers and partners. What are the key highlights and what do they signal for the future?

Shekar Ayyar, Arrcus (04:36):
Yeah. So first and foremost, I'd like to point out an announcement we made today with our partner Fujitsu, and the Onefinity team within Fujitsu. If you take what I just said about AINF as the fabric for inferencing, it needs to be accompanied by the right kind of hardware processing and the right kind of optical connectivity. That processing comes from Fujitsu, which has announced a chip called Monaka, available in 2027: an Arm-based processor purpose-built for applications like this. You couple that with high-speed optics from the Onefinity team at Fujitsu, put it together with AINF from Arrcus, and you've got a best-in-class package for inferencing applications. The other announcement we have made is with our partner Lightstorm, a big provider of connectivity in the Asia-Pacific region.

(05:39):
They do everything from subsea cables to translating that into enterprise as well as hyperscaler connectivity. So Lightstorm's solution, coupled with AINF from Arrcus, brings best-in-breed connectivity to the Asia-Pacific region. We also have two other hardware partner announcements. One is with a company called UfiSpace, a Taiwanese device manufacturer: UfiSpace's devices, built on Broadcom silicon, along with Arrcus' ArcOS software, are now going to enable 800G+ port speeds for our customers. And finally, with our partner Lanner, we have announced, between Arrcus' software and Lanner's hardware, the ability for compute platforms to support SRv6-based applications for our customers. So this has been a really exciting show for us, bringing all these announcements out.

Tony Poulos, TelecomTV (06:43):
Well, it sounds very much that Arrcus is in the right place at the right time.

Shekar Ayyar, Arrcus (06:47):
Absolutely. Thank you so much.

Tony Poulos, TelecomTV (06:48):
Shekar, thank you very much.

Shekar Ayyar, Arrcus (06:50):
Wonderful to be here.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Shekar Ayyar, Chairman and CEO, Arrcus

At MWC26, Arrcus CEO Shekar Ayyar talks to TelecomTV about the company's rapid growth in AI networking, the drivers behind growing demand for data centre networking and the challenges telcos face in monetising 5G networks. Ayyar also introduces the Arrcus Inference Network Fabric (AINF), designed to support large-scale AI inference at the network edge with policy-aware routing, and new partnerships with Fujitsu, Lightstorm, UfiSpace and Lanner for AI inference and connectivity hardware and software solutions.

Recorded March 2026
