Arrcus unveils AI networking partnerships and solutions

Shekar Ayyar, Arrcus (00:07):
All right. Welcome everyone. This is our second year doing this at Arrcus, where we are calling a small group of you and making sure that we are running through the announcements that we have made at the show. We are pleased to run through this and we have some of our partners in attendance here as well. This is a listing of the different announcements that we have made to begin with, and then I am going to spend a little bit of time on each one of these. Just before the show, we announced something called the Arrcus Inference Network Fabric, or AINF. In addition to that, we also disclosed that last year, in 2023, we had a year where we had three times the amount of bookings that we had the year prior. So growth in Arrcus is proceeding well, driven by, I would say, three things.

(01:04):
One is the data centre build-out: what is happening globally around data centres and how Arrcus is being used inside data centres, as well as between them, for things like top-of-rack switches, spines, leaves, and data centre interconnects. The second is what you are seeing largely at this show, which is around telecom and the carrier environment: what carriers are doing with their 5G environments and how they are building solutions to monetise them. So how do you create solutions that can be layered on top of the 5G environment? In particular, the work we have done jointly with SoftBank in Japan around the SRv6 mobile user plane has been instrumental in that part of the business. The third component is AI. Related to this, we announced AINF, the fabric for inference networking.

(01:59):
In particular, this fabric allows for policy-rich AI networking. The idea here is that inferencing is going to require complex policies from one inferencing node to another. These policies have to be adjusted depending upon latency requirements, throughput requirements, and what you need to do in terms of power, as well as sovereign requirements in terms of geopolitical fencing. All of this needs to guide how traffic steering happens. My colleagues, Sanjay and Keyur, will come up and talk to you a little bit more about this after I am done. In relation to this, we were also pleased this morning to announce a partnership between Arrcus, Fujitsu, and the 1FINITY team within Fujitsu. There is a new processor called the Monaka Processor that Fujitsu has announced. This is essentially an Arm-based processor purpose-built to address applications like AI inferencing.

(03:09):
When you combine that with the optics from the 1FINITY team within Fujitsu and the Arrcus AINF fabric, this becomes the foundation for customers to deploy their AI inferencing networks. When we do the Q&A, my colleague from 1FINITY, Kobayashi-san, will join us; I think Markisan is here as well, representing the 1FINITY team. The next announcement I want to focus on is what we announced with Lightstorm, our partner in Asia-Pacific; Amajit and his colleagues are here. This is a team that has built out a complete communications infrastructure for Asia-Pacific, from the subsea cable fibre right into the data centre rack infrastructure, with the ability to orchestrate these environments through their Polarin network software product.

(04:23):
That, combined with the Arrcus AINF solution, is now going to become the footprint for how we deploy connectivity to a number of customers in the Asia-Pacific region and globally as well, including a number of hyperscalers that want connectivity into that region. Then, with our partners at UfiSpace, we have collaborated to come up with the right kind of AI-optimised networking platforms. Specifically, UfiSpace, as you might know, is a white-box provider, and we both work closely with silicon providers like Broadcom and Nvidia. UfiSpace builds and packages the box around the silicon, and Arrcus's ArcOS software goes on top. That combination becomes a router or a switch that can substitute for anything you can get from one of the larger incumbents like Cisco, Juniper, Arista, or Huawei.

(05:36):
We are pleased every time we add to our family of supported solutions from Arrcus and a partner like UfiSpace, because we now have a rich hardware compatibility list that can accommodate all of these solutions in parallel with what the incumbents bring. Finally, I wanted to point out the announcement that we have with Lanner, and Jeans from Lanner is here. Lanner is essentially our compute infrastructure partner, and once again, they are building solutions for customers, packaged with the Arrcus network operating system, that will support the SRv6 MUP solution we talked about earlier. The idea is to deliver applications, network slices, and so on in a communications environment, but as a prepackaged solution that brings Lanner hardware together with Arrcus software for our customers.

(06:44):
Thank you again, Jeans. With that, I have a few other slides that I am going to run through quickly, because I have mentioned all of this already. This is, again, the first announcement we talked about. In particular, I also want to highlight the integration that Arrcus's inference networking fabric has with the frameworks on top. Most of you will know that up in the layer-seven world of the networking stack you have, in the context of AI, the LLMs resident on top. What we are doing is bringing the requirements down from that LLM tier to the layer-three networking tier. In doing that, we are working in concert with frameworks like vLLM, Triton, SGLang, and so on.

(07:37):
The benefits of this architecture will accrue to our customers in terms of lower time to first token, improved latency, and better throughput. The partnership around Monaka, as I said, is going to lead to a more secure and congestion-free environment for AI applications, particularly physical AI applications such as robotics and autonomous driving; the processor is built and packaged for that. In concert with the NICs and optics that 1FINITY provides, it is going to allow us to do this with very low latency and over long distances.

(08:26):
As for the partnership with Lightstorm, as we discussed, we are going to use Arrcus along with the Polarin NaaS platform, and given its ability to improve response times, accelerate multi-site expansion, and reduce total cost of ownership, we are pleased to be able to do this. We are going to start addressing customers immediately. Finally, looking at the two partnerships with UfiSpace and Lanner: the UfiSpace partnership, as I said, allows us to expand to new Broadcom silicon, in this particular case giving us ultra-low latency, as low as 259 microseconds, while scaling up to 1.6 terabits for high-density connectivity. This is progress for us on the disaggregated networking front. Finally, with Lanner, as I mentioned for the SRv6 use cases, our solution can translate IP and GTP traffic into mobile edge routing.

(09:45):
We can take these services and provide them as programmable services on top of the Arrcus Lanner platform. Let me pause here and then call over Sanjay and Keyur to see if they want to add any additional detail on what we announced with AINF, and then we will call our partners over to address some Q&A.

(10:08):
Sanjay runs our marketing and products, and Keyur is the CTO and founder of Arrcus.

Sanjay Kumar, Arrcus (10:13):
Thank you, Shekar. We are pleased to be here showcasing our new announcement, the Arrcus Inference Network Fabric. AI is moving into its commercial phase, which means agentic and physical AI are being deployed across industries, including retail, healthcare, automotive, and industrial. As a result, you now have a different paradigm: training was about building the intelligence, and inferencing is about delivering it. With these new applications come a number of new challenges. Number one to keep in mind is that inferencing is a very distributed phenomenon. Every query, every agentic interaction, happens across many different nodes, whether at the edge, in various data centres, or in the cloud, which means there is unprecedented scale in terms of the number of nodes as well as the bandwidth required.

(11:21):
There are also many challenges that network operators and enterprises face in delivering inferencing results. Some of those are: how do you deliver those results faster? How do you ensure that your latency is lower for real-time applications that are being delivered at the edge, whether it is public safety services or entertainment? Then you have data and sovereignty requirements as well. The challenge there is how do you make sure that your data is compliant with all of the requirements from a sovereignty perspective? Then how do you account for capacity, whether it is power or compute, and how do you maximise the utilisation by pooling your resources? All of these challenges require a new network infrastructure and a new network architecture, which is where traditional networking solutions do not match up.

(12:29):
What we are doing is introducing a new architecture and fabric called the Inference Network Fabric. This gives operators policies based on AI requirements around latency, sovereignty, power, capacity, and so on. We allow them to program these policies so that we can intelligently steer traffic to the right location and meet those requirements around latency, fast delivery of inferencing, and so on. By doing so, we are delivering measurable benefits in terms of lower time to first token, reduced latency, and reduced cost per inference. We do this with a new approach to connecting all of these different workloads together and delivering policies that help you intelligently steer that traffic. I would like to hand it over to Keyur now to talk a little more about the technology.

Keyur Patel, Arrcus (13:32):
Thank you. You said it well, Sanjay: inferencing is highly distributed. As you look at this phenomenon, what it fundamentally requires of the network is that it be a lot smarter. It is not just about forwarding data packets anymore; it is about understanding the intent behind forwarding each packet. As Sanjay said, you may have different LLMs distributed at different points in the network, with different requirements around load, power, latency constraints, throughput, jitter, or AI governance. How do you bring these policies together in a way that builds a fabric at scale, one that is highly programmable and gives you deep telemetry, so that the network can self-adjust?

(14:31):
This is what we have done with AINF. Networks prior to this were not designed to incorporate policies like these; this is a fundamental shift that inferencing will bring. Even more importantly, how do you make sure you have a network that can carry inference workloads as well as non-inference, non-AI workloads together, in a way that keeps them segregated and yet delivers quality of service?

Sanjay Kumar, Arrcus (15:07):
At the end of the day, it is about delivering business outcomes and measurable benefits. Research shows that with this kind of approach, we will be able to drive about a 60% reduction in time to first token, which improves throughput for any operator; about a 40% reduction in end-to-end latency for delivering real-time applications at the edge; and about a 30% reduction in cost per inference, which makes it economically viable for network operators to deliver this as a service while maintaining all of their service-level objectives, driving monetisation, and doing so in a more efficient and cost-effective manner.

Keyur Patel, Arrcus (15:53):
If you look at it, this makes the network a critical connectivity piece when it comes to deploying inference at scale. As you build and deploy a fabric like AINF, it allows organisations to monetise inferencing in a way that gives their customers a good user experience, which is what these organisations are after.

Sanjay Kumar, Arrcus (16:24):
As you look ahead, we are pleased to have Lightstorm as a strategic customer that is going to use something like AINF to deploy in their own networks and deliver it as a service. Also, we are pleased to have the collaboration with Fujitsu and 1FINITY from a future-looking perspective, as Shekar was saying, with the Monaka processor that is purpose-built compute for inferencing, and then you have AINF that delivers the network, and you have 1FINITY for the optical transport for secure long-distance connectivity as well. With that kind of collaboration and combination, we will be able to deliver a secure sovereign AI infrastructure that is ready for customers to deploy. These are some things ahead for us, and we look forward to it.

Keyur Patel, Arrcus (17:21):
Needless to say, an intelligent network.

Shekar Ayyar, Arrcus (17:24):
Thank you, Keyur. Thank you, Sanjay. Now I think we will open it up to Q&A. Maybe Amajit, you can come over, and Kobayashi-san, come on over. We have 1FINITY represented here, we have Lightstorm represented here, and then we will take questions for Arrcus as well.

Amajit Gupta, Lightstorm (17:43):
Thank you, Shekar, for allowing us to partner with you. We are privileged. As you know, we build the physical infrastructure across Asia-Pacific. I can see Anthony, because I spoke to him here last year. We represent the pieces you need to take AI to scale across. There is much talk about scaling up and scaling out, but scaling across essentially means the inferencing you need to do across data centres and wide area networks. Our attention is focused on things like the physical layer, the fibre itself, and we partner with companies such as Arrcus at the silicon and software layers. Shekar talked about the integration between Polarin and AINF, which we are pleased about, and the sum and substance of all this is making AI more pervasive and being able to disseminate it.

(18:47):
According to McKinsey, about 40% of data centre loads by 2030, or a little more, are going to be inferencing. That tells you that with that kind of workload and distributed computing at play, the infrastructure, the silicon, the software, and the systems all need to work in sync for AI to work. We have a contrarian view of telecom networks: most believe that today's telecom networks cannot handle AI, just as they were not able to handle cloud in the past, and that is why Lightstorm was born five years ago. We are pleased to partner with Shekar and team, as well as with Fujitsu. We met them for the first time here and are excited about the possibilities of what we can work on together. We are in the domain of DWDM and optics, and we hope to be able to work with them too.

Yusuke Kobayashi, 1FINITY (19:39):
Thank you, Shekar. 1FINITY is Fujitsu's network company, covering optical as well as mobile networks. We started our partnership with Arrcus last September, and we are pleased to provide the full stack from layer one to layer three. We are starting to expand our partnership to include digital capabilities such as AI and computing, and, as Shekar mentioned, we are beginning to collaborate on the computing side as well, which we are pleased about. Thank you, Shekar.

Shekar Ayyar, Arrcus (20:18):
Thank you. Thank you, Kobayashi-san. Thank you, Amajit, and thank you all for coming.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Press Conference

Shekar Ayyar, chairman and CEO of Arrcus, presents a recap of the company's announcements made during MWC26, including the introduction of the Arrcus Inference Network Fabric (AINF), designed for policy-rich AI networking, and collaborations with Fujitsu and its 1FINITY team, Lightstorm in Asia-Pacific, UfiSpace and Lanner, which focus on enabling AI inferencing, optimising data centre and telecom environments, and supporting new hardware platforms.

Featuring (in order of appearance):

  • Shekar Ayyar, Chairman and CEO, Arrcus
  • Sanjay Kumar, VP of Product Management and Marketing, Arrcus
  • Keyur Patel, CTO and Founder, Arrcus
  • Amajit Gupta, Group CEO & MD, Lightstorm
  • Yusuke Kobayashi, Head of Arrcus Business Division, 1FINITY

Recorded March 2026
