- Aria Networks was formed 15 months ago with the aim of developing a networking system specifically for AI workloads
- It has already commercially launched its AI-native system, dubbed the Deep Networking platform
- It has also raised $125m to fund the startup’s next stage
Palo Alto, California-based Aria Networks has launched an AI-native datacentre networking system, dubbed the Deep Networking platform, which it claims can support AI training and inference in a different and more efficient way than anything currently deployed in datacentres. That pitch has attracted $125m in Series A funding from a group of investors.
The company, formed only 15 months ago by executives who honed their skills at the likes of Arista, Cisco, Google, Juniper Networks, Pure Storage and more, has developed a system with telemetry software at its heart. That software constantly collects and analyses system data in a way that enables AI infrastructure to produce tokens (the units of data processed, analysed and produced by large language models) more efficiently, in terms of both time and cost. The Aria Networks team believes this approach is a gamechanger for AI factories because it makes the network a “multiplier” rather than a “limiter”.
Aria Networks says the Deep Networking platform has been “built from the ground up to maximise token efficiency,” which is “the defining metric of the AI factory era and the single best proxy for whether an AI cluster is delivering on its investment. Token efficiency directly relates to model flop utilisation (MFU) and cost per token – improvements in either translate directly into improvements in revenue. And, as tokens become the currency of intelligence, we empower operators to become the lowest-cost producers in the market – turning infrastructure efficiency into a competitive advantage.”
Mansour Karam, founder and CEO at Aria Networks, hammered home the message with a bold claim, stating: “The network has become a key obstacle in AI infrastructure. Deep Networking changes that – and the economics prove it: A 10% gain in tokens per second is a 10% gain in revenue.”
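Karam’s arithmetic can be sketched with a toy calculation: if revenue scales linearly with token throughput at a fixed price per token, a 10% gain in tokens per second is indeed a 10% gain in revenue. The figures below are illustrative assumptions, not Aria Networks’ numbers.

```python
# Illustrative token-economics sketch -- all figures are hypothetical,
# not Aria Networks' data. Assumes revenue scales linearly with token
# throughput at a fixed price per token.

def daily_revenue(tokens_per_second: float, price_per_million_tokens: float) -> float:
    """Revenue per day for a cluster producing tokens at a constant rate."""
    tokens_per_day = tokens_per_second * 86_400  # seconds in a day
    return tokens_per_day / 1_000_000 * price_per_million_tokens

# Baseline cluster vs the same cluster with 10% more tokens per second
baseline = daily_revenue(tokens_per_second=500_000, price_per_million_tokens=2.0)
improved = daily_revenue(tokens_per_second=550_000, price_per_million_tokens=2.0)

print(f"baseline: ${baseline:,.0f}/day, improved: ${improved:,.0f}/day")
print(f"revenue gain: {(improved / baseline - 1) * 100:.0f}%")  # prints 10%
```

The same linearity works in reverse for cost per token: at a fixed cluster cost, 10% more tokens per second means roughly 10% lower cost per token, which is the “lowest-cost producer” leverage Aria describes.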
And as more and more telcos consider the prospect of developing their own AI factories, these are the types of datacentre tech developments that are increasingly looming into strategic view for telecom players.
Aria Networks stated: “AI factories seek solutions that will enable them to produce tokens more efficiently, at the lowest cost – so that they can enable the fastest production, as well as cheapest consumption of intelligence. Aria was built to unlock this leverage. Deep Networking is our answer, a fundamentally different approach that turns the network from a constraint into a competitive advantage… when [the network] underperforms, it drags down everything else; when it’s optimised, it lifts the entire stack.”
And the focus on telemetry is clearly key. “Legacy networking solutions treat telemetry as an afterthought and rely on static configurations that were designed for a different era… Deep Networking changes that,” added Aria.
The telemetry system is just one part of the Deep Networking platform, though, and the Aria team’s message is that for the AI processing efficiencies to be realised, a core set of capabilities (hardware, AI agents and more) must be in place.
The company says Deep Networking comprises five pillars, “all of which must be present to deliver the desired outcome” and which get “smarter” each time they process a workload. Those pillars are:
- AI-optimised hardware and “hardened” SONiC: Aria’s switch platform, built on Broadcom’s Tomahawk 5 and Tomahawk 6 chips, runs AI-native SONiC (software for open networking in the cloud) as its network operating system and delivers 800 Gbit/s and 1.6 Tbit/s Ethernet switching, with both scale-out and scale-up switch configurations in liquid-cooled and air-cooled form factors.
- Fine-grained, end-to-end telemetry: Aria claims its telemetry system, which collects data from switches and transceivers and presents it in a single unified view, boasts 100 to 10,000 times finer resolution than traditional tools.
- Intelligent agents at every layer: Specialised AI agents “evaluate signals, extract insights and take action at the appropriate resolution – from the switching ASIC all the way up to cloud orchestration”.
- Networking expertise built in: “Every agent and every decision is grounded in Deep Networking domain knowledge – the system doesn’t just see data, it understands what it means.” So context is king here, which is something we also heard during MWC26.
- Continuous updates: New capabilities are developed seamlessly and continuously, keeping the network at the forefront of performance for every new workload.
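To make the telemetry-and-agents pillars concrete, here is a minimal sketch of the kind of loop they describe: fine-grained samples feed an agent that spots congestion and proposes an action. All names, thresholds and signals below are hypothetical illustrations, not Aria’s actual APIs or data model.

```python
# Toy sketch of a telemetry-to-agent loop. Every identifier and
# threshold here is a hypothetical illustration, not Aria's product.

from dataclasses import dataclass

@dataclass
class TelemetrySample:
    switch_id: str
    port: int
    queue_depth_bytes: int  # sampled at fine (e.g. sub-millisecond) resolution
    timestamp_us: int

def congestion_agent(samples: list[TelemetrySample],
                     threshold_bytes: int = 1_000_000) -> list[str]:
    """Toy 'intelligent agent': inspects fine-grained samples and emits
    an action for each port whose queue exceeds a congestion threshold."""
    actions = []
    for s in samples:
        if s.queue_depth_bytes > threshold_bytes:
            actions.append(f"reroute traffic away from {s.switch_id}:{s.port}")
    return actions

samples = [
    TelemetrySample("tor-01", 7, 1_500_000, 1),  # congested port
    TelemetrySample("tor-01", 8, 20_000, 2),     # healthy port
]
print(congestion_agent(samples))  # flags only tor-01 port 7
```

The point of the sketch is the resolution argument: an agent polling once a minute would average away the bursty queue spikes that AI traffic produces, whereas fine-grained sampling lets it act on them.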
It’s also worth noting that this platform is designed to work with any AI processor platform – it’s AI-chip neutral.
There’s a great deal more detail and insight about the platform in Aria’s announcement, but here’s its big-picture takeaway.
“The result is a set of outcomes that [AI infrastructure] operators experience from day one: Intent-based configuration, seamless real-time performance optimisation, and an agentic partnership where operators have fine-grained telemetry at their fingertips and can collaborate with Aria’s agents in natural language to resolve issues and optimise performance. This is a fundamentally different way to operate networks. It transforms your network from a limiter into a multiplier – helping you maximise the utilisation of your AI cluster across training, inference and any accelerator architecture: GPUs, TPUs or custom silicon.”
Aria Networks now has a commercial product, investors and, according to this blog, customers (though none have been identified). The investors that stumped up the $125m are Sutter Hill Ventures, Atreides Management, Valor Equity Partners and Eclipse Ventures.
The Aria Networks approach has also attracted support from some of the biggest names in the semiconductor and AI infrastructure sectors.
Hasan Siraj, vice president of product marketing for the Core Switching Group at Broadcom, stated: “Proprietary fabrics are a thing of the past. With its 1.6 Tbit/s launch, combined with a telemetry-centric software architecture, Aria Networks is proving that the highest-performance AI networks on the planet are being built on a foundation of open, scalable Ethernet, such as Broadcom’s Tomahawk 6 switch series.”
Prakash Sripathy, vice president at Supermicro, stated: “In my experience architecting high-performance fabrics for AI clusters, the biggest bottleneck is the ‘blind spots’ in the network. I’m personally impressed with how Aria Networks is moving beyond simple detection into true predictive orchestration. By leveraging microsecond-level telemetry, they don’t just alert you to congestion; they deliver the intelligence to anticipate and prevent it in an inherently bursty traffic environment. It’s a powerful shift to an active pilot ensuring maximum efficiency across the entire AI fabric.”
Shane Corban, senior director of product management for the Networking Technology and Solutions Group at AMD, commented: “As AI infrastructure evolves, efficiency and utilisation are becoming as critical as scale, placing new demands on the network for visibility, predictability and control. To meet these demands, AMD is committed to enabling customer choice through an open ecosystem. The AMD Pensando Pollara 400 AI NIC, deployed with Aria Networks, helps customers achieve improved performance, deeper insight and enhanced control over AI network infrastructure.”
- Ray Le Maistre, Editorial Director, TelecomTV