DriveNets Network Cloud-AI is available for order with Broadcom Jericho 3-AI Based White Boxes

Ra’anana, Israel – DriveNets – a leader in innovative networking solutions – today announced that its Network Cloud-AI software, supporting white boxes based on Broadcom’s Jericho 3-AI and Ramon 3 chipsets, is available for order. This is the second AI networking solution from DriveNets, following the Network Cloud-AI release introduced in May, which supported white boxes based on Broadcom’s Jericho 2C+. Built on DriveNets’ Network Cloud – which is deployed in the world’s largest networks – DriveNets Network Cloud-AI supports the highest-performance, fully scheduled networking system based on open standard Ethernet. It thereby maximizes the utilization of AI infrastructure and improves the performance of large-scale AI workloads regardless of the GPU or AI accelerator used, enabling hyperscalers and other AI infrastructure builders to create an open, multi-vendor AI infrastructure. DriveNets Network Cloud-AI was validated by leading hyperscalers in early trials as the top-performing Ethernet solution for AI networking. 

Early availability of Jericho 3-AI based high-scale AI networking  

DriveNets Network Cloud-AI is based on OCP’s Distributed Disaggregated Chassis (DDC) architecture, providing predictable, lossless Ethernet connectivity that minimizes GPU idle cycles and maximizes the utilization of the AI infrastructure. The availability of the Network Cloud-AI software on Jericho 3-AI based white boxes gives builders of AI infrastructure the highest-performance, fully scheduled networking system available in the market today, based on open standard Ethernet.  

“We are thrilled to see DriveNets maximizing the value of the DDC architecture in AI networking,” said Ram Velaga, senior vice president and general manager, Core Switching Group, Broadcom. “We designed Jericho 3-AI to perform as the best networking ASIC for AI in the market. DriveNets Network Cloud allows it to scale out to its maximum cluster capacity of 32K 800G ports, delivering ultimate AI performance at scale.” 

“As a premier network leader in the design of open hardware platforms for data center, enterprise and carrier networks, Accton is very excited by the introduction of DDC into AI networking,” said Michael K T Lee, Senior Vice President of Accton’s Research and Development Center. “Teaming up with innovative companies such as DriveNets and Broadcom accelerates the growth of the AI ecosystem and enables the best-performing, open-standard AI infrastructure.” 

Highest performance AI networking validated by an independent lab 

Independent testing by Scala Computing, a leading scalable data center simulation lab, validates the early hyperscaler trial findings that DriveNets Network Cloud-AI delivers the highest-performance Ethernet fabric for large-scale AI compute clusters.  

Scala’s test results showed that Network Cloud-AI improves Job Completion Time (JCT) by more than 10% for workloads running on a large-scale AI training cluster with 2,000 GPUs, optimizing GPU resource utilization and reducing idle time. 
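To put the reported JCT improvement in perspective, a back-of-the-envelope calculation shows how it compounds at cluster scale. The function, the 100-hour baseline job length, and the exact 10% figure below are illustrative assumptions, not numbers from the test report:

```python
# Illustrative only: model how a fractional JCT reduction frees up
# GPU-hours on a fixed-size cluster. Baseline job length is hypothetical.
def gpu_hours_saved(num_gpus: int, baseline_jct_hours: float,
                    jct_improvement: float) -> float:
    """GPU-hours freed when JCT drops by `jct_improvement` (a fraction)."""
    improved_jct = baseline_jct_hours * (1.0 - jct_improvement)
    return num_gpus * (baseline_jct_hours - improved_jct)

# 2,000 GPUs, a hypothetical 100-hour training job, 10% JCT reduction
saved = gpu_hours_saved(num_gpus=2000, baseline_jct_hours=100.0,
                        jct_improvement=0.10)
print(f"GPU-hours freed per job: {saved:,.0f}")  # 20,000
```

On a cluster of this size, even a single-digit JCT gain translates into tens of thousands of GPU-hours per training run.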

A GPU noisy neighbor scenario occurs when the performance of one or more GPUs is negatively affected by the activity of other GPUs on the same node. This can happen for a variety of reasons, such as resource contention on the network or performance degradation on the NIC. Test results showed that Network Cloud-AI is optimized for these scenarios, ensuring no performance impact on other AI jobs running in parallel. This is not the case with Ethernet CLOS solutions, where all AI jobs running on the same node are affected. 
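The contention effect described above can be sketched with a toy bandwidth model. Everything here is a simplifying assumption for illustration (a single shared link, demand-proportional sharing in the unscheduled case); it is not how either fabric is actually implemented:

```python
# Toy model of the noisy-neighbor effect: in an unscheduled fabric a
# bursty neighbor eats into a job's link share, while a fully scheduled
# fabric enforces each job's allocation. All figures are hypothetical.
def neighbor_throughput(link_gbps: float, neighbor_demand_gbps: float,
                        scheduled: bool, my_share_gbps: float) -> float:
    """Throughput one job sees on a link, with and without scheduling."""
    if scheduled:
        # Scheduler enforces the per-job allocation regardless of neighbors.
        return my_share_gbps
    # Unscheduled: assume capacity splits in proportion to offered demand.
    total_demand = my_share_gbps + neighbor_demand_gbps
    return link_gbps * (my_share_gbps / total_demand)

# A job needing 400G on an 800G link while a neighbor bursts to 800G:
print(f"{neighbor_throughput(800, 800, scheduled=False, my_share_gbps=400):.1f}")  # 266.7
print(f"{neighbor_throughput(800, 800, scheduled=True,  my_share_gbps=400):.1f}")  # 400.0
```

In the unscheduled case the job loses a third of its bandwidth to the neighbor's burst; with scheduling it keeps its full allocation.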

The Network Cloud-AI solution introduced today delivers the highest JCT performance for 800G GPUs/AI accelerators over Ethernet, enabling faster training and deployment of large-scale AI models. It was designed to support greater scale than tested to date – up to 32,000 GPUs/AI accelerators with 800G ports. 

“Scala Computing’s test results demonstrate that DriveNets Network Cloud-AI is the highest-performance Ethernet alternative for AI networking for hyperscalers and large organizations that are building large-scale AI clusters,” said Ido Susan, DriveNets co-founder and CEO. “DriveNets’ AI solution eliminates AI vendor lock-in, enabling organizations building AI infrastructures to select any mix of AI chipset, network interface cards (NICs) or server OEM to meet their specific needs.”  

“The pace and magnitude of AI infrastructure buildouts continues to grow, while standard Ethernet solutions struggle to deliver the performance and reliability needed to maximize the utilization of AI resources,” said Rob Zecha, Scala Computing COO. “Plans for AI cluster sizes expand faster than the ability of the AI infrastructure ecosystem to support them. AI networking is a critical element that has a huge impact on the size of the AI cluster and the quality of its performance. The AI networking performance at scale demonstrated by DriveNets is exactly what our industry needs.” 

DriveNets Contribution to OCP DDC Standard Accepted 

As part of the effort to maintain DDC’s position as the most innovative, proven networking technology, version 3 of the OCP DDC specification – the newest version of the standard, contributed by DriveNets alongside AT&T, HPE, Intel and UfiSpace – has been accepted. The contribution is based on the DriveNets Network Cloud architecture and addresses Ethernet networking for high-performance AI workloads in very large-scale clusters.  

The Open Compute Project’s DDC specification continues to evolve and address new market needs with new solutions developed by DriveNets, other software and hardware vendors, and operators. It provides network operators and AI infrastructure builders with tools to make their networking operations more flexible and cost effective.  

Version 1 of the DDC spec was contributed by AT&T and focused on the distributed disaggregated model. Version 2 focused on software and hardware interoperability. The latest addition, Version 3, adds support for new devices based on Broadcom’s Jericho 3-AI and Ramon 3 ASICs and advances the OCP DDC into mobile networks. It also simplifies network management and addresses the higher scale suited for AI networking. 

“The DDC architecture enables service providers and hyperscalers to maximize the scale and performance of their networks using standard networking white boxes,” said Vincent Ho, UfiSpace CEO. “Without the interdependencies of a complicated chassis-based architecture, we can accelerate our development cycles and introduce new hardware in record time, allowing our customers to accelerate the deployment of their infrastructures while optimizing their operations and cost.”

This content extract was originally sourced from an external website (DriveNets) and is the copyright of the external website owner. TelecomTV is not responsible for the content of external websites.
