AT&T submits its White Box Router design to the Open Compute Project

via Flickr © brand0con (CC BY-SA 2.0)

  • AT&T claims the White Box approach offers a flexible, open design so service providers can meet next-generation demands
  • It’s upgrading its core network links to 400 Gbit/s, which it says is a key capability for the 5G era
  • Traditional high capacity routers use a modular chassis design, which adds cost and demands high-precision manufacturing
  • The White Box equivalent gives each box its own power supply, fans and controllers, and replaces the backplane connectivity with external cabling

When telco NFV was first seriously mooted, back around 2013/2014, I remember asking a vendor representative whether big routers would be virtualised or ‘white boxed’. “No,” he replied, “that’s just not appropriate,” by which I took him to mean that, on a cost-benefit basis, virtualising a big router was pretty pointless, since existing core router technology could do that specialised job at lower cost and with greater reliability. It seemed pretty logical.

One day though...

If that was ever true - and I’m not sure that it was - it’s not now. AT&T, always one of the most enthusiastic and ambitious adopters of NFV, has just submitted its specs for a Distributed Disaggregated Chassis (DDC) white box architecture to the Open Compute Project (OCP). 

The DDC design is built around Broadcom’s Jericho2 family of merchant silicon chips, intended specifically to serve as configurable building blocks for constructing service provider-class routers, ranging from single line card systems (often called ‘pizza boxes,’ says AT&T) to large, disaggregated chassis clusters.

AT&T says it plans to apply the Jericho2 DDC design to the provider edge (PE) and core routers that comprise its global IP Common Backbone (CBB), the core network that carries all of its IP traffic.

Additionally, the Jericho2 chips have been optimized for 400 gigabits per second interfaces – a key capability as AT&T updates its network to support 400G in the 5G era.

“The release of our DDC specifications to the OCP takes our white box strategy to the next level,” said Chris Rice, SVP of Network Infrastructure and Cloud at AT&T. “We’re entering an era where 100G simply can’t handle all of the new demands on our network. Designing a class of routers that can operate at 400G is critical to supporting the massive bandwidth demands that will come with 5G and fiber-based broadband services. We’re confident these specifications will set an industry standard for DDC white box architecture that other service providers will adopt and embrace.”

AT&T’s DDC white box design calls for three key building blocks (sketched in code after the list):

  • A line card system that supports 40 x 100G client ports, plus 13 400G fabric-facing ports.

  • A line card system that supports 10 x 400G client ports, plus 13 400G fabric-facing ports.

  • A fabric system that supports 48 x 400G ports. A smaller 24 x 400G fabric system is also included.
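To make the specs a little more concrete, here is a rough sketch (my own illustration, not part of AT&T’s submission) of the three building blocks as simple Python data structures, capturing just the port counts and the client capacity each line card system contributes:

```python
from dataclasses import dataclass

# Hypothetical model of the DDC building blocks described above;
# the class and variable names are illustrative, not from AT&T's spec.
@dataclass(frozen=True)
class LineCardSystem:
    name: str
    client_ports: int        # number of client-facing ports
    client_speed_gbps: int   # speed of each client port, in Gbps
    fabric_ports: int = 13   # 400G fabric-facing ports (13 in both variants)

    @property
    def client_capacity_tbps(self) -> float:
        return self.client_ports * self.client_speed_gbps / 1000

@dataclass(frozen=True)
class FabricSystem:
    name: str
    fabric_ports: int        # number of 400G ports facing the line card systems

LC_40X100G = LineCardSystem("40 x 100G line card system", 40, 100)
LC_10X400G = LineCardSystem("10 x 400G line card system", 10, 400)
FABRIC_48 = FabricSystem("48 x 400G fabric system", 48)
FABRIC_24 = FabricSystem("24 x 400G fabric system", 24)

print(LC_40X100G.client_capacity_tbps, LC_10X400G.client_capacity_tbps)  # 4.0 4.0
```

Both line card variants work out to the same 4 Tbps of client capacity, which is why the cluster capacities quoted further down all scale in multiples of 4 Tbps.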

Traditional high capacity routers use a modular chassis design. In that design, the service provider purchases the empty chassis itself and plugs in vendor-specific common equipment cards that include power supplies, fans, fabric cards, and controllers. In order to grow the capacity of the router, the service provider can add line cards that provide the client interfaces. Those line cards mate to the fabric cards through an electrical backplane, and the fabric provides the connectivity between the ingress and egress line cards.

The same logical components exist in the DDC design. But now, the line cards and fabric cards are implemented as stand-alone white boxes, each with their own power supplies, fans and controllers, and the backplane connectivity is replaced with external cabling. 

This approach enables massive horizontal scale-out as the system capacity is no longer limited by the physical dimensions of the chassis or the electrical conductance of the backplane. Cooling is significantly simplified as the components can be physically distributed if required. The strict manufacturing tolerances needed to build the modular chassis and the possibility of bent pins on the backplane are completely avoided.
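Those external cables aren’t arbitrary, either: the port counts of the building blocks fix how many a fully cabled cluster needs. A quick back-of-the-envelope check (my own arithmetic, inferred from the port counts above rather than taken from the spec) shows the two sides match exactly in the largest configuration:

```python
# Quick check (my own arithmetic, not from the spec) that the port counts
# of the building blocks line up in the largest cluster: 48 line card
# systems each expose 13 fabric-facing 400G ports, and 13 fabric systems
# each expose 48 x 400G ports, so both ends of the external cabling
# present exactly the same number of ports.
LINE_CARD_FABRIC_PORTS = 13   # 400G fabric-facing ports per line card system
FABRIC_SYSTEM_PORTS = 48      # 400G ports per (large) fabric system

line_card_systems = 48        # large cluster
fabric_systems = 13

line_card_side = line_card_systems * LINE_CARD_FABRIC_PORTS   # 624 ports
fabric_side = fabric_systems * FABRIC_SYSTEM_PORTS            # 624 ports
assert line_card_side == fabric_side
print(f"{line_card_side} external 400G links in a fully cabled large cluster")
```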

Four typical DDC configurations might include the following (the capacity arithmetic is sketched after the list):

  • A single line card system that supports 4 terabits per second (Tbps) of capacity.

  • A small cluster that consists of 1+1 fabric systems (for added reliability) and up to 4 line card systems. This configuration would support 16 Tbps of capacity.

  • A medium cluster that consists of 7 fabric systems and up to 24 line card systems. This configuration supports 96 Tbps of capacity.

  • A large cluster that consists of 13 fabric systems and up to 48 line card systems. This configuration supports 192 Tbps of capacity.
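As flagged above, those capacity figures follow directly from the building blocks: every line card system contributes 4 Tbps of client capacity (whether 40 x 100G or 10 x 400G), so a cluster’s total is simply 4 Tbps times its line card system count. A minimal sketch of that arithmetic, with configuration labels of my own:

```python
# Illustrative check of the cluster capacities quoted above.
# Each line card system offers 4 Tbps of client capacity
# (40 x 100G or 10 x 400G); cluster capacity scales with the
# number of line card systems.
LINE_CARD_TBPS = 4

# (fabric systems, line card systems) per configuration; the standalone
# "pizza box" case is assumed to need no fabric system at all.
configurations = {
    "single line card system": (0, 1),
    "small cluster":           (2, 4),    # 1+1 fabric systems for redundancy
    "medium cluster":          (7, 24),
    "large cluster":           (13, 48),
}

for name, (fabrics, line_cards) in configurations.items():
    capacity = line_cards * LINE_CARD_TBPS
    print(f"{name}: {fabrics} fabric + {line_cards} line card systems "
          f"-> {capacity} Tbps")
# single line card system: 0 fabric + 1 line card systems -> 4 Tbps
# small cluster: 2 fabric + 4 line card systems -> 16 Tbps
# medium cluster: 7 fabric + 24 line card systems -> 96 Tbps
# large cluster: 13 fabric + 48 line card systems -> 192 Tbps
```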

The links between the line card systems and the fabric systems operate at 400G and use a cell-based protocol that distributes packets across many links. The design inherently supports redundancy in the event of fabric link failures.
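AT&T doesn’t spell out the cell protocol itself (that is Broadcom’s fabric technology), but the general idea is easy to picture: each packet is chopped into cells that are sprayed across all healthy fabric links, so a failed link simply thins the fabric rather than breaking a flow. A toy sketch of that idea, purely my own illustration and not the real implementation:

```python
# Toy illustration of cell spraying across fabric links. This is NOT
# the Jericho2/DDC protocol, just the general load-balancing idea:
# packets are split into fixed-size cells and distributed round-robin
# over whichever fabric links are currently up.
from itertools import cycle

CELL_SIZE = 256  # bytes; arbitrary value chosen for illustration

def spray(packet: bytes, fabric_links: list[str]) -> list[tuple[str, bytes]]:
    """Split a packet into cells and assign them to healthy fabric links."""
    if not fabric_links:
        raise RuntimeError("no healthy fabric links available")
    cells = [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
    link_iter = cycle(fabric_links)          # round-robin over healthy links
    return [(next(link_iter), cell) for cell in cells]

# A link failure simply removes that link from the healthy set;
# the remaining links absorb the cells (reduced capacity, no outage).
healthy_links = ["fabric-1", "fabric-2", "fabric-3"]
healthy_links.remove("fabric-2")             # simulate a failed fabric link
assignments = spray(b"x" * 1000, healthy_links)
print(assignments[:4])
```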
