Monolithic Data Center Interconnect Is Not Always the Best Bet

September 18, 2018 - Denver: Moving huge amounts of data between two points is neither easy nor cheap, especially with the explosive growth of cloud networking and services. The ever-growing volume of data traffic will strain and challenge data centers even more as we move closer to the reality of 5G. To address the challenge of moving large amounts of data, web-scale companies like Google, Facebook, and Amazon have created their own ways to ease data flow and reduce costs. One trend is the disaggregation of hardware and the growth of purpose-built compact data center interconnect (DCI) equipment.

Compact DCI is typically a single-rack-unit box loaded with optical components, whose sole purpose is moving vast amounts of data quickly and efficiently. However, while compact DCIs may be a great option for big companies with large point-to-point demands, they are not always the best choice for every data center or service provider.

Why? Monolithic compact DCI boxes will get you a lot of bandwidth quickly and simply, but they have limitations. Compact DCIs are purpose-built, like a NASCAR race car. But why buy a high-performance race car designed to go as fast as possible when all you really need is a flexible, rock-solid daily driver that can get up and go on a little gas, or that can be modified to carry a bigger load by dropping the back seat or adding a roof rack?

The Case for Little White Boxes

The concept of a compact DCI makes the most sense if you have a huge amount of Ethernet-based bandwidth being transported between two points. Web-scale network operators push terabytes of data back and forth between their data centers, with ever-increasing demand for more capacity and speed. In this instance, the priority is the flexibility to put in DCI technology that will work regardless of who sells it. By merely adding another box they can add more capacity, and if a box breaks they can simply swap it out. Not many operators outside the large web-scale networks have that kind of demand, or the capability to deploy and replace terabits of transport technology regularly. But for those that do, the concept of an open, easily deployed compact DCI fits their business model.

The Telecom Infra Project (TIP), spearheaded by Facebook, was convened to develop a new approach to building and deploying telecom network infrastructure. By pushing for disaggregation and fully open interfaces, the project aims to reduce costs and complexity by developing generic boxes that, in theory, should work regardless of whose network they are placed in and which other components they interface with.

Although it is just one component of TIP’s mission, the Voyager Dense Wavelength Division Multiplexing (DWDM) optical platform is a compact DCI running components from a specific handful of vendors. In fact, the platform has recently been trialed at a variety of operators, including Vodafone, showing that, with an SDN controller, it can operate transparently in the network.

At the recent NGON & DCI Europe show in Nice, France, the topic of DCI, particularly Project Voyager, was part of many interesting and spirited conversations. The white box approach for many components of the telecommunications infrastructure is gaining momentum and helping drive more innovation within the industry. SDN is essential in ensuring these boxes can talk to each other and manage the traffic across the network, including troubleshooting and self-correcting as needed.

But It’s Not for Everyone

A compact DCI box is a great option for web-scale operators or larger Tier 1 carriers like Vodafone, AT&T, or Verizon. But what about smaller, regional telcos or National Research and Education Networks (NRENs) that need to add and build bandwidth over time?

For many carriers and data centers, today’s bandwidth needs don’t justify the monolithic terabits of point-to-point capacity the compact DCI provides. Many hope to get there one day, but in most cases it makes more sense to put in a shelf that has expansion capabilities and the flexibility to do the other things optics allow: multiplexing lower-speed signals, flexibly routing wavelengths, or easily swapping out optics.

Advances in technology have made it possible to easily add massive amounts of bandwidth to a network through a traditional card-and-chassis system. Telco chassis have moved beyond the old limits of 10G-based cards and their power dissipation. Today’s systems can push 200G and 400G in a single card, with upgradability to over a terabit per second per card. This enables smaller providers and network operators to easily and affordably manage bandwidth, all while future-proofing with expandability in mind.
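The scaling argument can be made concrete with some back-of-the-envelope arithmetic. A minimal sketch, assuming a hypothetical slot count and the per-card rates mentioned above (the numbers are illustrative, not any specific vendor's chassis):

```python
# Back-of-the-envelope chassis scaling. SLOTS and the card rates below
# are hypothetical illustrations, not any specific vendor's product.
SLOTS = 8

def chassis_capacity_gbps(cards):
    """Total capacity of a partially populated chassis.

    `cards` maps a line-card rate in Gbps to the number of such cards
    installed; empty slots contribute nothing.
    """
    assert sum(cards.values()) <= SLOTS, "more cards than slots"
    return sum(rate * count for rate, count in cards.items())

# Day one: two 200G cards. Later: the same chassis refitted with 400G cards,
# an eight-fold capacity increase without ripping out the shelf.
day_one = chassis_capacity_gbps({200: 2})        # -> 400 Gbps
fully_loaded = chassis_capacity_gbps({400: 8})   # -> 3200 Gbps (3.2 Tbps)
print(day_one, fully_loaded)
```

The point of the sketch is that the shelf itself is the fixed investment; capacity grows card by card as demand does.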

Rather than just embracing the latest technology, anyone exploring a DCI solution should start from the problem they want to solve and then find the best solution. Most of the time they’ll find that compact DCI isn’t right for them — at least not yet. In fact, there are some general misconceptions in the industry about chassis-and-card systems versus DCI technology.

  • Size: A traditional telecom chassis is not that much bigger than a DCI box. Yes, the DCI box is one rack unit tall and the telecom chassis maybe two, but in most cases the chassis isn’t as deep. The traditional chassis doesn’t always take up much more room, and it provides the flexibility to upgrade the system easily, without having to rip anything out and replace it. In networks with changing needs, the savings from doubling capacity by upgrading cards in a chassis can easily outweigh the savings from a more compact but non-upgradable platform.
  • Flexibility: While the DCI can achieve speeds of terabits per second, there isn’t much flexibility. As previously mentioned, the chassis-and-card system can easily scale from 1G up to 400G and more per wavelength. For customers whose demands vary in both rate and service type, the ability to combine all those services in a single chassis is hard to match with a compact DCI solution.
  • Cost: The chassis system can be much less expensive to purchase and get up and running than compact DCI systems in networks that are not specifically optimized for massive Ethernet-based point-to-point bandwidth demands. Be wary of focusing on the technology over the solution: the latest buzzword does not always translate to the optimal solution.

Another key consideration is that in high-density urban areas like New York, London, or Paris, rack space is not cheap or easy to come by. A pair of compact DCI boxes may meet day-one bandwidth needs, but expanding later means securing additional rack space that may not be available. A chassis with spare slots sidesteps that conundrum: the footprint is reserved from day one, and capacity grows within it as the network grows.

Generic or Branded?

With TIP and Voyager, the main concept is to move away from proprietary systems in favor of a generic box with open interfaces that will work no matter who makes it. However, while early trials have been generally favorable, there is a long way to go as it takes a great deal of time, energy, and effort to get the generic white boxes to work as promised.

While a few vendors are working with TIP, most others are opting to build their own branded boxes as a way of maximizing performance while leaving open the option of being used in white box type applications. By designing these branded boxes with open interfaces and working within industry multi-source agreements (MSAs) for operation, interoperability is assured for “white box” type applications while still offering enhanced performance modes and advanced software for those who require more capabilities.

Open DCI boxes require an open management interface. With generic interfaces it can be relatively easy to perform basic functions, but vendor innovation means new functions are continuously being developed faster than the generic interfaces can keep up. Also, the “generic” interface models (e.g. YANG models) can still vary from operator to operator. Current initiatives focused on making generic DCI boxes work together are, therefore, limited to basic performance levels. Rather than being fully optimized, most of these systems are operated at the lowest common performance level to ensure interoperability. For example, interoperable 100Gbps systems are generally only guaranteed to work over metro distances (80km), while branded solutions can operate over many hundreds of kilometers.
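The point about "generic" models varying from operator to operator can be illustrated with a toy example. In the sketch below, two hypothetical operators expose the same line-rate setting at different paths and in different encodings in their YANG-derived configuration trees, so a controller needs per-schema normalization code; the paths and field names are invented for illustration and are not taken from any real model:

```python
# Toy illustration of divergent "generic" management models: the same
# line-rate knob, exposed two different ways. Both schemas are
# hypothetical, invented purely for this example.
operator_a = {"terminal-device": {"line-port": {"rate-gbps": 100}}}
operator_b = {"transport": {"och": {"bit-rate": "100G"}}}

def line_rate_gbps(config):
    """Normalize the line rate from either hypothetical schema."""
    if "terminal-device" in config:
        # Operator A models the rate as an integer in Gbps.
        return config["terminal-device"]["line-port"]["rate-gbps"]
    if "transport" in config:
        # Operator B models it as a string like "100G".
        return int(config["transport"]["och"]["bit-rate"].rstrip("G"))
    raise ValueError("unrecognized schema")

print(line_rate_gbps(operator_a), line_rate_gbps(operator_b))
```

Every such divergence means another adapter in the controller, which is why interoperability efforts tend to settle on the lowest common denominator of functionality.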

While a branded box may limit the interoperability factor, many vendors have developed integrated hardware and software solutions that can give smaller operators and networks the ability to easily install and manage capacity without having to spend extra time on managing and optimizing performance. Integrated test capabilities, integrated routing and recovery mechanisms, and advanced intelligent alarm analyses are a few of the innovations that are available from branded solutions to smaller operators, data centers, or NRENs who do not have their own in-house software development teams. Additionally, branded solutions typically come with a support system that allows network operators to make one call when network maintenance is required. White box solutions push those issues back onto the network operator.

The Role of SDN

Regardless of whether a network is built with generic white boxes or branded boxes, SDN is going to play a critical role. SDN is a must-have to ensure that performance issues and variances are addressed in real time. In fact, SDN is also an important development with both branded DCI boxes and chassis systems where more and more elements are being exposed for automation and control. Open interfaces are critical to SDN operations, regardless of the hardware, and even branded solutions must conform to the new open standards.

In massive data center networks, web-scale organizations do not require a full-fledged management system from vendors and prefer to get inside the box and directly control the components via their own in-house software. Compact DCI boxes are ideal for this case, and many operators wrongly believe that to see the benefits of SDN you need the compact DCI boxes. This isn’t entirely true. In fact, most chassis-based systems already have the open interfaces required to achieve the benefits of SDN control — with additional flexibility for applications beyond simple point-to-point Ethernet transport.


The web-scale network operators are making some strong moves in the optical industry due to their unique and massive demands. The solutions being developed for their applications are influencing the whole optical industry, from the development of open SDN interfaces to the push for more compact, lower-cost solutions. Yet, while many in the industry are justifiably excited about the promise and possibilities, the monolithic compact DCI format isn’t for everyone. Other advances in optical and packet networking mean there are affordable, expandable platform options that can better meet the needs of data centers and telecommunications providers. Before you purchase your high-performance, NASCAR-like DCI box, you might want to rethink it and aim closer to that rock-solid, flexible daily driver.

This content extract was originally sourced from an external website (ECI Resources) and is the copyright of the external website owner. TelecomTV is not responsible for the content of external websites.
