Opening the Rack: Huawei pledges support at OCP Global Summit

By Ian Scales

Mar 18, 2019

Source: Huawei


  • 5G’s arrival is catching attention in the data centre
  • Open Rack, and Huawei’s endorsement, is one important initiative
  • Another is the OCP Accelerator Module specification, designed to speed the adoption of AI

The Open Compute Project (OCP) Global Summit was held late last week at the San Jose Convention Center in California. OCP works closely with the Telecom Infra Project (TIP), and the OCP Global Summit comes at a time of growing realization in the data centre world that 5G is going to make a material difference to the business. (see - End-to-end network demo of the interoperability of TIP open technologies).

First, where the 5G hype is fully taken on board, data centre operators are being briefed to expect a data uplift as mobile broadband continues to grow, with higher speeds accommodating data-hungry applications and content types. Then there’s an expected IoT explosion which, it’s believed, will act as a force multiplier.

There is usually a sensible caveat, though: while the 5G wave might start to arrive towards the end of this year, it won’t properly ramp up until at least the mid-2020s.

Secondly, the OCP Telco Project (and other similar initiatives and trends) will be instrumental in migrating much of the core telco infrastructure into the data centre environment - good news for the data centre and cloud ecosystems. The telecoms industry’s adoption of ‘cloud native’ virtualization is already pushing this process along although, again, the sensible caveat is that this may take some time and will be uneven on a global basis.

Amongst announcements at the OCP Global Summit was a pledge by Huawei to adopt Open Rack (an important project within OCP).

As with all OCP projects, the overall goal is to identify technology areas where open standards can help the industry achieve ever-greater scale, and therefore greater cost efficiency, in the core task of processing data. As Moore’s Law appears to be flattening (although this remains a point of contention), attention will inevitably turn to other areas to keep pushing the cost of cloud processing down, thus enabling forward momentum on better, faster, cheaper online services.

Open Rack, as its name implies, is a quest for yet more scale economies by formulating a ‘rack standard’ for data centres, so far a relatively overlooked area for collaboration. The objective is to integrate the rack into the data centre infrastructure via a “holistic design process that considers the interdependence of everything from the power grid to the gates in the chips on each motherboard,” says Huawei. The upshot will be (hopefully and eventually) a multi-vendor environment where rack infrastructure can be mixed and matched from different suppliers at ever lower price-points and, in particular, with lower energy requirements.

Open Rack has so far been adopted by some of the world’s largest hyperscale internet service providers, such as Facebook, Google and Microsoft, it’s pointed out.

Huawei claims its adoption of the standard is designed to enhance the environmental sustainability of its new public cloud data centers by using less energy for servers, while driving operational efficiency by reducing the time it takes to install and maintain racks.

“Huawei’s decision is a great endorsement,” stated Bill Carter, Chief Technology Officer for the Open Compute Project Foundation. “Providing cloud services to a global customer base creates certain challenges. The flexibility of the Open Rack specification and the ability to adapt for liquid cooling, allows Huawei to service new geographies.”


More A.I.

The OCP 2019 Global Summit also saw Baidu, Facebook and Microsoft (supported by other companies) define the OCP Accelerator Module (OAM) specification, intended to drive the adoption of artificial intelligence (AI) accelerators and so benefit the development of AI.

The OCP claims the specification is expected to shorten the development of AI accelerators and speed up large-scale adoption. A lack of interoperability among AI accelerators has slowed development and increased time to adoption, it’s claimed, creating demand for an open AI accelerator architecture such as OAM.

Baidu has already been steadily building an AI ecosystem and has successfully launched three generations of its own infrastructure foundation, X-MAN, which has accelerated Baidu’s AI strategy through “pioneering concepts such as hardware disaggregation, resource pooling, liquid cooling, hardware modularization and flexible topologies with unified architecture,” it claims.
