

Has Intel just moved the ‘edge’ out to the device?


Source: Intel

  • Intel’s new deep learning chip may hold lessons for the telecoms industry
  • Loads of local processing power in a tiny, low-power package
  • How will it, and other onboarded capabilities, affect the outlook for low latency 5G?

Nothing to do with Bono’s noisy friend. We’re talking ‘network edge’ here, and the fact that Intel’s latest chip launch, and the chips to follow, might throw some new light onto the commercial prospects for low latency 5G services.

This is important since the telecoms industry has a nasty habit of completing its technology-intensive, heavily-invested projects just as the market need it’s targeting is being met by something better. Remember Low Earth Orbit (LEO) satellites? GSM more or less put paid to them. You can build your own list at the bottom of this article.

First the chip in question and its capabilities

Intel has just introduced its “Movidius Myriad X vision processing unit”, which it claims is the world’s first system-on-chip shipping with a dedicated Neural Compute Engine for accelerating deep learning inferences at the edge. The SoC, says Intel, is “specifically designed to run deep neural networks at high speed and low power without compromising accuracy, enabling devices to see, understand and respond to their environments in real time.”

In fact the chip is the latest fruit from Intel’s Movidius division, a company it bought a year ago to propel it into deep learning, AI and the like. Intel is aiming this technology (to start with) at drones, VR/AR headsets, robotics and smart cameras, where the tiny, low-powered SoC can handle 4 trillion operations per second. That translates into seemingly magical capabilities around image recognition and learned responses. It might, for instance, be able to identify a child running out between two parked cars (should it be part of the on-board smarts of an autonomous vehicle) and instruct the car to take evasive action. Less dramatically, it could make ‘inferences’ to make up for end-to-end latency in multi-user online games; or it could provide instant haptic/tactile feedback in remote surgery based on what it’s learned from similar slicings and dicings in the past.
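As a rough illustration of why that headline figure matters, a back-of-envelope calculation shows the sort of inference rate a 4 TOPS accelerator could sustain. The per-inference cost below is an assumed, illustrative number, not an Intel specification:

```python
# Back-of-envelope: how many inference passes per second could a
# 4 TOPS (trillion operations/sec) accelerator sustain?

CHIP_TOPS = 4.0            # Intel's headline figure for the Myriad X
OPS_PER_INFERENCE = 2e9    # assumed: ~2 billion ops for a small vision model

inferences_per_second = (CHIP_TOPS * 1e12) / OPS_PER_INFERENCE
print(f"~{inferences_per_second:,.0f} inferences/sec")  # ~2,000 inferences/sec
```

Even with these toy numbers, thousands of frames per second of headroom is far more than a 30 fps camera feed needs, which is what makes real-time, on-device vision plausible.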

Move over edge

In short, the chip looks as if it could have a crack at some of the ‘edge’ applications the designers of 5G technology and services are setting out to meet with their network solutions - just think of the device itself as a ‘Fog Computing’ tendril, capable of handling a workload that might otherwise be executed in a nearby data centre.

Of course it’s not a straight comparison - I’m sure there are many capabilities and applications that a neural engine on a mobile device couldn’t tackle at all for the foreseeable future. Also critical applications - such as those driving autonomous cars - are going to want multiple backup systems involving high speed radio links working in tandem with onboard systems to double and triple-check the environment being negotiated.

So will the new capabilities represented by Intel’s Myriad X VPU chop low latency 5G off at the knees, or (more likely) just change the balance of power by enabling more deep learning to take place beyond the network boundary, thus leaving 5G CSPs with less ‘leverage’ over content and application providers than they have envisaged up to now?

Given that content providers are always reluctant to rely on CSPs, and even more reluctant to give them money, it makes sense that many might look to on-board storage and processing to work around the need to partner with them or, worse, have their users forced to pay extra.

