Intel has made edge Deep Learning as simple as plug-and-play with its “AI on a Stick”

By Guy Daniels

Jul 21, 2017

© Intel

  • Movidius Neural Compute Stick now available for developers
  • USB-based deep learning inference kit and AI accelerator
  • Offers more than 100 gigaflops of performance in a 1W power envelope
  • Able to run real-time deep neural networks directly from the device

Intel has launched its Movidius Neural Compute Stick, which it says is the world’s first USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator. The thumb drive-sized device delivers dedicated deep neural network processing capabilities to a wide range of host devices at the network edge and, as Intel puts it, democratizes deep learning application development.

Designed for product developers, researchers and makers, the Movidius Stick aims to lower the barriers to developing, tuning and deploying AI applications by delivering dedicated, high-performance deep neural network processing in a small form factor. It is now available for purchase through select distributors for just $79, and for those lucky enough to be attending the Computer Vision and Pattern Recognition (CVPR) conference in Honolulu tomorrow, it will be on full display.

“The Myriad 2 VPU (vision processing unit) housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks directly from the device,” said Remi El-Ouazzane, VP and GM of Movidius, a Dublin-based specialist chip company that Intel acquired last September. “This enables a wide range of AI applications to be deployed offline.”

According to Intel, machine intelligence development is fundamentally composed of two stages: training an algorithm on large sets of sample data via modern machine learning techniques; and running the algorithm in an end-application that needs to interpret real-world data. This second stage is referred to as “inference” and performing inference at the edge – or natively inside the device – brings several benefits in terms of latency, power consumption and privacy.
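The two-stage split Intel describes can be made concrete with a toy sketch (illustrative only, and nothing to do with the Movidius toolchain): a tiny model is trained once on sample data, and the frozen weights are then all an edge device needs to perform inference, with no further access to the training set.

```python
import numpy as np

# Stage 1 (training, done once, typically in the data centre):
# fit a tiny logistic-regression model with plain gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy ground truth

w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Stage 2 (inference, run repeatedly at the edge): the frozen
# weights w, b are the whole deployable artefact.
def infer(sample, w=w, b=b):
    """Interpret one real-world sample using only the trained weights."""
    return 1.0 / (1.0 + np.exp(-(sample @ w + b))) > 0.5

print(infer(np.array([2.0, 2.0])))    # sample from the positive region
print(infer(np.array([-2.0, -2.0])))  # sample from the negative region
```

Because stage 2 touches neither the training data nor a network connection, it is exactly the workload that benefits from running natively on the device, as the article notes, in latency, power and privacy.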

For example, a trained convolutional neural network (CNN) can be compiled automatically into an embedded neural network optimised to run on the onboard VPU. Applications can be fine-tuned with layer-based performance metrics for industry-standard and custom neural networks for optimal real-world performance at ultra-low power. In addition, the Movidius Stick can behave as a discrete neural network accelerator, adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency.
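To give a feel for what "layer-based performance metrics" involve, here is a hypothetical back-of-the-envelope profile (the layer shapes and helper function are illustrative, not a real Movidius profile): count the multiply-accumulates in each convolution layer of a small CNN and estimate per-frame time against the article's 100-gigaflops figure.

```python
def conv_macs(out_h, out_w, out_c, k, in_c):
    """MACs for one conv layer: a k x k x in_c dot product per output element."""
    return out_h * out_w * out_c * k * k * in_c

# (name, out_h, out_w, out_c, kernel, in_c) -- made-up example layers
layers = [
    ("conv1", 112, 112, 32, 3, 3),
    ("conv2", 56, 56, 64, 3, 32),
    ("conv3", 28, 28, 128, 3, 64),
]

budget_flops = 100e9          # "more than 100 gigaflops", per the article
total_flops = 0
for name, h, w, c, k, ic in layers:
    flops = 2 * conv_macs(h, w, c, k, ic)   # 1 MAC = 2 FLOPs (multiply + add)
    total_flops += flops
    print(f"{name}: {flops / 1e6:.1f} MFLOPs")

print(f"total: {total_flops / 1e9:.2f} GFLOPs, "
      f"~{1e3 * total_flops / budget_flops:.2f} ms per frame at 100 GFLOPS")
```

Per-layer tallies like these show where a network spends its compute budget, which is the kind of information a developer would use when fine-tuning a model for an ultra-low-power target.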

Movidius was established in Ireland around 11 years ago and secured a deal with Google in early 2016 to provide its specialist machine learning chips to upcoming Google devices, including its VR headset technology. It also supplies its system on a chip to power the autonomous capabilities of the popular DJI Phantom drone. With Intel making a determined push into deep learning, AI and augmented reality with its RealSense strategy, the technology and algorithms it acquired with Movidius should give it a competitive edge (pun intended of course) in this emerging area. AI on a Stick is a fascinating concept that could very well accelerate the development of AI/ML applications and services.

“We’re entering an era where devices must be smart and connected,” said Josh Walden, SVP and GM of Intel’s New Technology Group, speaking at the time of the acquisition. “When a device is capable of understanding and responding to its environment, entirely new and unprecedented solutions present themselves.”
