New AI photonic chip can detect, identify and classify two billion images per second

  • Works at close to the speed of light
  • Deep neural network mimics aspects of the human brain
  • Still at proof of concept stage but advancing quickly
  • Potential applications are exciting but concerning

Scientists at the University of Pennsylvania in the US have developed a remarkable, scalable 9.3mm square microchip able both to detect and to classify an image in less than a nanosecond. In other words, it can process almost two billion images a second. It’s both wonderful and scary at the same time. The technology is so fast and efficient because it directly processes the light it receives from an ‘object’, thus obviating the traditional computing need for a big, separate memory unit. It also does away with other time- and energy-consuming procedures and mechanisms, namely the conversion of optical signals to electrical pulses, the conversion of input data to binary format, and the limitations of clock-based computations. The result is massively increased processing speed.
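For a rough sense of where the “almost two billion images a second” figure comes from, the sketch below simply inverts the sub-570 picosecond per-image classification time reported in the paper; it is back-of-envelope arithmetic in Python, not anything that runs on the chip itself.

```python
# Back-of-envelope check of the headline throughput: invert the per-image
# classification time of under 570 ps quoted from the Nature paper.
classification_time_s = 570e-12                # 570 picoseconds per image
images_per_second = 1 / classification_time_s
print(f"{images_per_second:.2e} images per second")  # ~1.75e9, i.e. almost two billion
```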

News of the breakthrough has been reported in Penn Today, the university’s daily online newsletter, and, in greater detail, in Nature, the venerable and influential UK-based weekly scientific journal. Founded in 1869, Nature features peer-reviewed research from a variety of academic disciplines, predominantly in science and technology, and is one of the world's most cited scientific journals.

The paper’s abstract explains: “Deep neural networks with applications from computer vision to medical diagnosis are commonly implemented using clock-based processors in which computation speed is mainly limited by the clock frequency and the memory access time. In the optical domain, despite advances in photonic computation, the lack of scalable on-chip optical non-linearity and the loss of photonic devices limit the scalability of optical deep networks.”

It adds, “Here we report an integrated end-to-end photonic deep neural network (PDNN) that performs sub-nanosecond image classification through direct processing of the optical waves impinging on the on-chip pixel array as they propagate through layers of neurons. In each neuron, linear computation is performed optically and the non-linear activation function is realized opto-electronically, allowing a classification time of under 570 ps, which is comparable with a single clock cycle of state-of-the-art digital platforms.” 

Furthermore, “A uniformly distributed supply light provides the same per-neuron optical output range, allowing scalability to large-scale PDNNs. Two-class and four-class classification of handwritten letters with accuracies higher than 93.8% and 89.8%, respectively, is demonstrated. Direct, clock-less processing of optical data eliminates analogue-to-digital conversion and the requirement for a large memory module, allowing faster and more energy efficient neural networks for the next generations of deep learning systems.” 

The chip in some ways mimics the make-up of the human brain. Optical neurons are interconnected via waveguides to form a deep network of many “neuron layers” through which the data passes: as the light carrying an input image propagates through these layers, the network “sees” the image and classifies it into a learned category.
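For readers who think in code, here is a minimal, purely illustrative sketch of that layered structure, written as an ordinary digital network in Python with NumPy: each neuron forms a weighted (linear) combination of its inputs and then applies a non-linear activation, layer after layer, until the final layer yields a class score. On the actual chip the linear step is performed with light in waveguides and the non-linearity opto-electronically; the layer sizes, random weights and ReLU activation below are assumptions made only for illustration.

```python
import numpy as np

def layer(x, weights):
    linear = weights @ x            # linear combination of inputs (done optically on the chip)
    return np.maximum(linear, 0.0)  # non-linear activation (done opto-electronically on the chip)

rng = np.random.default_rng(0)
pixels = rng.random(30)                          # stand-in for the on-chip pixel array
w1 = rng.standard_normal((10, 30))               # illustrative first-layer weights
w2 = rng.standard_normal((4, 10))                # illustrative output-layer weights

scores = layer(layer(pixels, w1), w2)            # propagate through two neuron layers
print("predicted class index:", int(np.argmax(scores)))  # e.g. one of four learned categories
```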

The University of Pennsylvania research team is led by Firooz Aflatouni, associate professor in electrical and systems engineering, working with his colleagues, postdoctoral fellow Farshid Ashtiani and graduate student Alexander Geers. The group tested the new photonic chip by classifying one set of 216 handwritten letters as either ‘p’ or ‘d’, and another set of 432 letters as ‘p’, ‘d’, ‘a’ or ‘t’.

Repeated experiments showed that the chip achieved an accuracy of 93.8% on the two-class task and 89.8% on the four-class task. Aflatouni told the publication IEEE Spectrum, “Computation-by-propagation, where the computation takes place as the wave propagates through a medium, can perform computation at the speed of light.” And therein lies the promise for the not-too-distant future.
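As a reminder of what those percentages mean in practice, the toy snippet below computes classification accuracy in the conventional way, as the fraction of test letters assigned the correct label. The five-letter example is invented purely for illustration; the real tests used 216 letters for the two-class task and 432 for the four-class task.

```python
# Toy illustration of how a classification accuracy figure is computed:
# the fraction of test items whose predicted label matches the true label.
def accuracy(predicted, actual):
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Invented five-letter example (the real two-class test set held 216 letters).
true_labels      = ['p', 'd', 'p', 'p', 'd']
predicted_labels = ['p', 'd', 'd', 'p', 'd']
print(accuracy(predicted_labels, true_labels))  # 0.8, i.e. 80% accuracy
```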

Of course, it’s early days and the experimental chip is a proof of concept (PoC) rather than anything approaching a commercially available product. However, development work continues apace and hopes are high that the technology will have far-reaching and game-changing effects within the next few years. 

Unfortunately, though, given the potential power of this new iteration of AI and the apparently unstoppable trend of dictatorships building and imposing technological dystopias in various parts of the world, there is every likelihood that some of those effects will be malign, used to control individuals and to dominate and repress entire societies rather than to benefit mankind.

Strong AI, still a pipe dream: Narrow AI, everywhere in daily life

Generically, artificial intelligence (AI) is divided into two basic categories: narrow AI and artificial general intelligence (AGI). The latter is also referred to as “strong AI” and is classified as a machine with general intelligence at, or above, that of humans, able to use that general intelligence to solve any problem: think science fiction androids with a full range of cognitive abilities – which, for now, remain fiction.

The quest to define a universal algorithm for learning and acting in any environment is not new, but creating a machine with AGI continues to be incredibly difficult and remains far in the future, for which some people believe mankind should be very grateful.

Meanwhile, narrow AI is everywhere. It works within limited parameters and contexts, and manifests itself as an often impressive but very limited subset of human intelligence. What narrow AI can do these days (and it’s improving daily) is perform a single task, or a limited set of tasks, with such efficiency, ad infinitum or ad nauseam, that the machine seems to be intelligent but isn’t. Examples of narrow AI that we’d all recognise include personal assistants, such as Alexa and Siri, self-driving vehicles, Google search and, of course, image recognition software.
