Devices

Google reveals the artificial intelligence in Pixel 2’s Visual Core

By Ian Scales

Feb 6, 2018

HDR on a Samsung TV. Source: Samsung

  • Google majors on the camera
  • First time producing its own custom silicon, in the Pixel 2
  • More AI to come?

Google included a co-processor capable of three trillion (yes, with a ‘tr’) operations per second in the Pixel 2 smartphone it launched last year. The chip, called the Pixel Visual Core, was Google’s first foray into producing its own silicon, and it probably won’t be the last.

The Pixel Visual Core has lain fallow since the Pixel 2’s launch, but now, with a flourish, Google has announced its first-stage intentions.

All that horsepower is to be aimed at prettifying pictures shot on the Pixel, using some powerful post-processing software running on the phone. The phone already uses the Pixel Visual Core to beautify photos shot with the native camera app, but to get the full commercial benefit Google needs to open the capability up to third parties. Why? Because when it comes to photo sharing, it’s all about Snapchat, Instagram and WhatsApp. Google hopes that photo-obsessed users will fall over themselves to buy the Pixel if the picture quality can be shown off in those apps, so it has been working with the key photo players to build the capability into their apps. Snapchat, Instagram and WhatsApp are therefore the first third parties to support the post-processing algorithm, called HDR+ (a beefed-up take on High Dynamic Range imaging). Others are expected to follow.

HDR+ drives the eight-core Pixel Visual Core hard. Google says that while it might have been possible to run the algorithm on the phone’s main Snapdragon processor, doing so would have drained the battery (and possibly heated the phone to an alarming degree).

HDR itself has been around for a while. HDR+ is, as the name suggests, a step up. It doesn’t just floozy about doing a bit of colour correction and the like: it captures up to ten frames of the same scene, breaks each frame into small tiles, aligns and compares the corresponding tiles, and then merges everything back together, ironing out noise and visual artefacts caused by differences in luminance from one part of the picture to another. Google says the end result is pictures with a low level of noise and a high level of detail, especially in low-light situations. The sketch below gives a feel for the tile-and-merge idea.
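To make that concrete, here is a minimal Python/NumPy sketch of tile-based burst merging in the same spirit. It is purely illustrative and emphatically not Google’s implementation: the tile size, the alignment search window, the ten-frame burst and the plain averaging step are all assumptions, and the real HDR+ pipeline adds robust weighting, tone mapping and much more besides.

    # Illustrative tile-based burst merge in the spirit of HDR+.
    # NOT Google's code: TILE, SEARCH and plain averaging are assumptions.
    import numpy as np

    TILE = 16    # tile edge in pixels (assumed)
    SEARCH = 4   # +/- pixel range searched when aligning a tile (assumed)

    def align_tile(ref_tile, frame, y, x):
        # Find the (dy, dx) offset within +/-SEARCH whose tile in
        # `frame` best matches ref_tile (sum of squared differences).
        best_err, best_off = None, (0, 0)
        h, w = frame.shape
        for dy in range(-SEARCH, SEARCH + 1):
            for dx in range(-SEARCH, SEARCH + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + TILE > h or xx + TILE > w:
                    continue
                cand = frame[yy:yy + TILE, xx:xx + TILE].astype(np.float32)
                err = float(np.sum((cand - ref_tile) ** 2))
                if best_err is None or err < best_err:
                    best_err, best_off = err, (dy, dx)
        return best_off

    def merge_burst(frames):
        # Merge same-scene greyscale frames tile by tile: align each
        # tile against the reference frame, then average to cut noise.
        ref = frames[0].astype(np.float32)
        out = ref.copy()
        h, w = ref.shape
        for y in range(0, h - TILE + 1, TILE):
            for x in range(0, w - TILE + 1, TILE):
                ref_tile = ref[y:y + TILE, x:x + TILE]
                acc, n = ref_tile.copy(), 1
                for frame in frames[1:]:
                    dy, dx = align_tile(ref_tile, frame, y, x)
                    acc += frame[y + dy:y + dy + TILE,
                                 x + dx:x + dx + TILE].astype(np.float32)
                    n += 1
                out[y:y + TILE, x:x + TILE] = acc / n
        return np.clip(out, 0, 255).astype(np.uint8)

    # Ten noisy captures of one synthetic scene in, one cleaner frame out.
    rng = np.random.default_rng(0)
    scene = rng.integers(0, 200, size=(64, 64)).astype(np.float32)
    burst = [np.clip(scene + rng.normal(0, 20, scene.shape), 0, 255)
             for _ in range(10)]
    merged = merge_burst(burst)
    print("noise std, single frame:", np.std(burst[0] - scene))
    print("noise std, merged:      ", np.std(merged - scene))

Run on ten synthetically noisy captures of the same greyscale scene, the merged frame comes out with markedly lower noise than any single shot - which is precisely the effect Google is claiming for low-light photography.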

If this seems like a lot of horsepower to waste on something as trivial as pepping up snapshots, then you obviously don’t have teenage sons or daughters. The importance of the selfie profile picture has to be seen to be believed - they’ll walk over hot coals to get a flawless shot.

But Google isn’t going to stop with HDR+ and snapshots. The next big goal will be to apply the artificial intelligence that HDR+ exhibits (in the way it ‘learns’ how to rectify photographs) to other tasks. This is so-called ‘on-board AI’ - and watch carefully: you can expect Google to use its own silicon to enhance other applications in the same way.
