- Google & Facebook to tackle the violent video problem
- New techniques to be employed to 'fingerprint' footage
- Talk of a 'non-profit' set up to do the sifting
Facebook, Google and possibly other big video sites are working on an automated jihadi video fingerprinting system so that once a violent video is taken down it can’t simply be reposted again… at least not for long. That sounds like a sensible and perhaps even long overdue counter-measure to combat vile recruitment propaganda online... the sort that wallows in beheadings, burnings alive and so on.
Certainly governments - particularly the Obama administration - have long been urging the online giants to get to grips with the jihadi video problem as the online influence of ISIS has grown more pressing and the whack-a-mole approach - where users flag up offensive content to have it taken down - proves less and less adequate to the task. Postings simply proliferate in response.
Now, according to Reuters, there is a concerted move for the companies to work together to develop automatic identification and removal processes, built on the systems developed to ID copyrighted content. While that technology can identify videos pre-fingerprinted by their owners, it’s useless when it comes to combating ISIS and the like, since those posters are hardly likely to fingerprint their own material.
So a way of deriving fingerprints from the videos themselves is needed. Observers suggest one avenue of approach might be PhotoDNA from Microsoft, which is used by the US National Center for Missing and Exploited Children to identify child pornography images.
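PhotoDNA itself is proprietary and its internals aren’t public, but the general idea behind such perceptual fingerprinting can be sketched. Here, purely as an illustration (none of this is confirmed as what the companies actually run), is a "difference hash", one of the simplest perceptual hashing techniques, applied to a single video frame in Python:

```python
from PIL import Image  # pip install Pillow

def dhash(frame_path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit 'difference hash' of an image.

    Unlike a cryptographic hash, small changes to the picture
    (re-encoding, mild cropping, a logo overlay) only flip a few
    bits, so near-duplicates can still be matched.
    """
    # Shrink to (hash_size+1) x hash_size greyscale pixels,
    # throwing away the detail that re-encoding would alter anyway.
    img = Image.open(frame_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())

    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            # Each bit records whether brightness rises or falls
            # between horizontally adjacent pixels.
            bits = (bits << 1) | (1 if left > right else 0)
    return bits
```

A whole video would presumably be fingerprinted as a sequence of such frame hashes sampled at intervals, so that clips and re-edits still share most of their fingerprint with the original.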
Most likely a range of techniques will have to work in harness, along with a large shared database against which suspect content can be checked on the fly. And while full automation (without any human intervention) is likely the goal, the complexity of the problem, and the posters’ ability to change tactics to thwart detection, means some employees will probably still be left with the unenviable task of watching videos to oversee the process (which sounds like the worst job in the world).
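To make that concrete, a shared-database check might look something like the hypothetical sketch below. The thresholds, the in-memory set standing in for the shared database, and the human-review fallback are all our own illustrative assumptions:

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

# Stand-in for the shared database of fingerprints of already-removed
# videos (in reality this would be a service shared across companies,
# not an in-memory set).
KNOWN_BAD_HASHES = {0x3C3C66C38199FF00}

def screen_upload(frame_hashes, max_distance=5, min_matches=3):
    """Flag an upload if enough of its frames sit within a small
    Hamming distance of any known fingerprint.

    Returns 'block', 'review' or 'allow'. The thresholds are
    illustrative; tuning them against evasion tactics is exactly
    why humans stay in the loop.
    """
    matches = sum(
        1
        for h in frame_hashes
        if any(hamming(h, known) <= max_distance for known in KNOWN_BAD_HASHES)
    )
    if matches >= min_matches:
        return "block"    # near-certain repost of removed footage
    if matches > 0:
        return "review"   # borderline: route to a human moderator
    return "allow"
```

The tolerance for a few flipped bits is the whole point: a re-encoded or slightly cropped repost no longer hashes identically, but it still lands close enough to be caught.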
To keep ahead of the terrorists, the parties involved are being deliberately vague about the techniques they use now and those they are likely to use in future, but it’s understood that discussion has included the possibility of setting up a non-profit to do the work, refine the techniques and maintain the database.
Hold on there, remember free speech
But there’s an obvious ‘cat out of the bag’ downside to automatic detection that has the reluctant censors such as Google and Facebook worried, hence the furtive implementation.
The problem is that despotic governments will inevitably urge the big players to turn the same technique on material they just don’t like. It’s the old mission creep problem - where do you draw the line between material that we can all, mostly, agree shouldn’t be available and material that is arguably legitimate political free speech but is deemed dangerous/insulting/divisive by a particular regime?
Answers in code on a postcard.