
Policy & Regulation

Video giants plot automatic terrorist video detection

By Ian Scales

Jun 27, 2016

via Flickr © quapan (CC BY 2.0)

  • Google & Facebook to tackle the violent video problem
  • New techniques to be employed to 'fingerprint' footage
  • Talk of a 'non-profit' set up to do the sifting

Facebook, Google and possibly other big video sites are working on an automated jihadi video fingerprinting system so that once a violent video is taken down it can’t simply be reposted again… at least not for long. That sounds like a sensible and perhaps even long overdue counter-measure to combat vile recruitment propaganda online... the sort that wallows in beheadings, burnings alive and so on.

Certainly governments - particularly the Obama administration - have long been urging the online giants to get to grips with the jihadi video problem as the online influence of ISIS has become more pressing and the whack-a-mole approach - where users flag up offensive content to have it taken down - becomes less adequate to the task. Postings simply proliferate in response.

Now, according to Reuters, there is a concerted move for the companies to work together to develop automatic identification and removal processes, built on the systems developed to identify copyrighted content. While that technology can spot videos pre-fingerprinted by their rights holders, it’s useless when it comes to combatting ISIS and the like, since terrorist groups are hardly likely to fingerprint their own material.

So a way of defining existing in-video fingerprints has to be used. Observers suggest one avenue of approach might be PhotoDNA from Microsoft, which is used by the US National Center for Missing and Exploited Children to identify child pornography images.  
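The article doesn’t say which fingerprinting techniques are under discussion, but the general idea behind systems like PhotoDNA is a perceptual hash: a compact signature derived from the visual content itself, so that re-encoded or slightly altered copies still produce a nearly identical signature. A minimal sketch of one well-known variant (an "average hash", which is an illustrative stand-in, not Microsoft’s actual algorithm):

```python
def average_hash(frame, size=8):
    """Compute a simple perceptual 'average hash' of a grayscale frame.

    frame: a 2D list of pixel intensities (0-255). A real pipeline would
    first downscale the video frame to size x size; here we assume the
    frame is already that small, for brevity.
    """
    pixels = [p for row in frame for p in row]
    avg = sum(pixels) / len(pixels)
    # Each bit of the hash records whether a pixel is brighter than the
    # frame's average intensity - robust to uniform brightness shifts.
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)


def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# A hypothetical 8x8 frame and a uniformly brightened copy of it.
frame = [[(r * 8 + c) * 3 for c in range(8)] for r in range(8)]
brighter = [[p + 10 for p in row] for row in frame]

# Uniform brightening leaves every above/below-average bit unchanged,
# so the two fingerprints match exactly.
print(hamming(average_hash(frame), average_hash(brighter)))  # → 0
```

Because matching tolerates a few differing bits rather than requiring byte-identical files, trivial edits such as re-compression or watermarking don’t defeat it - which is precisely what the copyright-ID systems lack when the "owner" won’t cooperate.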

Most likely there will have to be a range of techniques in harness, not to mention a large shared database against which suspected content can be checked on the fly. And while full automation (without any human intervention) is likely to be a goal, the complexity of the problem and the posters’ ability to change tactics to thwart detection mean at least some employees will probably still be left with the unenviable task of watching videos to oversee the process (sounds like the worst job in the world).
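The shared-database step described above is conceptually a nearest-neighbour lookup: a newly uploaded video’s fingerprint is compared against every known fingerprint, and a match within some bit-distance threshold triggers review or removal. A hedged sketch, with hypothetical video IDs and an assumed 64-bit fingerprint format:

```python
def match_known(suspect_hash, known_hashes, max_distance=5):
    """Return IDs of known fingerprints within max_distance bits of the
    suspect's fingerprint. Linear scan for clarity; a production system
    would use an index structure to search billions of entries.
    """
    return [
        video_id
        for video_id, h in known_hashes.items()
        if bin(suspect_hash ^ h).count("1") <= max_distance
    ]


# Hypothetical shared database of previously flagged fingerprints.
known = {
    "clip-0001": 0xF0F0F0F0F0F0F0F0,
    "clip-0002": 0x123456789ABCDEF0,
}

# A re-encoded copy whose fingerprint differs from clip-0001 by one bit.
suspect = 0xF0F0F0F0F0F0F0F1
print(match_known(suspect, known))  # → ['clip-0001']
```

The threshold is the policy knob: set it too tight and trivially altered reposts slip through; too loose and legitimate footage gets swept up - which feeds directly into the free-speech worry discussed below.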

To keep ahead of the terrorists the involved parties are being deliberately vague about the techniques they use now and the ones they are likely to use in the future, but it’s understood that discussion has included the possibility of setting up a non-profit to do the work, refine the techniques and maintain the database.

Hold on there, remember free speech

But there’s an obvious ‘cat out of the bag’ downside to automatic detection that has the reluctant censors such as Google and Facebook worried, hence the furtive implementation.

The problem is that the technique’s use will inevitably be urged upon the big players to remove material that despotic governments just don’t like. It’s the old mission creep problem - where do you draw the line between material that we can all, mostly, agree shouldn’t be available and material that is arguably just legitimate political free speech but is deemed dangerous/insulting/divisive by a particular regime?

Answers in code on a postcard.



