Google harried into introducing fact-checking system for search


via Flickr © hragv (CC BY-ND 2.0)

  • Google tackles fake news problem with a tagging system
  • Invites outside organisations to rate 'truth' in stories
  • System rolling out now

Stung by the chorus of concern about fake news in the wake of the US presidential election, Google is going to roll out a ‘Fact Check’ tagging system that can be placed against news search results.

The tags won’t be about ‘calling out’ obvious falsehoods, but will attempt to assign degrees of veracity to fact-based (rather than opinion) pieces, and the labels will appear next to news results in the way ‘highly cited’ appears today. Google is apparently going to adopt the sliding-scale approach used by fact-checking sites such as PolitiFact, to reflect the complexity and subjectivity inherent in the process. It’s thought the tags might register "True," "Mostly False," or "Pants on Fire!"

The problem, of course, is ‘who decides?’ Facebook recently approached the same problem by crowdsourcing a judgement, not on each individual story but on the source of the story. Content providers that were consistently marked down by readers for ‘clickbait’ stories (as the problem was described just six months ago) found their output pushed further down Facebook’s news hierarchy. There was also a way back for publishers who cleaned up their act: they could rise again as their output quality improved.

I am not sure whether this approach has done much to remove the fake stories on Facebook. The other day I got the most extraordinary one about Bill Gates solving income inequality by inventing a trading algorithm that everyone could use and win with, thus spreading wealth around more evenly. Talk about the King Midas touch updated!

Google’s approach is to tag stories, but to outsource the tagging judgements to well-established fact-checking organisations such as Snopes, and to have a range of fact checkers checking the same stories so that a truth consensus of sorts emerges.

According to Google, “Even though differing conclusions may be presented, we think it’s still helpful for people to understand the degree of consensus around a particular claim and have clear information on which sources agree."

The clear danger with this approach is that, rather than ‘calling out’ alternative facts as ‘lies’, it simply presents readers with fact checkers of opposing world views. As happened all the way up to and beyond the election, readers then pick the fact checkers that conform to their own world view, fortified by the notion that if a story has been fact-checked by Breitbart it is clearly fit for human consumption. After all, if you believe in conspiracy theories (and in Google being part of the conspiracy), a 'pants on fire' rating is likely to be seen as a badge of honour.
