One Small File Can Help Distinguish Between Real & Fake News Online

JournalList.net is a non-profit membership organization that created and currently maintains the trust.txt reference document. Trust.txt is a simple text file intended for worldwide adoption as a way to distinguish legitimate news organizations from illegitimate ones.

The idea of a universal file that achieves mass adoption across digital platforms is not a new one. As early as 1994, before the internet reached widespread use, a company called Nexor originated the robots.txt reference document to help early search engines, like WebCrawler, Lycos and AltaVista, crawl and index a website's pages correctly. Today it is estimated that over 90% of global sites have this file placed in their root directory, where anyone can see it (https://www.nytimes.com/robots.txt, https://www.cnn.com/robots.txt). Just a few years ago, the ads.txt file was created by the Interactive Advertising Bureau to help online ad buyers detect fraud.
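For readers who have never opened one of these files, a minimal robots.txt is just a few lines of plain text served from a site's root directory. The sketch below uses placeholder paths and the reserved example.com domain; real files vary widely:

```
# robots.txt — served at https://example.com/robots.txt
User-agent: *                 # these rules apply to all crawlers
Disallow: /private/           # ask crawlers not to index this path
Sitemap: https://example.com/sitemap.xml
```

The robots.txt convention is purely advisory: well-behaved crawlers fetch the file and honor its rules, but nothing technically prevents a crawler from ignoring it. Trust.txt follows the same "small file at a well-known location" pattern, applied to signaling trust rather than crawl permissions.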

Now comes Scott Yates, a lifelong journalist and entrepreneur. For the past three years, he has devoted most of his time and energy to establishing JournalList.net, with a mission to introduce and promote the adoption of a new reference document: trust.txt. This is a file that news publishers add to their websites to signal their affiliations with other trusted news organizations.
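To make the affiliation signal concrete, a trust.txt file consists of variable=value lines declaring relationships. The sketch below is illustrative only: the field names (belongto=, control=, social=, contact=) are drawn from the JournalList trust.txt specification, while the domains and organizations are hypothetical placeholders:

```
# trust.txt — served at https://example-news.com/trust.txt (illustrative)
# Associations this publisher belongs to:
belongto=https://www.example-press-association.org/
# Other publications this organization controls:
control=https://weekly.example-news.com/
# Official accounts and contact points:
social=https://twitter.com/examplenews
contact=https://example-news.com/contact
```

The trust comes from reciprocity: an association lists its members in its own trust.txt, and each member lists the association in theirs, so either claim can be verified by fetching the other side's file.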

In this 102nd episode of “E&P Reports,” Publisher Mike Blinder speaks with JournalList.net founder and Executive Director Scott Yates about the organization's mission to have the trust.txt reference document adopted and utilized by journalistic sites worldwide as a means of establishing their authority as trusted, credible, legitimate news outlets.
