Shoptalk

Facebook’s ‘Fake News’ Problem Won’t Be Solved by Banning Trolls

If Facebook wants to show us they’re opposed to hate speech, banning individual trolls is about as effective as a Band-Aid on a sucking chest wound—painful, messy, and worse than useless.

In 2016, when I was managing editor at fact-checking site Snopes.com, we agreed to partner with Facebook to stop a major problem that was rapidly approaching crisis mode. This was back when disinformation was still called “fake news,” and no one yet had any idea of the scope or power of what they were seeing on social media.

I had some trepidation, but I thought it might be a good-faith effort to stop stories that were using fearmongering and lies to corrode not just policy but the most intimate social relationships. I thought we had a chance of changing the system from within.

Instead, I discovered Facebook’s role in a genocide in the western Myanmar state of Rakhine—which, at the time, the company appeared to downplay. I saw this not just as a massive human rights violation on its own, but also a terrible warning for what was in store for the rest of the world.

The model for what happened in Myanmar was simple: Facebook struck deals with local mobile phone companies so that using its platform didn’t count against customers’ data plans. Many people in the country then got their news directly from Facebook, making it fertile ground for algorithmic experimentation on individual and crowd behavior. “Burma is experiencing an ugly renaissance of genocidal propaganda,” Matthew Smith, co-founder of human rights organization Fortify Rights, said in 2017. “And it spreads like wildfire on Facebook.”

False stories about Myanmar’s Rohingya Muslims played on a decades-long pattern of discrimination against the ethnic group in the region. Rumors spread that they were rapists and thieves, and posts urged they be shot or exterminated.

The results have been devastating. These stories, many of which were pushed on Facebook by government officials, were used to justify driving hundreds of thousands of Rohingya from their homes, sparking a massive refugee crisis. Since then, Facebook has taken some steps to control the vitriol on its platform, but the efforts so far cannot undo the damage.

The true problem with Facebook and other social media is a hellish combination of disinformation, an ever-weakening journalism industry, algorithmic clustering, and sophisticated dark advertising that uses psychographic research to bombard already-identified users with false or frightening imagery, all in the service of “engagement” revenue.

Which brings me to Facebook’s most recent ban of high-profile individuals who it says spread anti-Semitic content. While I applaud moderation of all corrosive content and other consequences for spreading hateful speech, banning a few people would not have been a solution even if the network had implemented it years ago.

You can already see that those same individuals are able to parlay this ban into charges of personal censorship that are ludicrous, reactionary—and incredibly effective in certain quarters.

What is particularly insidious is how this all relies on a fundamental misunderstanding of free speech. When someone speaks in a way that intimidates another individual or group into silence, then that speech ceases to be free. Social media hasn’t learned that yet.

That Facebook has taken any action speaks volumes about how far we have come. But if we don’t keep pushing, it will end here and we’ll all be stuck in a toxic soup of hoaxes and fake stories in a dystopian alternate universe created by algorithms. This will permanently destroy democracies around the world.

In the meantime, if Facebook (and Twitter, and YouTube, and others) truly wish to change for the better, here is what they must do.

Hire more moderators, ethicists, and historians. Train them to be ruthless about pruning back disinformation and propaganda; supporting corrosive disinformation that silences others is not supporting free speech. Make users opt in to social media algorithms, show us exactly why we see the stories and posts we do, and give us the power to adjust them.

Facebook must atone for its sins toward journalism. Even before its “pivot to video” metrics fraud—hugely destructive, whether it was intentional or not—Facebook devastated small news organizations that rely on ad revenue and goodwill to survive.

My suggestion: Stop paying fact-checkers directly, which has effectively politicized fact-checking. Instead, put that money into an independent and transparent foundation to be distributed to newsrooms as annual grants. While $100,000 a year might not mean much to Facebook, it could be everything to a small-town newspaper.

This will require soul-searching and a large cultural shift in Silicon Valley. But humans aren’t faceless meat-sacks who exist solely as moneymakers for Big Tech. Every human deserves a basic modicum of human dignity, not rank exploitation.

Brooke Binkowski is a longtime award-winning journalist and managing editor of TruthOrFiction.com. Before Truth or Fiction, she ran Snopes.com and spent more than a decade covering immigration and the U.S.-Mexico border. This edited piece was originally published at USA Today.
