Critical Thinking: With the Proliferation of ‘Fake News’ Appearing on Social Media Sites, Should Companies Like Facebook and Twitter Be Protected?


Section 230 of the 1996 Communications Decency Act protects websites from liability for content posted by third parties. With the proliferation of "fake news" appearing on social media sites such as Facebook and Twitter, should the law be reexamined?

 

Parker Richards, 21, junior, Dartmouth College (Hanover, N.H.)

Richards currently serves as the editorial page editor for the student-run newspaper, The Dartmouth. He has also written for the Nantucket Inquirer and Mirror in Massachusetts and the New York Observer.

Speech is not action and speakers are not at fault for the crimes of their audience. That is a fundamental principle of a free press. I could not, for instance, be held legally liable if, upon reading this article, you decided to burn down your local fast food joint. So too is it unreasonable to hold websites liable for hosting political content that might inspire individuals to act violently.

Recent lawsuits against Facebook, Google and other social media platforms claim that, by allowing content ranging from fake news to Islamic State propaganda to be shared, the companies must accept liability for the actions of the individuals who buy into those doctrines. Even without the protection of Section 230 of the 1996 Communications Decency Act, it would be absurd under our First Amendment protections to call for legal damages against the publisher or host website of any kind of press coverage, political writings or videos. Such liability would fundamentally undermine the freedom of the press intrinsic to American democracy.

Regardless of the protections of Section 230, most lawsuits against fake news or even Islamic State-produced recruitment videos would fail the Brandenburg test, the Supreme Court precedent stemming from the 1969 court case Brandenburg v. Ohio. In that historic case, the court held that political speech must incite or produce imminent lawless action in order to fall outside the limits of the First Amendment.

Even fake news, while potentially dangerous, must not be made illegal, for such an action would impugn the dignity of a society based on the freedom to express ideas—and to publish those ideas. No matter the unsavory content of what we see on our laptops, the responsibility lies with us as readers to be discerning; we are the ones failing, not Facebook or Google.

The late, great Justice Hugo Black once said that “an unconditional right to say what one pleases about public affairs is what I consider to be the minimum guarantee of the First Amendment.” That includes speech that you and I may not like, including fake news and, yes, videos and articles posted by terrorist organizations. Every restriction put on the press, every freedom stripped from our speech, constitutes in a truly absolute sense an attack on what defines America, what makes us able to claim to be “the home of the free.”

 

Jamie Kelly, 37, managing editor, Williston Herald (Williston, N.D.) (Photo by Elizabeth Hackenburg)

Kelly spent a decade in journalism before taking a detour into public relations. He returned to the industry last year as the managing editor of the Herald.

 

When my newsroom got a call a few weeks ago that five children had died in a school bus crash, we were terrified.

The source? A Facebook post.

After a quick web search, I found the site the post referred to, and it was full of stories about fatal crashes, with the only difference being the name of the city in the headline. So it was fake news.

We were outraged.

The problem is that outrage rarely makes for good public policy.

That’s one reason I’m wary about any changes to Section 230 of the 1996 Communications Decency Act. That section, also known as the Safe Harbor provision, protects online platforms from being prosecuted or sued for what third parties post on those platforms.

Fake news isn’t anything new—it far predates the internet—but when its producers take advantage of social media sites or search engines, they can make their fake news reach far more people than anyone in 1996 could have imagined.

But would changing the law so that websites become responsible for what others publish really curb the spread of fake news? And even if it would, what else might it do?

Let me try an admittedly imperfect analogy. Tabloids like the National Enquirer regularly run stories that are either libelous or nearly so. What if the stores and newsstands that carried those tabloids were liable for the offending stories?

Without a distribution method, the tabloids might go out of business, or vendors might decide that carrying any publication is too great a risk.

Websites are not exactly the same as those vendors, but the analogy is still useful.

This might be overly optimistic, but it’s actually not that hard to figure out whether a story is a hoax through fact checking and critical thinking. I very much hope that people can be taught that, but on the other hand, we’ve all been falling for hoaxes for centuries.

Another problem is that for the most part fake news isn’t illegal. The First Amendment protects lies just as much as it protects truth, and libel falls under pre-existing laws.

I’m leery of leaving the solution entirely to the companies that have helped cause the problem in the first place. But I’m far more leery of changing the law in a way that could stifle speech or require companies to filter what is posted online.

Published: March 14, 2017

One thought on “Critical Thinking: With the Proliferation of ‘Fake News’ Appearing on Social Media Sites, Should Companies Like Facebook and Twitter Be Protected?”

March 14, 2017 at 7:04 am

Publishers also have Section 230 immunity for information on their websites, like reader comments, where some people have threatened others or made derogatory remarks. The difference between newspaper publishers and Facebook or Google, for example, is that publishers generally take down these offending or false comments when they receive a complaint. Platform distributors believe they are not “media” companies and therefore have no control over “fake news,” claiming that the “algorithm” decides. But the algorithm is developed by human beings. Both of these platform distributors can do a better job of downgrading fake news and uplifting real news produced by real journalists. Our democracy would be a lot better off if they did.
