An incoming phone call on your landline would knock you offline. Netscape was the most used browser. Amazon was in its infancy and sold only books. Google, Facebook and Twitter had yet to spring into existence. In 1996, the internet was an entirely different animal than it is today. It was in 1996 that the Communications Decency Act was signed into law by President Bill Clinton. Contained in the law is a provision titled Section 230.
Section 230 laid the groundwork for much of the internet we know today. In a nutshell, Section 230 provides that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The effect is that platforms like Facebook, Twitter, Reddit and YouTube are largely protected from legal liability arising from content published by users. These legal protections allowed social media platforms to grow and innovate.
It is unlikely that in 1996, Congress could have conceived of exactly what the internet would look like today or how large a part social media platforms would play in our personal and civic lives. Innovation has proceeded at a breakneck pace, outrunning regulation in some ways. Now, the bruising political battles of the past four years, many of which have been waged online, have given rise to a new appetite on both sides of the aisle to regulate platforms. That is where bipartisan agreement ends, however.
Democrats and Republicans have signaled distinct motivations for wanting to revisit Section 230. In practical terms, the provision largely shields platforms from incurring liability should a user post content that would otherwise create liability for a publisher. Importantly, Section 230 also provides platforms with legal protection when they choose to moderate content. In broad strokes, Democrats believe that platforms aren't doing enough to moderate disinformation and hate speech, while Republicans argue that platforms are censoring conservative perspectives.
What has resulted so far is a series of largely unproductive hearings with tech CEOs. Their opening statements provided some insight into Big Tech's thinking. Facebook's Mark Zuckerberg argued that the platforms should be more regulated, specifically focusing on transparency around the process by which they moderate content. Twitter CEO Jack Dorsey strongly cautioned against removing Section 230, while also leaning heavily on transparency as a solution to questions about moderation. Google CEO Sundar Pichai also advocated keeping Section 230 in place, citing it as fundamental to Google's mission of providing information to users. Beyond that, questioning during the hearings was more about partisan posturing than unearthing solutions to very real issues.
To illustrate the problem, just this past week former Trump advisor Steve Bannon was banned from Twitter for calling for the beheadings of several people, including Dr. Anthony Fauci. Mark Zuckerberg, on the other hand, reportedly told his team that Bannon's comments were not enough to boot him from the platform. The point is that there is a clear lack of both transparency and a consistent framework, even as tech CEOs vaguely advocate for both while attempting to answer charges of censorship from the right.
From a regulatory perspective, however, we have a pretty good idea of where the chips would likely fall regarding Section 230. Under a Democratic administration and Congress, tech platforms would face increased pressure to moderate and fact-check content. But it's unclear which direction the winds might blow under the incoming Biden Administration, which will likely contend with a divided Congress that is approaching the problem from entirely different political angles.
What If Platforms Are Treated Like Publishers?
Section 230 does not treat publishers, like local news sites, and platforms, like Facebook and Twitter, the same way. Publishers are protected only insofar as they have a comments section or similar function that allows users to post their own content. The content that journalists create is not shielded from liability. They are held to a higher standard.
Section 230 allowed social media platforms to grow their user base and develop into hubs of information. As social media platforms grew, they began to monetize their platforms, like traditional publishers, with advertising. The problem for publishers is that the playing field wasn't level. Publishers, by and large, are required to fact-check and go through an editorial process that consumes time and resources. Platforms, on the other hand, act as a conduit for content created by others. Add in algorithms, design and, yes, content development, and you have something that was able to pull traffic away from traditional publishers without bearing many of the same legal responsibilities.
Historically, platforms didn't do much in the way of human moderation. Then the 2016 presidential election happened, the Cambridge Analytica scandal broke and authorities uncovered foreign election interference on platforms. All of a sudden, the spotlight was on social media in a way it never had been before. Even with somewhat scattered attempts at moderation, there still is no real regulatory framework to tackle the issue, which is where the debate over Section 230 arises.
But what if we treated social media platforms more like publishers? Would it curb speech in a way that is inconsistent with an open internet? I don't think it would. We need to find a balance between responsibility and the free flow of information. The reality is that more and more Americans get their news and other critical information from social media. Without a framework for more effective moderation, Americans are getting an increasingly steady diet of disinformation and misinformation that is given credibility by its spread on social media. This is having a deleterious effect on American civic life and is leading to increased polarization.
Imagine if publishers were able to freely spread disinformation masquerading as "news" without liability. I think most people would agree that would be a bad thing. This debate isn't about censorship; it's about ensuring that platforms have a stronger framework to moderate content. Free and open public debate is a principle that undergirds American democracy, but we cannot have that if we do not have a shared reality.
Michael Shapiro is publisher and CEO of http://TAPinto.net, a network of 80+ franchised online local news sites in New Jersey, New York and Florida.