Would Proposed Changes to Section 230 Cause More Harm Than Good?

Section 230 of the Communications Decency Act limits the legal liability of websites that allow their users to post their own content and also allows those sites to moderate user-generated content. Section 230 obviously protects large social media platforms like Facebook and Twitter, but it also protects less obvious online platforms, such as information sites like Wikipedia, personal blogs and even local newspapers with comments sections, review sites like Yelp, and many others. Jennifer Huddleston, Director of Innovation and Technology Policy at the American Action Forum, points out that by protecting such sites and allowing them to moderate user content, Section 230 has facilitated “an explosion of online speech over the last two decades.”[1] However, Section 230 has come under consistent attack from both Republicans and Democrats, who are unhappy about online platforms’ decisions concerning content moderation. Yet the changes both sides propose making to Section 230 are likely to cause more harm than they remedy.

On one side, Republicans allege that social media platforms simply take advantage of Section 230’s protections to silence conservative voices online. President Trump and other Republicans have called for changes to Section 230 that would require websites to remain viewpoint neutral in all moderation decisions and would allow these sites to be sued for unfairly moderating political content.[2] On the other side, Democrats argue that Section 230 has caused online platforms to react too slowly in removing harmful content or misinformation. These Democrats have called for changes to Section 230 that would subject online platforms to liability if they fail, or are too slow, to take down harmful user content. Yet while both sides profess to value online speech, both approaches would have a massively detrimental impact on its future.

Such changes to Section 230 would likely lead to one of three possible outcomes – (1) online platforms would over-moderate user content and thereby silence legitimate and important speech in the process; (2) online platforms would decide not to moderate user content at all, resulting in an increase in fake news and misinformation; or (3) online platforms would either be crippled by litigation related to user-generated content or would no longer allow user content on their sites at all, which would severely inhibit the free exchange of ideas and information. If we consider how online platforms are likely to react to the proposed changes to Section 230, it becomes clear that those changes are not the best ways of addressing the problems each side seeks to remedy.

If the Democrats’ approach were put into practice, online platforms would feel pressure to over-regulate user content, which would undoubtedly silence legitimate and important speech in the process. After all, it would be much safer for these sites to be over-zealous in their moderation decisions than to run the risk of facing costly litigation and penalties for failing to moderate speech that could even possibly be considered harmful or misleading. While this would likely result in the removal of the content targeted by Democrats, it would almost certainly result in the removal of socially important and valuable information as well. Important social movements – such as #MeToo, BLM, Save the Children, and more – could not have gained the traction they did without Internet users’ ability to freely create and share online. However, much of the information shared by each of those movements faced accusations of being harmful, of spreading fake news, and of misinforming the public. Is it really a stretch to wonder whether any of those movements would have been able to take off the way they did if online platforms were simultaneously facing huge incentives to over-regulate user content? Yet, on the other side, the Republican approach would also produce significant undesirable consequences.

If the Republicans’ approach were put into practice, online platforms would feel the opposite pressure to under-regulate user content, which would undoubtedly allow in harmful content that would otherwise have been removed. While it would be convenient if political speech were clearly distinguishable from politically neutral speech, we live in a world filled with far more gray than black and white. Today, any and all issues can be (and seemingly are) politicized in one way or another. Rather than face costly litigation and penalties for moderating content that could be considered political speech, online platforms are far more likely to under-moderate, or even cease to moderate user content at all. Republicans may only intend to prevent online platforms from unfairly targeting conservative viewpoints, but in reality, their proposals would allow in additional fake news and misinformation that Republicans themselves would otherwise want removed. In the end, online platforms – especially social media sites – would be oversaturated with fake news, misinformation, and possibly even violent or overly sexual content. Yet, the only other option involving changes to Section 230 would be a combination of both the Republicans’ and Democrats’ approaches.

So, in the alternative, let’s say that Republicans and Democrats come to an agreement on changes to Section 230 that address the concerns of both sides. These changes could stipulate liability for online platforms that appear to target political speech of either side as well as liability for failing to remove certain types of harmful content or misinformation. In this scenario, online platforms would face pressures to over- and under-regulate user content at once. These platforms would thus face increased litigation because they could not avoid incidental violations by either over-regulating or under-regulating user content. And while liability could be premised on a failure to make “good faith” efforts, which would eliminate liability for purely incidental violations, such a standard would in no way prevent lawsuits attempting to establish a lack of good faith. Thus, it would not save online platforms from tremendous litigation costs, even if most claims prove unsuccessful.

Some websites – newspapers, blogs, and the like – could feasibly exist without user content, so these sites would likely choose to simply remove their comments sections rather than face the potential litigation and penalties associated with user-generated content. Then there are the larger social media sites, like Facebook and Twitter, that do not have the option of removing all user-generated content because it is central to their entire existence. These sites may be able to survive the costs of increased litigation, but those costs would likely cripple their ability to improve or innovate and would also present massive barriers to entry for any would-be competitors. Smaller sites that also cannot exist without user-generated content, such as Yelp, may not be able to survive the increased litigation costs and may be forced to cease operation. In other words, many sites would simply eliminate the option for user content, while others would cease to operate entirely. The remaining sites would find it much more difficult to innovate, and new would-be competitors would likely be barred from entering and providing new products or services. Thus, such changes to Section 230 would severely inhibit the free exchange of ideas and information and would arguably create more harm than they avoid.

While the issues targeted by both Republicans and Democrats may be valid and worth our best efforts to remedy, it is time for both sides to open up to the idea that weakening the liability shield under Section 230 may not be the best way to address those issues.

[1] Jennifer Huddleston, Does Content Moderation Need Changes to Section 230?, American Action Forum: Insight (June 18, 2020), https://www.americanactionforum.org/insight/does-content-moderation-need-changes-to-section-230/.

[2] Jennifer Huddleston, Tech Policy and the 2020 Election, Part 2: Online Speech, Net Neutrality, and Data Privacy, American Action Forum: Insight (August 18, 2020), https://www.americanactionforum.org/insight/tech-policy-and-the-2020-election-part-2-online-speech-net-neutrality-and-data-privacy/.