Politics

Section 230 and Complications with Regulating Big Tech

Overview

There have been calls from across the ideological spectrum to modify Section 230 of the Communications Decency Act (CDA), which provides internet platforms with a liability shield for managing the user content hosted on their sites. While conceptually appealing to many, any alteration to Section 230 would be difficult to implement and could carry substantial unintended consequences. That said, compelling social media companies to be more transparent about their moderation practices would allow users to make better-informed decisions about their participation on such platforms.

Introduction

Section 230 of the Communications Decency Act allows social media platforms to moderate content they deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Importantly, social media companies are not required to protect all free speech, but they must ensure that their moderation of protected speech is conducted in good faith.

The CDA does this by offering broad legal protections to internet platforms for content published by third-party users. This means that social media giants like Facebook and Twitter cannot be sued over what their users share, or over the platforms' good-faith decisions to remove it. Since the law was passed in 1996, Section 230 has been amended only narrowly; most notably, the 2018 SESTA-FOSTA legislation exposes platforms to liability for hosting content that enables sex trafficking, and the statute has never shielded platforms from intellectual property claims such as copyright infringement.

Policy Stances and Calls for Reform

Calls to amend the law on a more fundamental level have come from across the aisle, but for widely different reasons. Some policy proposals would require social media companies to moderate potentially dangerous or harmful content, such as conspiracy theories or disinformation. These measures include limiting or eliminating the liability shield provided by Section 230, which would make social media platforms liable when they fail to moderate extremist content. President Biden has taken an especially strong stance on this issue, calling for Section 230 to be wholly revoked in an interview with the New York Times.

Conservative lawmakers also seek to limit or eliminate the liability shield, not in response to the proliferation of disinformation or conspiracy theories, but because of suspicions of ideological bias in current moderation practices. The previous administration already took some steps in this direction, most notably the Trump-era Executive Order (EO) entitled “Preventing Online Censorship,” which called on the Federal Communications Commission (FCC) to reinterpret Section 230 in a way that limits the liability protections of platforms that “clearly reflect political bias.”

There are legitimate questions as to whether the executive branch even has the authority to reinterpret Section 230 through regulation, and FCC commissioners themselves have taken differing stances on the question. That said, it remains to be seen whether the incoming administration will pursue regulatory changes to Section 230 or leave the matter to Congress. Given President Biden's relative silence on the issue since the campaign trail, the latter seems the more likely possibility.

While some calls from both sides of the aisle go so far as to advocate eliminating Section 230 altogether, any meaningful limitation would likely be difficult to implement and would yield a number of unintended consequences.

Technical Difficulties

Although most social media users would agree that moderating disinformation and guarding against ideological bias are both worthy goals, achieving them is far easier said than done.

While it may seem obvious what constitutes disinformation, drawing the line between deliberate attempts to spread false information and genuine, yet incorrect, statements is exceedingly challenging. As a report from the Library of Congress astutely recognizes, “While the dangers associated with the viral distribution of disinformation are widely recognized, the potential harm that may derive from disproportional measures to counter disinformation should not be underestimated.”

Put simply, how are we to differentiate between disinformation and the truth? There are a number of online resources that purport to help consumers distinguish ‘fake news’ from real news, but they rely on loose criteria such as ‘consider the source,’ ‘read beyond the headline,’ and ‘check your biases.’ While such advice may be helpful in theory, it offers little practical guidance. Given that these are the criteria offered to consumers, and that social media companies would have to apply moderation across an entire platform, there is little to suggest that the companies possess an adequate means of identifying and limiting disinformation.

Ensuring that social media platforms are unbiased in their current moderation practices is equally difficult. Contrary to many conservative claims, there is a host of evidence suggesting that conservative voices flourish on social media. One widely cited analysis of data from CrowdTangle, a social media monitoring tool, found that “Trump has captured 91% of the total interactions on content posted by the US presidential candidates… Biden has captured only 9%.” The same analysis notes that FOX and Breitbart are the two most engaged news sources, followed by CNN, ABC, and NPR.

Whether or not social media companies' moderation is systematically biased, there are certainly legitimate questions about how, exactly, platforms choose to flag and remove content. While each company's policies are spelled out explicitly in its terms of service, increasing transparency around how violations are enforced should be a high priority in future legislation. That said, determining the criteria that constitute bias is nearly impossible, and proposed remedies, such as Rep. Ro Khanna's suggestion that the Fairness Doctrine be applied to social media companies, would significantly change the way social media operates.

There is also the critical question of enforcement: if such requirements were enacted, would it be the government's role or the companies' role to ensure both that moderation is ideologically even-handed and that disinformation is adequately quelled? Both possibilities are questionable at best, as either the FCC or Facebook becomes the leading arbiter of truth and fairness.

Removing Section 230 protections from companies that fail to moderate disinformation or that moderate with ideological bias, though understandable in principle, would be very difficult to execute.

Consequences of Regulation

As many technology policy experts have pointed out, eliminating the protections offered by Section 230 would result in far more moderation, not less. Even Section 230 carveouts that the vast majority of lawmakers deem reasonable, such as the SESTA-FOSTA legislation holding social media platforms liable for enabling sex trafficking, have proven difficult to implement and enforce. And while additional moderation may seem desirable to those seeking to limit the impact of disinformation campaigns, many of the unique benefits offered by social media platforms would likely be lost.

More precisely, if social media platforms could be held liable for content published on their websites, they would begin to police any content that could be considered inflammatory or offensive. While this would certainly limit some of the extremist content that eventually led to the riot at the Capitol, it would also curtail social media's capacity as a tool for social organization. As consumers of social media are well aware, this capacity has raised awareness of racial disparities in law enforcement, called attention to sexual harassment in the workplace, and enabled students to campaign against gun violence. With increased liability comes increased policing of potentially provocative content, and with that level of policing, social media as we know it could no longer operate.

In essence, those seeking to regulate social media companies face two divergent, yet equally undesirable, paths. On one hand, legislators could opt for sweeping limitations on Section 230 protections, which could not be implemented without prompting excessive moderation.

Conversely, lawmakers could remove the good-faith clause, making social media platforms truly open forums. This route, however, poses additional complications. Such a measure would reduce a company's ability to moderate objectionable or harmful content and would create a less desirable platform for users. Furthermore, it would likely precipitate a number of legal challenges questioning the government's power to dictate the business practices of private companies.

Conclusion

It is fair to claim that there needs to be more oversight of, and transparency in, the decision-making of large social media companies, offering clarity on what gets moderated and what does not. However, current proposals centered on Section 230 carveouts, whether for failing to moderate or for moderating too heavily, would certainly do more harm than good.