
America’s Censored Speech Platforms

How to make our public commons accountable to more than a few controlling shareholders

by Nadine Strossen
June 15, 2021
Original photo: Michael Reynolds-Pool/Getty Images

As someone committed to robust freedom of speech for all—including for those who own communications platforms, and those who communicate on them—I am vexed by the monopolistic dominance of Google, Facebook, and Twitter, and their increasing restrictions on controversial speech and speakers.

On the one hand, I am wary of government interference with the platforms’ editorial judgments, including any government compulsion to carry content that they don’t want. As the Supreme Court has held in analogous cases, a government requirement that a platform must include speech that it prefers to exclude constitutes a First Amendment violation that is at least as bad as—and maybe even worse than—a government requirement that the platform must exclude speech.

On the other hand, I am wary of the unbridled censorial powers that the social media giants have been wielding at an almost unfathomable scale, dwarfing the censorship powers that even some tyrannical governments have exercised. Facebook’s most recent quarterly “Community Standards Enforcement Report” indicated that each day the company removed or restricted (through measures such as warning labels or downranking) approximately 462,000 Facebook and Instagram posts that it considered hate speech.

Recognizing that the subjective concept of “hate speech” vests any enforcing authority with essentially unfettered discretion to punish unpopular speech and speakers, all U.S. Supreme Court justices in modern history have concurred that government censorship of such expression—based solely on its hateful content—would violate the First Amendment. The Court has sanctioned government censorship of hateful messages only in particular circumstances: when the speech poses direct, specific, and serious dangers, such as intentional incitement of imminent violence.

By contrast, most of the roughly 462,000 communications removed or otherwise suppressed in one day, by one company, under just one of many content moderation standards, were presumably constitutionally protected forms of speech. Worse yet, this subjective hate speech standard has been enforced to suppress all manner of valuable speech about matters of public concern: from social justice activists in the United States, to human rights activists in foreign countries, to political candidates and government officials around the world.

The reason the tech giants may suppress any speech—even speech that is constitutionally protected against government restrictions and widely considered important—is that the First Amendment’s ban on government actions “abridging the freedom of speech” almost never applies to nongovernment actors. Thus, Twitter’s Terms of Service can declare: “We may suspend or terminate your account or cease providing you with [services] at any time for any or no reason.” It is only the government, the Supreme Court has affirmed, that may never do the same.

Conservative politicians and media outlets have garnered a lot of attention for how social media censorship has impacted their side in particular, but there are abundant examples of how it has suppressed voices on the other side too, including Black Lives Matter activists. To convey frustration with being unjustifiably restricted on Facebook, members and supporters of such groups often refer to “Facebooking while Black” and “Getting Zucked.” A 2020 piece in The Intercept accused Facebook of “equat[ing] violent white supremacist militias with antiracist organizing” in its purge of both. No matter where we fall on the political spectrum, all of us should be concerned that private corporations are censoring what has become a public commons in ways that would be blatantly illegal if done by the state.

My free speech ideal, which the Supreme Court has largely enforced in the context of government regulation, is that all mature individuals could make their own choices about what speech they do and do not convey or receive (parents could make these choices on behalf of their young children). In other words, they would not be consigned to conveying or receiving only the speech deemed fit for them by any central gatekeepers, whether governmental Big Brother or Silicon Valley tycoons.

The unique promise of the internet flows precisely from its decentralized character, which theoretically enables everyone to communicate with everyone else on a peer-to-peer basis. In the internet’s early days, the Supreme Court celebrated and protected this promise in its landmark 1997 Reno v. American Civil Liberties Union (ACLU) decision, striking down government restrictions on online speech that would have been unconstitutional in print or other media. Congress also sought to further this promise through the law now widely known as Section 230. Its liability shield for most third-party content encouraged platforms to forgo the strict screening and gatekeeping of traditional media, and a parallel shield for any content restrictions that platforms did choose to enforce was meant to foster a multiplicity of content moderation policies, thus enhancing users’ freedom of choice. The goal of promoting user empowerment was signaled by Section 230’s title in the U.S. House of Representatives: the Internet Freedom and Family Empowerment Act, introduced in 1995.

Alas, the user empowerment ideal has been foiled by the increasing “platformization” of the internet since 2010, with the result that a tiny number of tech titans now control such a vast flow of online communications that, for all practical purposes, anyone who seeks to influence public opinion or policy must communicate on their platforms. The Supreme Court recognized as much when it declared in Packingham v. North Carolina (2017): “While in the past there may have been difficulty in identifying the most important places … for the exchange of views, today the answer is clear. It is cyberspace … and social media in particular.”

The unparalleled speech-suppressive powers of the dominant online platforms endanger not only individual liberty, but also self-government in our democratic republic, which depends on robust exchanges of ideas and information. As the Supreme Court stated in Garrison v. Louisiana (1964), “Speech concerning public affairs is more than [individual] self-expression; it is the essence of self-government.” Regardless of one’s views about former President Donald Trump or the content of his social media posts, the Twitter and Facebook bans on him are therefore alarming. The person duly elected to the most powerful office in the world was summarily exiled from platforms essential to the dissemination and discussion of his ideas by megacompanies accountable only to a few controlling shareholders. In that significant sense, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey continue to wield more unchecked power than Trump ever did or could; among the three, only Trump could be voted out of office, impeached, and sued for violating the First Amendment.

The ACLU’s Senior Legislative Counsel Kate Ruane summed it up in a statement following Trump’s deplatforming in January:

[I]t should concern everyone when [these] companies wield the unchecked power to remove people from platforms that have become indispensable for the speech of billions. … President Trump can turn to his press team or Fox News to communicate with the public, but others will not have that luxury—including many Black, Brown, and LGBTQ activists who have been censored by social media companies.

The key question for free speech advocates now is how to facilitate the user empowerment ideal on dominant platforms without undermining the platforms’ own right to design and enforce their preferred content moderation policies. This challenge has provoked an outpouring of brainstorming among academics and activists. These experts have not reached a consensus, however, or even firmly endorsed specific approaches.

Some experts have argued that perhaps the giant platforms should be treated as government actors, bound to honor users’ First Amendment rights because, in effect, they are performing governmental functions. Others maintain that the pertinent Supreme Court precedents weigh against this argument. Moreover, some free speech advocates have argued that requiring all online platforms to uniformly adhere to the First Amendment would undermine the ultimate goal of facilitating user empowerment and choice. For example, the Electronic Frontier Foundation (EFF), a leading digital rights group, has advocated for user freedom to choose among platforms with a range of content moderation policies, including policies that restrict certain constitutionally protected speech, such as graphic nudity or violence. For users who find such speech personally objectionable, being exposed to it could impair their online experience, perhaps to the point of deterring them from using these platforms at all. In short, platforms’ current freedom of choice regarding content moderation policies, which follows from their status as private companies, also contributes to users’ freedom of choice. 

A related approach is to avoid treating the dominant social media platforms as government actors, but instead subject them to one or more nonconstitutional doctrines that apply to certain private sector entities providing essential public services. For example, the longstanding common law concept of a “common carrier,” which has been embodied in statutes, requires certain transportation and communications networks considered essential infrastructure to be equally open to all. Likewise, the old common law concept of “public accommodations”—which has also been incorporated into anti-discrimination statutes—has long prohibited private places generally open to the public, like hotels and restaurants, from discriminating against particular members of the public.

In a recent concurring opinion, Associate Justice Clarence Thomas raised these concepts as potentially warranting limits on the dominant platforms’ content moderation policies. Although Thomas did not conclusively endorse a particular regulatory approach, he did conclude that “There is a fair argument that some digital platforms are sufficiently akin to common carriers or places of public accommodation” to be subject to regulations limiting their rights to exclude would-be users.

Any regulations of the type Thomas suggested would have to survive First Amendment review. Again, First Amendment rights include protection against government compulsion to host expression, and the Supreme Court has struck down requirements that certain platforms—including newspapers and parades—must host expression they would prefer not to. That said, the court has upheld such requirements when imposed on other platforms, including cable TV networks and shopping malls.

Constitutional law experts have offered plausible arguments for and against the idea that the platforms’ First Amendment rights would be violated by imposing some “must carry” duties, on the logic that they should be deemed common carriers or public accommodations. As with every potential approach, the devil here is also in the details. Even if it were conclusively determined that dominant platforms should—and constitutionally could—be subject to speech regulation, that would still leave open challenging questions about the precise nature of the regulations and how they would be enforced.

Because freedom of speech entails each individual’s right to decide for themselves what information to convey or receive, we should limit platform practices that undermine this freedom. Perhaps our greatest concern should be the fact that the dominant platforms engage in pervasive surveillance of our online communications and actions, which they then use to “micro-target” advertisements, and to rank and curate the content we receive in order to maintain our undivided attention. In addition to subverting users’ free speech, this nonconsensual surveillance also violates the privacy right not to share information about our communications or other aspects of our personal lives without informed consent.

Without directly regulating the platforms’ content moderation or curation policies, government could impose procedural requirements to ensure that they abide by basic consumer protection and privacy principles in carrying out their moderation and curatorial functions. Key concepts here are transparency, notice, and consent. Companies must be fully transparent about the terms and implementation of their content policies. User consent should be a prerequisite for platform surveillance of online communications, and for any algorithmically determined content curation. If a user’s expression is deemed subject to restriction, they should receive prompt and detailed notice of the specific policy they allegedly violated, and an opportunity to appeal the decision. In addition, the platforms should provide users with detailed reports about the aggregate enforcement of their content moderation policies.

As the old adage goes, sunshine is the best disinfectant. Imposing transparency requirements on tech giants’ moderation and curation practices could have a substantive impact. After all, as service businesses, the dominant platforms have financial incentives to respond to pressure from their customers, their employees, the media, and the people’s political representatives. In fact, when particular content moderation policies or decisions have come to light, the platforms in question have revised them in response to user critiques. They have even restored previously removed expression and speakers after facing a critical mass of popular objection.

User empowerment would likewise be enhanced if each platform provided a range of options. Platforms could offer their users various filtering alternatives, permitting users to make choices about such matters as categories of content they do not wish to see, criteria for determining the ranking of their content feed, and preferred privacy settings.

Additional technological approaches for promoting user control are “interoperability,” whereby the dominant platforms enable other software providers to interoperate with their key features, and “delegability,” whereby users could enable other software providers to act on their behalf—for example, by implementing content moderation and curation alternatives to those that the platform offers. Another type of interoperability, which is often called “data portability,” permits users to take their data and their social networks to competing platforms. Such data portability is often required in the telecommunications industry. Because interoperability facilitates competition, it is regularly included among the remedies for antitrust violations.

Determining whether a company has undue market power or otherwise violates antitrust law—and if so, what the appropriate remedies should be—involves complex legal issues. Multiple aspects of the tech sector are consistent with monopoly power, which inhibits fair competition: increased concentration, rising profit margins, declining entry, and low investment relative to profits. Whatever the ultimate conclusions might be, there is certainly ample basis for subjecting the dominant companies to close antitrust scrutiny.

Section 230 has become central to debates about social media regulation, but there seems to be some confusion about how impactful repeal would actually be. To repeat, platforms have no First Amendment duty to host any speech at all, and forcing them to host any speech they don’t want would violate their own First Amendment rights—unless they are deemed to be government actors or common carriers. Therefore, even if Section 230 were repealed—so that platforms no longer had statutory immunity from user lawsuits challenging their speech restrictions—platforms would nonetheless remain immune from claims that their speech restrictions violate users’ First Amendment rights. 

As the great journalist H.L. Mencken observed, “There is always a well-known solution to every human problem—neat, plausible, and wrong.” Mencken’s insight certainly applies to the serious problem of enforcing free speech rights on social media platforms. That is why I recommend a cautious approach, encouraging the critical examination of various potential strategies over time to foster freedom of speech for platforms and users alike. I recognize the downside of acting “with all deliberate speed,” but the contrasting approach—to move fast and break things—is what created our current plight. We should trust free speech and democratic discourse to point us toward appropriate controls over both government and big tech, not the other way around.

Nadine Strossen was national president of the American Civil Liberties Union from 1991 to 2008 and is professor emerita at New York Law School.