Is the First Amendment Obsolete?

New free expression challenges from “troll armies,” “flooding,” and propaganda robots that aim to distort or drown out disfavored speech.

By Tim Wu
September 1, 2017

Emerging Threats

An essay series exploring new or intensifying threats to the system of free expression

The First Amendment was a dead letter for much of American history. Unfortunately, there is reason to fear it is entering a new period of political irrelevance. We live in a golden age of efforts by governments and other actors to control speech, discredit and harass the press, and manipulate public debate. Yet as these efforts mount, and the expressive environment deteriorates, the First Amendment has been confined to a narrow and frequently irrelevant role. Hence the question—when it comes to political speech in the twenty-first century, is the First Amendment obsolete?

The most important change in the expressive environment can be boiled down to one idea: it is no longer speech itself that is scarce, but the attention of listeners. Emerging threats to public discourse take advantage of this change. As Zeynep Tufekci puts it, “censorship during the Internet era does not operate under the same logic [as] it did under the heyday of print or even broadcast television.” 1 1. Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest 226 (2017). Instead of targeting speakers directly, it targets listeners or it undermines speakers indirectly. More precisely, emerging techniques of speech control depend on (1) a range of new punishments, like unleashing “troll armies” to abuse the press and other critics, and (2) “flooding” tactics (sometimes called “reverse censorship”) that distort or drown out disfavored speech through the creation and dissemination of fake news, the payment of fake commentators, and the deployment of propaganda robots. 2 2. A third, slightly older technique—control of speech platforms—is also used to regulate speech, but it is not the subject of this paper. See, e.g., Lawrence Lessig, What Things Regulate Speech: CDA 2.0 vs. Filtering, Berkman Klein Center for Internet & Society at Harvard University (May 12, 1998), https://cyber.harvard.edu/works/lessig/what_things.pdf; Jack Goldsmith and Tim Wu, Who Controls the Internet? Illusions of a Borderless World (2006); Jack M. Balkin, Old-School/New-School Speech Regulation, 127 Harv. L. Rev. 2296 (2014). One reason is that these techniques have already been subject to extensive scholarly attention. The other is that laws that require speech platforms to control speech are usually subject to First Amendment scrutiny. See, e.g., Reno v. American Civil Liberties Union, 521 U.S. 844 (1997). As journalist Peter Pomerantsev writes, these techniques employ “information . . . in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze.” 3 3. Peter Pomerantsev, The Menace of Unreality: How the Kremlin Weaponizes Information, Culture and Money, Interpreter (Nov. 22, 2014), http://www.interpretermag.com/the-menace-of-unreality-how-the-kremlin-weaponizes-information-culture-and-money.

The First Amendment first came to life in the early twentieth century, when the main threat to the nation’s political speech environment was state suppression of dissidents. The jurisprudence of the First Amendment was shaped by that era. It presupposes an information-poor world, and it focuses exclusively on the protection of speakers from government, as if they were rare and delicate butterflies threatened by one terrible monster.

But today, speakers are more like moths—their supply is apparently endless. The massive decline in barriers to publishing makes information abundant, especially when speakers congregate on brightly lit matters of public controversy. The low costs of speaking have, paradoxically, made it easier to weaponize speech as a tool of speech control. The unfortunate truth is that cheap speech may be used to attack, harass, and silence as much as it is used to illuminate or debate. And the use of speech as a tool to suppress speech is, by its nature, something very challenging for the First Amendment to deal with. In the face of such challenges, First Amendment doctrine seems at best unprepared. It is a body of law that waits for a pamphleteer to be arrested before it will recognize a problem. Even worse, the doctrine may actually block efforts to deal with some of the problems described here.

It may sound odd to say that the First Amendment is growing obsolete when the Supreme Court has an active First Amendment docket and there remain plenty of First Amendment cases in litigation. So that I am not misunderstood, I hasten to add that the First Amendment’s protection of the press and political speakers against government suppression is hardly useless or undesirable. 4 4. There may, moreover, be more work to be done now in areas such as libel law. Given the raft of libel-trolling suits that burden small presses, stronger and faster First Amendment protection has arguably become necessary. With the important exception of cases related to campaign finance, 5 5. See, e.g., Citizens United v. FEC, 558 U.S. 310 (2010). however, the “big” free speech decisions of the last few decades have centered not on political speech but on economic matters like the right to resell patient data 6 6. Sorrell v. IMS Health Inc., 564 U.S. 552 (2011). or the right to register offensive trademarks. 7 7. Matal v. Tam, 137 S. Ct. 1744 (2017). The safeguarding of political speech is widely understood to be the core function of the First Amendment. Many of the recent cases are not merely at the periphery of this project; they are off exploring some other continent. 8 8. The merits of the recent, economic-rights case law are not the subject of this paper. Suffice it to say that these rulings have some academic supporters and many detractors. See, e.g., Amanda Shanor, The New Lochner, 2016 Wis. L. Rev. 133 (2016); Jeremy K. Kessler, The Early Years of First Amendment Lochnerism, 116 Colum. L. Rev. 1915 (2016); Samuel R. Bagenstos, The Unrelenting Libertarian Challenge to Public Accommodations Law, 66 Stan. L. Rev. 1205 (2014); Leslie Kendrick, First Amendment Expansionism, 56 Wm. & Mary L. Rev. 1199 (2015). The apparent flurry of First Amendment activity masks the fact that the Amendment has become increasingly irrelevant in its area of historic concern: the coercive control of political speech.

What might be done in response is a question without an easy answer. One possibility is simply to concede that the First Amendment, built in another era, is not suited to today’s challenges. On this view, the answer must lie in the development of better social norms, adoption of journalistic ethics by private speech platforms, or action by the political branches. Perhaps constitutional law has reached its natural limit.

On the other hand, in the 1920s Justices Oliver Wendell Holmes and Louis Brandeis and Judge Learned Hand also faced forms of speech control that did not seem to be matters of plausible constitutional concern by the standards of their time. If, following their lead, we take the bolder view that the First Amendment should be adapted to contemporary speech conditions, I suggest it may force us to confront buried doctrinal and theoretical questions, mainly related to state action, government speech, and listener interests. We might, for instance, explore “accomplice liability” under the First Amendment. That is, we might ask when the state or political leaders may be held constitutionally responsible for encouraging private parties to punish critics. I suggest here that if the President or other officials direct, encourage, fund, or covertly command attacks on their critics by private mobs or foreign powers, the First Amendment should be implicated.

Second, given that many of the new speech control techniques target listener attention, it may be worth reassessing how the First Amendment handles efforts to promote healthy speech environments and protect listener interests. Many of the problems described here might be subject to legislative or regulatory remedies that would themselves raise First Amendment questions. For example, consider a law that would bar major speech platforms and networks from accepting money from foreign governments for materials designed to influence American elections. Or a law that broadened criminal liability for online intimidation of members of the press. Such laws would likely be challenged under the First Amendment, which suggests that the needed evolution may lie in the jurisprudence of what the Amendment permits.

These tentative suggestions and explorations should not distract from the main point of this paper, which is to demonstrate that a range of speech control techniques have arisen from which the First Amendment, at present, provides little or no protection. In the pages that follow, the paper first identifies the core assumptions that proceeded from the founding era of First Amendment jurisprudence. It then argues that many of those assumptions no longer hold, and it details a series of techniques that are used by governmental and nongovernmental actors to censor and degrade speech. The paper concludes with a few ideas about what might be done.

Core Assumptions of the Political First Amendment

As scholars and historians know well, but the public is sometimes surprised to learn, the First Amendment sat dormant for much of American history, despite its absolute language (“Congress shall make no law . . . .”) and its placement in the Bill of Rights. 9 9. The First Amendment was even silent when Congress passed its first laws restricting speech in 1798, not long after the adoption of the Bill of Rights and with the approval of many of the framers. This fact, among others, has long been slightly embarrassing to would-be “originalists” who by disposition would like to believe in a strong First Amendment. Robert Bork was rare among the first wave of originalists in calling attention to the Amendment’s unpromising early history. See Robert H. Bork, Neutral Principles and Some First Amendment Problems, 47 Ind. L.J. 1, 22 (1971). It is an American “tradition” in the sense that the Super Bowl is an American tradition—one that is relatively new, even if it has come to be defining. To understand the basic paradigm by which the law provides protection, we therefore look not to the Constitution’s founding era but to the First Amendment’s founding era, in the early 1900s.

As the story goes, the First Amendment remained inert well into the 1920s. The trigger that gave it life was the federal government’s extensive speech control program during the First World War. The program was composed of two parts. First, following the passage of new Espionage and Sedition Acts, 10 10. Sedition Act of 1918, Pub. L. No. 65-150 (1918); Espionage Act of 1917, Pub. L. No. 65-24 (1917). men and women voicing opposition to the war, or holding other unpopular positions, were charged with crimes directly related to their speech. Eugene Debs, the presidential candidate for the Socialist Party, was arrested and imprisoned for a speech that questioned the war effort, in which he memorably told the crowd that they were “fit for something better than slavery and cannon fodder.” 11 11. Debs v. United States, 249 U.S. 211, 214 (1919).

Second, the federal government operated an extensive domestic propaganda campaign. 12 12. See, e.g., Alan Axelrod, Selling the Great War: The Making of American Propaganda (2009); James R. Mock and Cedric Larson, Words That Won the War: The Story of the Committee on Public Information, 1917–1919 (1968). The Committee on Public Information, created by Executive Order 2594, was a massive federal organization of over 150,000 employees. Its efforts were comprehensive and unrelenting. As George Creel put it: “The printed word, the spoken word, the motion picture, the telegraph, the cable, the wireless, the poster, the sign-board—all these were used in our campaign to make our own people and all other peoples understand the causes that compelled America to take arms.” 13 13. George Creel, How We Advertised America: The First Telling of the Amazing Story of the Committee on Public Information That Carried the Gospel of Americanism to Every Corner of the Globe 5 (1920). The Committee on Public Information’s “division of news” supplied the press with content “guidelines,” “appropriate” materials, and pressure to run them. All told, the American propaganda effort reached a scope and level of organization that would be matched only by totalitarian states in the 1930s. 14 14. As described in Tim Wu, The Attention Merchants (2016), and sources cited therein.

The story of the judiciary’s reaction to these new speech controls has by now attained the status of legend. The federal courts, including the Supreme Court, widely condoned the government’s heavy-handed arrests and other censorial practices as necessary to the war effort. But as time passed, some of the most influential jurists—including Hand, followed by Brandeis and Holmes—found themselves unable to stomach what they saw, despite the fact that each was notably reluctant to use the Constitution for anti-majoritarian purposes. 15 15. On this tension, see Vincent Blasi, Rights Skepticism and Majority Rule at the Birth of the Modern First Amendment (2017) (unpublished manuscript) (on file with author). Judge Hand was the only one of the three to act during wartime, 16 16. See Masses Pub. Co. v. Patten, 244 F. 535, 543 (S.D.N.Y.), rev’d, 246 F. 24 (2d Cir. 1917) (granting a preliminary injunction to the publisher of The Masses, a revolutionary journal that the Postmaster General intended to withhold from the mails because it featured cartoons and text critical of the draft); see also Vincent Blasi, Learned Hand and the Self-Government Theory of the First Amendment: Masses Publishing Co. v. Patten, 61 U. Colo. L. Rev. 1 (1990). but eventually the thoughts of these great judges (mostly expressed in dissent or in concurrence) became the founding jurisprudence of the modern First Amendment. 17 17. See, e.g., Whitney v. California, 274 U.S. 357, 372 (1927) (Brandeis, J., concurring); Abrams v. United States, 250 U.S. 616, 624 (1919) (Holmes, J., dissenting). To be sure, their views remained in the minority into the 1950s and 60s, but eventually the dissenting and concurring opinions would become majority holdings, 18 18. In cases like Dennis v. United States, 341 U.S. 494 (1951), and Brandenburg v. Ohio, 395 U.S. 444 (1969). and by the 1970s the “core” political protections of the First Amendment had become fully active, achieving more or less the basic structure we see today.

Left out of this well-known story is a detail quite important for our purposes. The Court’s scrutiny extended only to part of the government’s speech control program: its censorship and punishment of dissidents. Left untouched and unquestioned was the Wilson Administration’s unprecedented domestic propaganda campaign. This was not a deliberate choice, so far as I can tell (although it does seem surprising, in retrospect, that there was no serious challenge brought contesting the President’s power to create a major propaganda agency on the basis of a single executive order). 19 19. See, e.g., Encyclopedia of the American Constitution: Supplement I 585 (Leonard W. Levy, Kenneth L. Karst and Adam Winkler eds., 1992) (describing the absence of a constitutional challenge to the Committee on Public Information). Yet as a practical matter, it was probably the propaganda campaign that had the greater influence over wartime speech, and almost certainly a stronger limiting effect on the freedom of the mainstream press.

I should also add, for completeness, that the story just told only covers the First Amendment’s protection of political speech, or what we might call the story of the “political First Amendment.” 20 20. Cf. Alexander Meiklejohn, The First Amendment Is an Absolute, 1961 Sup. Ct. Rev. 245, 255 (arguing that the First Amendment “protects the freedom of those activities of thought and communication by which we ‘govern’”); Bork, supra note 9, at 27-28 (defining political speech as “speech concerned with governmental behavior, policy or personnel, whether the governmental unit involved is executive, legislative, judicial, or administrative”). Later, beginning in the 1950s, the Court also developed constitutional protections for non-political speech, such as indecency, commercial advertising, and cultural expression, in landmark cases like Roth v. United States 21 21. 354 U.S. 476 (1957). and Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council, Inc. 22 22. 425 U.S. 748 (1976). The Court also expanded upon both who counted as a speaker 23 23. See, e.g., First Nat’l Bank of Boston v. Bellotti, 435 U.S. 765 (1978). and what counted as speech 24 24. See, e.g., Buckley v. Valeo, 424 U.S. 1 (1976). —trends that have continued into this decade. 25 25. The trend is summarized well in Morgan N. Weiland, Expanding the Periphery and Threatening the Core: The Ascendant Libertarian Speech Tradition, 69 Stan. L. Rev. 1389 (2017). I mention this only to make the boundaries of this paper clear: it is focused on the kind of political and press activity that was the original concern of those who brought the First Amendment to life. 26 26. In other words, this is a paper about speech and reporting concerned with how we are governed, which includes political criticism, campaigning, and public debates over policy or specific regulatory or legislative initiatives. By focusing on the political First Amendment, I am not taking the position that other domains of the First Amendment are unimportant. Cf. Robert Post, Meiklejohn’s Mistake: Individual Autonomy and the Reform of Public Discourse, 64 U. Colo. L. Rev. 1109 (1993).

Let us return to the founding jurisprudence of the 1920s. In its time, for the conditions faced, it was as imaginative, convincing, and thoughtful as judicial writing can be. The jurisprudence of the 1920s has the unusual distinction of actually living up to the hype. Rereading the canonical opinions is an exciting and stirring experience not unlike re-watching The Godfather or Gone with the Wind. But that is also the problem. The paradigm established in the 1920s and fleshed out in the 1960s and 70s was so convincing that it is simply hard to admit that it has grown obsolete for some of the major political speech challenges of the twenty-first century.

Consider three main assumptions that the law grew up with. The first is an underlying premise of informational scarcity. For years, it was taken for granted that few people would be willing to invest in speaking publicly. Relatedly, it was assumed that with respect to any given issue—say, the war—only a limited number of important speakers could compete in the “marketplace of ideas.” 27 27. A metaphor suggested, though not actually used, by Justice Holmes. See Vincent Blasi, Holmes and the Marketplace of Ideas, 2004 Sup. Ct. Rev. 1, 4 (2004). The second notable assumption arises from the first: listeners are assumed not to be overwhelmed with information, but rather to have abundant time and interest to be influenced by publicly presented views. Finally, the government is assumed to be the main threat to the “marketplace of ideas” through its use of criminal law or other coercive instruments to target speakers (as opposed to listeners) with punishment or bans on publication. 28 28. This corresponds to Balkin’s “Old School” speech regulation techniques. See Balkin, supra note 2, at 2298. Without government intervention, this assumption goes, the marketplace of ideas operates well by itself.

Each of these assumptions has, one way or another, become obsolete in the twenty-first century, due to the rise in importance of attention markets and changes in communications technologies. It is to those phenomena that we now turn.

Attentional Scarcity and the Economics of Filter Bubbles

As early as 1971, Herbert Simon predicted the trend that drives this paper. As he wrote:

[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it. 29 29. Herbert A. Simon, Designing Organizations for an Information-Rich World, in Computers, Communications, and the Public Interest 37, 40-41 (Martin Greenberger ed., 1971).

In other words, if it was once hard to speak, it is now hard to be heard. Stated differently, it is no longer speech or information that is scarce, but the attention of listeners. Unlike in the 1920s, information is abundant and speaking is easy, while listener time and attention have become highly valued commodities. It follows that one important means of controlling speech is targeting the bottleneck of listener attention, instead of speech itself. 30 30. Consider that information—including speech—is not actually received or processed unless it attracts the fickle attention of the listener. As William James first pointed out in the 1890s, and as neuroscientists have confirmed, the brain ignores nearly everything, paying attention to a very limited stream of information. William James, The Principles of Psychology 403-04 (1890). At a minimum, the total capacity for attention is limited by time—168 hours a week—which becomes of particular relevance when the listeners in question are members of Congress, regulators, or others who are the supposed customers in the marketplace for good policy ideas.

Several major technological and economic developments over the last two decades have transformed the relative scarcity of speech and listener attention. The first is associated with the popularization of the Internet: the massive decrease since the 1990s in the costs of being an online speaker, otherwise known (in Eugene Volokh’s phrase) as “cheap speech,” or what James Gleick calls the “information flood.” 31 31. See, e.g., James Gleick, The Information: A History, a Theory, a Flood (2011); Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805 (1995). Using blogs, micro-blogs, or platforms like Twitter or Facebook, just about anyone, potentially, can disseminate speech into the digital public sphere. This has had several important implications. As Jack Balkin, Jeffrey Rosen, and I myself have argued, it gives the main platforms—which do not consider themselves to be part of the press—an extremely important role in the construction of public discourse. 32 32. See, e.g., Balkin, supra note 2; Jeffrey Rosen, The Deciders: The Future of Privacy and Free Speech in the Age of Facebook and Google, 80 Fordham L. Rev. 1525 (2012); Tim Wu, Is Filtering Censorship? The Second Free Speech Tradition, Brookings Institution (Dec. 27, 2010), https://www.brookings.edu/research/is-filtering-censorship-the-second-free-speech-tradition. Cheap speech also makes it easier for mobs to harass or abuse other speakers with whom they disagree.

The second, more long-term, development has been the rise of an “attention industry”—that is, a set of actors whose business model is the resale of human attention. 33 33. See generally Wu, supra note 14. Traditionally, these were outfits like broadcasters or newspapers; they have been joined by the major Internet platforms and publishers, all of which seek to maximize the amount of time and attention that people spend with them. The rise and centrality of advertising to their business models has the broad effect of making listener attention ever more valuable.

The third development is the rise of the “filter bubble.” 34 34. See Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (2011). Scholarly consideration of filtering came earlier. See, e.g., Cass Sunstein, Republic.com (2001); Dan Hunter, Philippic.com, 90 Cal. L. Rev. 611 (2002) (reviewing Sunstein’s Republic.com); Elizabeth Garrett, Political Intermediaries and the Internet “Revolution,” 34 Loy. L.A. L. Rev. 1055 (2001). This phrase refers to the tendency of attention merchants or brokers to maximize revenue by offering audiences a highly tailored, filtered package of information designed to match their preexisting interests. Andrew Shapiro and Cass Sunstein were among the first legal writers to express concern about filter bubbles (which Sunstein nicknamed “the Daily Me” 35 35. Borrowing a term popularized by Nicholas Negroponte, the founder of MIT’s Media Lab. See Nicholas Kristof, The Daily Me, N.Y. Times (Mar. 18, 2009), http://www.nytimes.com/2009/03/19/opinion/19kristof.html. ). Over the 2010s, filter bubbles became more important as they became linked to the attention-resale business model just described. A platform like Facebook primarily profits from the resale of its users’ time and attention: hence its efforts to maximize “time on site.” 36 36. See Lauren Drell, Why “Time Spent” Is One of Marketing’s Favorite Metrics, Mashable (Dec. 13, 2013), http://mashable.com/2013/12/13/time-spent-metrics/#8apdGV9yugq3. That, in turn, leads the company to provide content that maximizes “engagement,” which is information tailored to the interests of each user. While this sounds relatively innocuous (giving users what they want), it has the secondary effect of exercising strong control over what the listener is exposed to, and blocking content that is unlikely to engage.
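To make the mechanism concrete, here is a minimal sketch, in Python, of engagement-based feed ranking. The profile, items, and weights are invented for illustration (no platform’s actual ranking code is public), but the selection effect described above is visible even at this scale: content matched to existing interests crowds out everything else.

```python
# A minimal, hypothetical sketch of engagement-based feed ranking.
# The profile, items, and weights are invented for illustration; real
# platforms use far richer signals, but the selection effect is the same.

from dataclasses import dataclass


@dataclass
class Item:
    headline: str
    topic: str


# A user profile inferred from past behavior: topic -> affinity in [0, 1].
user_affinity = {"sports": 0.9, "celebrity": 0.7, "city_politics": 0.1}

candidates = [
    Item("Local team wins again", "sports"),
    Item("Star couple splits", "celebrity"),
    Item("Council debates budget", "city_politics"),
    Item("Zoning hearing tonight", "city_politics"),
]


def predicted_engagement(item: Item) -> float:
    # Proxy for "time on site": affinity with the user's existing interests.
    return user_affinity.get(item.topic, 0.05)


# Rank by predicted engagement and keep only what fits on one screen.
feed = sorted(candidates, key=predicted_engagement, reverse=True)[:2]
for item in feed:
    print(item.headline)  # the civic items never surface
```

However sophisticated the real models, any ranker that optimizes a proxy like predicted engagement will tend to behave this way; the bubble is a byproduct of the objective, not a deliberate editorial act.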

The combined consequence of these three developments is to make listener attention scarce and highly fought for. As the commercial and political value of attention has grown, much of that time and attention has become subject to furious competition, so much so that even institutions like the family or traditional religious communities find it difficult to compete. Additionally, some form of celebrity, even “micro-celebrity,” has become increasingly necessary to gain any attention at all. 37 37. See Wu, supra note 14, at 303. Every hour, indeed every second, of our time has commercial actors seeking to occupy it one way or another.

Hopefully the reader (if she hasn’t already disappeared to check her Facebook page) now understands what it means to say that listener attention has become a major speech bottleneck. With so much alluring, individually tailored content being produced—and so much talent devoted to keeping people clicking away on various platforms—speakers face ever greater challenges in reaching an audience of any meaningful size or political relevance. I want to stress that these developments matter not just to the hypothetical dissident sitting in her basement, who fared no better in previous times, but to the press as well. Gone are the days when the CBS evening news might reach the nation automatically, or when whatever made the front page of the New York Times was known to all. The challenge, paradoxically, has only increased in an age when the President himself consumes so much of the media’s attention. 38 38. See Tim Wu, How Donald Trump Wins by Losing, N.Y. Times (Mar. 3, 2017), https://www.nytimes.com/2017/03/03/opinion/sunday/how-donald-trump-wins-by-losing.html?_r=1. The population is distracted and scattered, making it difficult even for those with substantial resources to reach an audience.

The revolutionary changes just described have hardly gone unnoticed by First Amendment or Internet scholars. By the mid-1990s, Volokh, Kathleen Sullivan, and others had prophesied the coming era of cheaper speech and suggested it would transform much of what the First Amendment had taken for granted. (Sullivan memorably described the reaction to the Internet’s arrival as “First Amendment manna from heaven.” 39 ) Lawrence Lessig’s brilliant “code is law” formulation suggested that much of the future of censorship and speech control would reside in the design of the network and its major applications. 40 40. Lessig, supra note 2; see also Lawrence Lessig, Code and Other Laws of Cyberspace (1999). Rosen, Jack Goldsmith, Jonathan Zittrain, Christopher Yoo, and others, including myself, wrote of the censorial potential that lay either in the network infrastructure itself (hence “net neutrality” as a counterweight) or in the main platforms (search engines, hosting sites, and later social media). 41 41. See Jonathan Zittrain, Internet Points of Control, 44 B.C. L. Rev. 653 (2003); Christopher S. Yoo, Free Speech and the Myth of the Internet as an Unintermediated Experience, 78 Geo. Wash. L. Rev. 697 (2010); Jeffrey Rosen, Google’s Gatekeepers, N.Y. Times (Nov. 28, 2008), http://www.nytimes.com/2008/11/30/magazine/30google-t.html; Goldsmith and Wu, supra note 2. The use of infrastructure and platforms as a tool of censorship has been extensively documented overseas 42 42. The Open Net Initiative, launched as a collaboration between several universities in 2004, was and is perhaps the most ambitious documentation of online censorship around the world. See Evan M. Vittor, HLS Team to Study Internet Censorship, Harvard Crimson (Apr. 28, 2004), http://www.thecrimson.com/article/2004/4/28/hls-team-to-study-internet-censorship. and now also in the United States, especially by Balkin. 43 43. See generally Balkin, supra note 2 (describing “new school” speech control). Finally, the democratic implications of filter bubbles and similar technologies have become their own cottage industries. 44 44. See, e.g., R. Kelly Garrett, Echo Chambers Online?: Politically Motivated Selective Exposure Among Internet News Users, 14 J. Computer-Mediated Comm. 265 (2009); W. Lance Bennett and Shanto Iyengar, A New Era of Minimal Effects? The Changing Foundations of Political Communication, 58 J. Comm. 707 (2008); Sofia Grafanaki, Autonomy Challenges in the Age of Big Data, 27 Fordham Intell. Prop. Media & Ent. L.J. 803 (2017).

Yet despite the scholarly attention, no one quite anticipated that speech itself might become a censorial weapon, or that scarcity of attention would become such a target of flooding and similar tactics. 45 45. For a notable partial exception, see Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. Rev. 61 (2009) (discussing online hate mob attacks on women and other vulnerable groups). While the major changes described here have been decades in the making, we are nonetheless still in the midst of understanding their implications for classic questions of political speech control. We can now turn to the ways these changes have rendered basic assumptions about the First Amendment outmoded.

Obsolete Assumptions

Much can be understood by asking what “evil” any law is designed to combat. The founding First Amendment jurisprudence presumed that the evil of government speech control would be primarily effected by criminal punishment of publishers or speakers (or the threat thereof) and by the direct censorship of disfavored presses. These were, of course, the devices used by the Sedition Act of 1798 and by the Espionage and Sedition Acts of the First World War, with variations from the 1910s through the 1960s. 46 46. For a full account of the speech-restrictive measures taken by the U.S. government during wartime, see Geoffrey Stone, Perilous Times: Free Speech in Wartime, From the Sedition Act of 1798 to the War on Terrorism (2004). On the censor’s part, the technique is intuitive: it has the effect of silencing the speaker herself, while also chilling those who might fear similar treatment. Nowadays, however, it is increasingly not the case that the relevant means of censorship is direct punishment by the state, or that the state itself is the primary censor.

The Waning of Direct Censorship

Despite its historic effectiveness, direct and overt government punishment of speakers has fallen out of favor in the twenty-first-century media environment, even in nations without strong free speech traditions. This fact is harder to see in the United States because the First Amendment itself has been read to impose a strong bar on viewpoint-based censorship. The point comes through most clearly when observing the techniques of governments that are unconstrained by similar constitutional protections. Such observation reveals that multiple governments have increasingly turned away from high-profile suppression of speech or arrest of dissidents, in favor of techniques that target listeners or enlist government accomplices. 47 47. See generally Tufekci, supra note 1.

The study of Chinese speech control provides some of the strongest evidence that a regime with full powers to directly censor nonetheless usually avoids doing so. In a fascinating ongoing study of Chinese censorship, Gary King, Jennifer Pan, and Margaret Roberts have conducted several massive investigations into the government’s evolving approach to social media and other Internet-based speech. 48 48. Gary King, Jennifer Pan, and Margaret E. Roberts, How Censorship in China Allows Government Criticism but Silences Collective Expression, 107 Am. Pol. Sci. Rev. 326 (2013); Gary King, Jennifer Pan, and Margaret E. Roberts, How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument, 111 Am. Pol. Sci. Rev. 484 (2017) [hereinafter King et al., 2017 APSR]. What they have discovered is a regime less intent on stamping out forbidden content than on distraction, cheerleading, and the prevention of meaningful collective action. For the most part, they conclude, the state’s agents “do not censor posts criticizing the regime, its leaders, or their policies” and “do not engage on controversial issues.” 49 49. King et al., 2017 APSR, supra note 48, at 496 (emphasis omitted). The authors suggest that the reasons are as follows:

Letting an argument die, or changing the subject, usually works much better than picking an argument and getting someone’s back up . . . . [S]ince censorship alone seems to anger people, the [Chinese] program has the additional advantage of enabling the government to actively control opinion without having to censor as much as they might otherwise. 50 50. Id. at 497.

A related reason for avoiding direct speech suppression is that under conditions of attentional scarcity, high-profile government censorship or the imprisonment of speakers runs the risk of backfiring. The government is, effectively, a kind of celebrity whose actions draw disproportionate attention. And such attention may help overcome the greatest barrier facing a disfavored speaker: that of getting heard at all. In certain instances, the attention showered on an arrested speaker may even, counterintuitively, yield financial or reputational rewards—the opposite of chill.

In Internet lore, one term for this backlash potential is the Streisand effect. 51 51. The term was coined in an article about a cease-and-desist letter sent by Marco Beach Ocean Resort to Urinal.net—a “site [that] has hundreds of fans who regularly submit pictures of urinals they take from locations all over the world”—threatening legal action unless the website stopped mentioning the resort’s name alongside photos from its bathroom. The cease-and-desist letter prompted more attention than the original posts on Urinal.net. Mike Masnick, Since When Is It Illegal to Just Mention a Trademark Online?, Techdirt (Jan. 5, 2005), https://www.techdirt.com/articles/20050105/0132239.shtml. Named after celebrity Barbra Streisand, whose lawyer’s efforts to suppress aerial photos of her beachfront residence attracted hundreds of thousands of downloads of those photos, the term stands for the proposition that “the simple act of trying to repress something . . . online is likely to make it . . . seen by many more people.” 52 52. Id. However, the Streisand example may obscure the fact that many other cease-and-desist letters, even those issued by celebrities, never attract much attention. The concept’s general applicability might therefore be questioned, especially with regard to viral dissemination, which is highly unpredictable and rarer than one might imagine. 53 53. See Sharad Goel et al., The Structural Virality of Online Diffusion, 62 Mgmt. Sci. 180 (2016). Even so, the possibility of creating attention for the original speaker makes direct censorship less attractive, given the proliferation of cheaper—and often more effective—alternatives.

As suggested in the introduction, those alternatives can be placed in several categories: (1) online harassment and attacks, (2) distorting and flooding, or so-called reverse censorship, and (3) control of the main speech platforms. (The third topic is included for completeness, but it has already received extensive scholarly attention. 54 54. See supra note 2. The potential for requiring Internet intermediaries to control speech was something a number of people noticed early in the Internet’s history. As Lessig observed in 1998, we had already begun to live in an era in which it was clear that networks might be designed to filter some speech and leave other speech untouched, or make intermediaries liable for carrying “forbidden” content. See Lessig, supra note 2. Around that same time, Congress undertook what Balkin later called “collateral censorship” techniques: requiring search engines and others to block copyrighted materials on request, and requiring hosts to prevent minors from accessing indecent content. See, e.g., Online Copyright Infringement Liability Limitation Act, Pub. L. No. 105-304 (1998); Communications Decency Act of 1996, Pub. L. No. 104-104 (1996). The potential for foreign governments to rely on targeting search engines, ISPs, and major hosting sites as a technique of control was also recognized. See Goldsmith and Wu, supra note 2. By now, it has become common knowledge that platforms like Google and Facebook exert a major influence on the speech environment, and the techniques of targeting intermediaries have evolved considerably. For a comprehensive survey of such techniques, see Balkin, supra note 2; see also Seth F. Kreimer, Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link, 155 U. Pa. L. Rev. 11 (2006). ) These techniques are practiced to different degrees by different governments abroad. Yet given that they could be used by U.S. officials as well 55 55. In fact, recent reports suggest that President Trump and his associates may have already engaged in the distraction and cheerleading techniques described above. See, e.g., Oliver Darcy, Lawsuit: Fox News Concocted Seth Rich Story with Oversight from White House, CNN (Aug. 2, 2017), http://money.cnn.com/2017/08/01/media/rod-wheeler-seth-rich-fox-news-lawsuit/index.html; Taylor Link, President Trump Gave a Shout Out to an Apparent Twitter Bot, Hasn’t Removed the Retweet, Salon (Aug. 6, 2017), http://www.salon.com/2017/08/06/president-trump-gave-a-shout-out-to-an-apparent-twitter-bot-hasnt-removed-the-retweet. —and that they pose a major threat to the speech environment whether or not one’s own government is using them—all are worth exploring in our consideration of whether the First Amendment, in its political aspects, is obsolete.

Troll Armies

Among the newest of the emerging threats is the rise of abusive online mobs that seek to wear down targeted speakers and make them think twice about writing critical content, thus rendering political journalism less attractive. Whether the mobs are directly employed by, loosely associated with, or merely aligned with the goals of the government or particular politicians, the technique relies on the low cost of speech to punish speakers.

While there have long been Internet trolls, in the early 2000s the Russian government pioneered their use as a systematic speech control technique with the establishment of a “web brigade” (Веб-бригады), often called a “troll army.” Its methods, discovered through leaks and the undercover work of investigative reporters, 56 56. Max Seddon, Documents Show How Russia’s Troll Army Hit America, BuzzFeed News (June 2, 2014), https://www.buzzfeed.com/maxseddon/documents-show-how-russias-troll-army-hit-america?utm_term=.dcXBvNmo9#.xj5842QBx; Pomerantsev, supra note 3; Russia Update: Questions About Putin’s Health After Canceled Meetings & Vague Answers, Interpreter (Mar. 12, 2015), http://www.interpretermag.com/russia-update-march-12-2015/#7432. range from mere encouragement of loyalists, to funding groups that pay commentators piecemeal, to employing full-time staff to engage in around-the-clock propagation of pro-government views and attacks on critics. 57 57. Peter Pomerantsev, Inside the Kremlin’s Hall of Mirrors, Guardian (Apr. 9, 2015), https://www.theguardian.com/news/2015/apr/09/kremlin-hall-of-mirrors-military-information-psychology.

There are three hallmarks of the Russian approach. The first is obscuring the government’s influence. The hand of the Kremlin is not explicit; funding comes from “pro-Kremlin” groups or nonprofits, and those involved usually disclaim any formal association with the Russian state. 58 58. Pomerantsev, supra note 3. In addition, individuals sympathetic to the cause often join as de facto volunteers. The second is the use of vicious, swarm-like attacks over email, telephone, or social media to harass and humiliate critics of Russian policies or President Putin. While the online hate mob is certainly not a Russian invention, 59 59. Cf. Citron, supra note 45. its deployment for such political objectives seems to be a novel development. The third hallmark is its international scope. Although these techniques have mainly been used domestically in Russia, they have also been employed against political opponents elsewhere in the world, including in Ukraine and in countries like Finland, where trolls savagely attacked journalists who favored joining NATO (or questioned Russian efforts to influence that decision). 60 60. Andrew Higgins, Effort to Expose Russia’s “Troll Army” Draws Vicious Retaliation, N.Y. Times (May 30, 2016), https://www.nytimes.com/2016/05/31/world/europe/russia-finland-nato-trolls.html. Likewise, these tactics have been deployed in the United States, where paid Russian trolls targeted the 2016 presidential campaign. 61 61. See Rachel Roberts, Russia Hired 1,000 People to Create Anti-Clinton “Fake News” in Key US States During Election, Trump-Russia Hearings Leader Reveals, Independent (Mar. 30, 2017), https://www.independent.co.uk/news/world/americas/us-politics/russian-trolls-hilary-clinton-fake-news-election-democrat-mark-warner-intelligence-committee-a7657641.html; Natasha Bertrand, It Looks like Russia Hired Internet Trolls to Pose as Pro-Trump Americans, Business Insider (July 27, 2016), http://www.businessinsider.com/russia-internet-trolls-and-donald-trump-2016-7.

Soviet-born British journalist Peter Pomerantsev, who was among the first to document the evolving Russian approach to speech control, has presented the operative questions this way:

[W]hat happens when a powerful actor systematically abuses freedom of information to spread disinformation? Uses freedom of speech in such a way as to subvert the very possibility of a debate? And does so not merely inside a country, as part of vicious election campaigns, but as part of a transnational military campaign? Since at least 2008, Kremlin military and intelligence thinkers have been talking about information not in the familiar terms of “persuasion,” “public diplomacy” or even “propaganda,” but in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze. 62 62. Pomerantsev, supra note 3; see also Pomerantsev, supra note 57.

Over the last two years, the basic elements of the Russian approach have spread to the United States. As in Russia, journalists of all stripes have been targeted by virtual mobs when they criticize the American President or his policies. While some of the attacks appear to have originated from independent actors who borrowed Russian techniques, others have come from the (paid) Russian force itself; members of the Senate Select Committee on Intelligence have said that over 1,000 people on that force were assigned to influence the U.S. election in 2016. 63 63. See Roberts, supra note 61. The degree to which trolls operating in the United States are funded by or otherwise coordinated with the Russian state is a topic of wide speculation. However, based on leaked documents and whistleblower accounts, at least some of the attacks on Trump critics in 2016 and 2017 were launched by Russia itself. See, e.g., Alexey Kovalev, Russia’s Infamous “Troll Factory” Is Now Posing as a Media Empire, Moscow Times (Mar. 24, 2017), https://themoscowtimes.com/articles/russias-infamous-troll-factory-is-now-posing-as-a-media-empire-57534. For certain journalists in particular, such harassment has become a regular occurrence, an ongoing assault. As David French of the National Review puts it: “The formula is simple: Criticize Trump—especially his connection to the alt-right—and the backlash will come.” 64 64. David French, The Price I’ve Paid for Opposing Donald Trump, National Review (Oct. 21, 2016), http://www.nationalreview.com/article/441319/donald-trump-alt-right-internet-abuse-never-trump-movement.

Ironically, while sometimes the President himself attacks, insults, or abuses journalists, this behavior has not necessarily had censorial consequences in itself, as it tends to draw attention to the speech in question. In fact, the improved fortunes of media outlets like CNN might serve as a demonstration that there often is a measurable Streisand effect. 65 65. Stephen Battaglio, Trump’s Attacks On CNN Aren’t Hurting It One Bit, L.A. Times (Feb. 16, 2017), http://www.latimes.com/business/hollywood/la-fi-ct-cnn-zucker-20170216-story.html. We are speaking here of a form of censorial punishment practiced by the government’s allies, which is much less newsworthy but potentially just as punitive, especially over the long term.

Consider, for example, French’s description of the response to his criticisms of the President:

I saw images of my daughter’s face in gas chambers, with a smiling Trump in a Nazi uniform preparing to press a button and kill her. I saw her face photo-shopped into images of slaves. She was called a “niglet” and a “dindu.” The alt-right unleashed on my wife, Nancy, claiming that she had slept with black men while I was deployed to Iraq, and that I loved to watch while she had sex with “black bucks.” People sent her pornographic images of black men having sex with white women, with someone photoshopped to look like me, watching. 66 66. French, supra note 64.

A similar story is told by Rosa Brooks, a law professor and popular commentator, who wrote a column in late January of 2017 that was critical of President Trump and speculated about whether the military might decline to follow plainly irrational orders, despite the tradition of deference to the Commander-in-Chief. After the piece was picked up by Breitbart News, where it was described as a call for a military coup, Brooks experienced the following. Her account is worth quoting at length:

By mid-afternoon, I was getting death threats. “I AM GOING TO CUT YOUR HEAD OFF………BITCH!” screamed one email. Other correspondents threatened to hang me, shoot me, deport me, imprison me, and/or get me fired (this last one seemed a bit anti-climactic). The dean of Georgetown Law, where I teach, got nasty emails about me. The Georgetown University president’s office received a voicemail from someone threatening to shoot me. New America, the think tank where I am a fellow, got a similar influx of nasty calls and messages. “You’re a fucking cunt! Piece of shit whore!” read a typical missive.

My correspondents were united on the matter of my crimes (treason, sedition, inciting insurrection, etc.). The only issue that appeared to confound and divide them was the vexing question of just what kind of undesirable I was. Several decided, based presumably on my first name, that I was Latina and proposed that I be forcibly sent to the other side of the soon-to-be-built Trump border wall. Others, presumably conflating me with African-American civil rights heroine Rosa Parks, asserted that I would never have gotten hired if it weren’t for race-based affirmative action. The anti-Semitic rants flowed in, too: A website called the Daily Stormer noted darkly that I am “the daughter of the infamous communist Barbara Ehrenreich and the Jew John Ehrenreich,” and I got an anonymous phone call from someone who informed me, in a chillingly pleasant tone, that he supported a military coup “to kill all the Jews.” 67 67. Rosa Brooks, And then the Breitbart Lynch Mob Came for Me, Foreign Policy (Feb. 6, 2017), http://foreignpolicy.com/2017/02/06/and-then-the-breitbart-lynch-mob-came-for-me-bannon-trolls-trump.

The angry, censorial online mob is not merely a tool of neo-fascists or the political right, although the association of such mobs with the current Administration merits special attention. Without assuming any moral equivalence, it is worth noting that there seems to be a growing, parallel tendency of leftist mobs to harass and shut down disfavored speakers as well. 68 68. See, e.g., Katharine Q. Seelye, Protesters Disrupt Speech by “Bell Curve” Author at Vermont College, N.Y. Times (Mar. 3, 2017), https://www.nytimes.com/2017/03/03/us/middlebury-college-charles-murray-bell-curve-protest.html?_r=0.

Some suppression of speech is disturbing enough to make one wonder if the First Amendment and its state action doctrine (which holds that the Amendment only applies to actions by the state, not by private parties) are hopelessly limited in an era when harassment is so easy. Consider the story of Lindy West, a comedian and writer who has authored controversial columns, generally on feminist topics. By virtue of her writing talent and her association with The Guardian, she does not, like other speakers, face difficulties getting heard. However, she does face near-constant harassment and abuse. Every time she publishes a controversial piece, West recounts, “the harassment comes in a deluge. It floods my Twitter feed, my Facebook page, my email, so fast that I can’t even keep up (not that I want to).” In a standard example, after West wrote a column about rape, she received the following messages: “She won’t ever have to worry about rape”; “No one would want to rape that fat, disgusting mess”; and many more. 69 69. But even more cruelly: Someone—bored, apparently, with the usual angles of harassment—had made a fake Twitter account purporting to be my dead dad, featuring a stolen, beloved photo of him, for no reason other than to hurt me. The name on the account was “PawWestDonezo,” because my father’s name was Paul West, and a difficult battle with prostate cancer had rendered him “donezo” (goofy slang for “done”) just 18 months earlier. “Embarrassed father of an idiot,” the bio read. “Other two kids are fine, though.” His location was “Dirt hole in Seattle.” Lindy West, What Happened When I Confronted My Cruelest Troll, Guardian (Feb. 2, 2015), https://www.theguardian.com/society/2015/feb/02/what-happened-confronted-cruellest-troll-lindy-west. As West observes: “It’s a silencing tactic. The message is: you are outnumbered. The message is: we’ll stop when you’re gone.” 70 70. Id. Eventually, West quit Twitter and other social media entirely.

It is not terribly new to suggest that private suppression of speech may matter as much as state suppression. For example, John Stuart Mill’s On Liberty seemed to take Victorian sensibilities as a greater threat to freedom than anything the government might do. 71 71. J.S. Mill, On Liberty and Other Writings 69 (Stefan Collini ed., Cambridge University Press 1989) (1859) (“These tendencies of the times cause the public to be more disposed than at most former periods to prescribe general rules of conduct, and endeavour to make every one conform to the approved standard.”). But what has increased is the ability of nominally private forms of punishment—which may be directed or encouraged by government officials—to operate through the very channels meant to facilitate public speech.

Reverse Censorship, Flooding, and Propaganda Robots

Reverse censorship, which is also called “flooding,” is another contemporary technique of speech control. With roots in so-called “astroturfing,” 72 72. See generally Adam Bienkov, Astroturfing: What Is It and Why Does It Matter?, Guardian (Feb. 8, 2012), https://www.theguardian.com/commentisfree/2012/feb/08/what-is-astroturfing. it relies on counter-programming with a sufficient volume of information to drown out disfavored speech, or at least distort the information environment. Politically motivated reverse censorship often involves the dissemination of fake news (or atrocity propaganda) in order to distract and discredit. Whatever form it takes, this technique clearly qualifies as listener-targeted speech control.

The Chinese and Russian governments have led the way in developing methods of flooding and reverse censorship. 73 73. See Goldsmith and Wu, supra note 2 (including an earlier investigation into Chinese censorship innovations). China in particular stands out for its control of domestic speech. China has not, like North Korea, sought to avoid twenty-first-century communications technologies. Its embrace of the Internet has been enthusiastic and thorough. Yet the Communist Party has nonetheless managed to survive—and even enhance—its control over politics, defying the predictions of many in the West who forecast that the arrival of the Internet would soon lead to the government’s overthrow. 74 74. Predictions of the Communist Party’s downfall at the hands of the Internet are surveyed in Chapter 6 of Goldsmith and Wu, supra note 2. Among the Chinese methods uncovered by researchers are the efforts of as many as two million people who are paid to post on behalf of the Party. As King, Pan, and Roberts have found:

[T]he [Chinese] government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. 75 75. King et al., 2017 APSR, supra note 48, at 484.

In an attention-scarce world, these kinds of methods are more effective than they might have been in previous decades. When listeners have highly limited bandwidth to devote to any given issue, they will rarely dig deeply, and they are less likely to hear dissenting opinions. In such an environment, flooding can be just as effective as more traditional forms of censorship.
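A toy model suggests why. Suppose a reader samples only a handful of posts from a much larger pool. The sketch below (in Python, with invented numbers) estimates how often a fixed set of critical posts is seen at all, before and after the pool is flooded with innocuous content:

```python
# A toy model (all numbers invented) of flooding under attention scarcity:
# if a reader samples only a handful of posts, adding enough innocuous
# "cheerleading" posts makes critical posts effectively disappear.

import random


def chance_critic_is_seen(critical: int, other: int,
                          attention_budget: int = 10,
                          trials: int = 20_000) -> float:
    posts = ["critic"] * critical + ["other"] * other
    hits = 0
    for _ in range(trials):
        sample = random.sample(posts, min(attention_budget, len(posts)))
        hits += "critic" in sample
    return hits / trials


# 50 critical posts among 200 others: very likely to be noticed.
print(chance_critic_is_seen(50, 200))     # roughly 0.9
# The same 50 posts after flooding with 10,000 distractions: rarely seen.
print(chance_critic_is_seen(50, 10_000))  # roughly 0.05
```

Nothing is deleted; the critical posts remain available in principle, yet the flood alone drives their visibility toward zero.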

Related to techniques of flooding is the intentional dissemination of so-called “fake news” and the discrediting of mainstream media sources. In modern times, this technique seems, once again, to be a key tool of political influence used by the Russian government. In addition to its attacks on regime critics, the Russian web brigade also spreads massive numbers of false stories, often alleging atrocities committed by its targets. 76 76. See Pomerantsev, supra note 57. While this technique can be accomplished by humans, it is aided and amplified by the increasing use of human-impersonating robots, or “bots,” which relay the messages through millions of fake accounts on social media sites like Twitter.

Tufekci has documented similar strategies employed by the Turkish government in its efforts to control opposition. The Turkish government, in her account, relies most heavily on discrediting nongovernmental sources of information. As she writes, critics of the state found “an enormous increase in challenges to their credibility, ranging from reasonable questions to outrageous and clearly false accusations. These took place using the same channels, and even the same methods, that a social movement might have used to challenge false claims by authorities.” 77 77. Tufekci, supra note 1, at 246. The goal, she writes, was to create “an ever-bigger glut of mashed-up truth and falsehood to foment confusion and distraction” and “to overwhelm people with so many pieces of bad and disturbing information that they become confused and give up trying to figure out what the truth might be—or even the possibility of finding out what is true.” 78 78. Id. at 231, 241.

While the technique was pioneered overseas, it is clear that flooding has come to the United States. Here, the most important variant has been the development and mass dissemination of so-called “fake news.” Consider in this regard the work of Philip Howard, who runs the Computational Propaganda Project at Oxford University. As Howard points out, voters are strongly influenced by what they think their neighbors are thinking; hence fake crowds, deployed at crucial moments, can create a false sense of solidarity and support. Howard and his collaborators studied the linking and sharing of news on Twitter in the week before the November 2016 U.S. presidential vote. Their research produced a startling revelation: “junk news was shared just as widely as professional news in the days leading up to the election.” 79 79. Philip N. Howard et al., Junk News and Bots During the U.S. Election: What Were Michigan Voters Sharing over Twitter? , Computational Propaganda Project (Mar. 26, 2017), http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/03/What-Were-Michigan-Voters-Sharing-Over-Twitter-v2.pdf.
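
The comparison behind that finding is, at bottom, a tally: links shared on Twitter are classified by the kind of domain they point to, and the junk and professional counts are compared. Below is a minimal sketch of that kind of tally. It is my illustration, not the project's actual code, and the domain table is invented (the "example-" entries are placeholders):

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative only: the Computational Propaganda Project uses a richer,
# hand-coded classification of source domains.
DOMAIN_CATEGORY = {
    "nytimes.com": "professional",
    "freep.com": "professional",
    "example-junk-news.com": "junk",   # placeholder domains
    "example-conspiracy.net": "junk",
}

def tally_shares(shared_urls):
    """Count shared links per category, ignoring unclassified domains."""
    counts = Counter()
    for url in shared_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        if domain in DOMAIN_CATEGORY:
            counts[DOMAIN_CATEGORY[domain]] += 1
    return counts

# Hypothetical usage:
# tally_shares(urls) -> Counter({'junk': 512, 'professional': 498})
```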

Howard’s group believes that bots were used to help achieve this effect. These bots pose as humans on Facebook, Twitter, and other social media, and they transmit messages as directed. Researchers have estimated that Twitter has as many as 48 million bot users, 80 80. See Onur Varol et al., Online Human-Bot Interactions: Detection, Estimation, and Characterization , Int’l AAAI Conf. Web & Social Media (Mar. 27, 2017), https://arxiv.org/pdf/1703.03107.pdf (estimating that 9 to 15 percent of active Twitter users are bots). and Facebook has previously estimated that it has between 67.65 million and 137.76 million fake users. 81 81. Rebecca Grant, Facebook Has No Idea How Many Fake Accounts It Has—But It Could Be Nearly 140M , Venturebeat (Feb. 3, 2014), https://venturebeat.com/2014/02/03/facebook-has-no-idea-how-many-fake-accounts-it-has-but-it-could-nearly-140m. Some percentage of these, according to Howard and his team, are harnessed en masse to help spread fake news before and after important events.
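
For a sense of where the 48 million figure comes from: it is roughly the top of Varol et al.'s 9-to-15-percent range applied to Twitter's active user base. A quick check, assuming about 319 million monthly active users (approximately what Twitter reported in late 2016; the assumption is mine, not the study's):

```python
# Converting Varol et al.'s percentage range into account counts.
monthly_active_users = 319_000_000  # assumed figure, for illustration only

for bot_share in (0.09, 0.15):  # the study's estimated range
    bots = bot_share * monthly_active_users
    print(f"{bot_share:.0%} bots -> about {bots / 1e6:.0f} million accounts")

# Output: 9% -> about 29 million accounts; 15% -> about 48 million accounts.
```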

Robots have even been employed to attack the “open” processes of the administrative state. In the spring of 2017, the Federal Communications Commission put its proposed revocation of net neutrality up for public comment. In previous years, such proceedings attracted vigorous argument by (human) commentators. This time, someone directed robots to impersonate—via stolen identities—over one hundred thousand people, flooding the system with fake comments, all of which were purportedly against federal net neutrality rules. 82 82. Patrick Kulp, Bots Are the Latest Weapon in the Net Neutrality Battle , Mashable (May 10, 2017), http://mashable.com/2017/05/10/bots-net-neutrality-comments-fcc/#a8jzOIYQu5qE.
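
Part of what made this flood detectable was its crudeness: vast numbers of the comments repeated identical boilerplate. Below is a minimal sketch of the kind of screen an analyst might run over a comment docket; the function and threshold are hypothetical, not any agency's actual method:

```python
from collections import Counter

def flag_flooded_comments(comments, threshold=1000):
    """Group comments by whitespace-normalized text and flag any string
    submitted at least `threshold` times -- a crude signature of the
    bot-driven flooding described above."""
    counts = Counter(" ".join(c.lower().split()) for c in comments)
    return [(text, n) for text, n in counts.most_common() if n >= threshold]

# Hypothetical usage, where `docket_comments` is a list of comment strings:
# for text, n in flag_flooded_comments(docket_comments):
#     print(f"{n:>7} identical submissions: {text[:60]}...")
```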

As it stands, the First Amendment has little to say about any of these tools and techniques. The mobilization of online vitriol or the dissemination of fake news by private parties or foreign states, even if in coordination with the U.S. government, has been considered a matter of journalistic ethics or foreign policy, not constitutional law. And it has long been assumed (though rarely tested) that the U.S. government’s own use of domestic propaganda is not a contestable First Amendment concern, on the premise that propaganda is “government speech.” 83 83. The Supreme Court reaffirmed last spring that U.S. government propaganda is outside the reach of the First Amendment. See Matal v. Tam , 137 S. Ct. 1744, 1758 (2017) (noting, in dicta, that “the First Amendment did not demand that the Government balance the message of [pro-World War II] posters by producing and distributing posters encouraging Americans to refrain from engaging in these activities”). For an argument that such propaganda ought to be subject to First Amendment controls, see William W. Van Alstyne, The First Amendment and the Suppression of Warmongering Propaganda in the United States: Comments and Footnotes , 31 Law & Contemp. Probs. 530 (1966). The closest thing to a constitutional limit on propagandizing is the premise that the state cannot compel citizens to voice messages on its behalf (under the doctrine of compelled speech) 84 84. See Wooley v. Maynard , 430 U.S. 705 (1977). or to engage in patriotic acts like saluting the flag or reciting the pledge of allegiance. 85 85. See, e.g. , West Virginia State Bd. of Educ. v. Barnette , 319 U.S. 624 (1943); see also Linmark Assocs., Inc. v. Willingboro Twp. , 431 U.S. 85 (1977). Other constraints are surveyed in Mark G. Yudof, When Governments Speak: Toward a Theory of Government Expression and the First Amendment , 57 Tex. L. Rev. 863 (1979). But under the existing jurisprudence, it seems that little—other than political norms that are fast eroding—stands in the way of a full-blown campaign designed to manipulate the political speech environment to the advantage of current officeholders.

What Might Be Done

What I have written suggests that the First Amendment and its jurisprudence are bystanders in an age of aggressive efforts to propagandize and control online speech. While the Amendment does wall off the government’s most coercive technique—directly punishing disfavored speakers or the press—that’s just one part of the problem.

If it seems that the First Amendment’s main presumptions are obsolete, what might be done? There are two basic answers to this question. The first is to admit defeat and suggest that the role of the political First Amendment will be confined to harms that fall within the original 1920s paradigm. There remains important work to be done here, as protecting the press and other speakers from explicit government censorship will continue to be essential. And perhaps this is all that might be expected from the Constitution (and the judiciary). The second—and more ambitious—answer is to imagine how First Amendment doctrine might adapt to the kinds of speech manipulation described above. In some cases, this could mean that the First Amendment must broaden its own reach to encompass new techniques of speech control. In other cases, it could mean that the First Amendment must step slightly to the side and allow different legal tools—like the enforcement of existing or as-yet-to-be-created criminal statutes—to do the lion’s share of the work needed to promote a healthy speech environment.

Accepting a Limited First Amendment

If we accept the premise that the First Amendment cannot itself address the issues here discussed, reform initiatives must center on the behaviors of major private parties that are, in practice, the most important speech brokers of our times. What naturally emerges is a debate over the public duties of both “the media,” traditionally understood, and of major Internet speech platforms like Facebook, Twitter, and Google. At its essence, the debate boils down to asking whether these platforms should adopt (or be forced to adopt) norms and policies traditionally associated with twentieth-century journalism. 86 86. For a sample of the debate, see Robyn Caplan, Like It or Not, Facebook Is Now a Media Company , N.Y. Times (May 17, 2016), https://www.nytimes.com/roomfordebate/2016/05/17/is-facebook-saving-journalism-or-ruining-it/like-it-or-not-facebook-is-now-a-media-company.

We often take for granted the press’s role as a bulwark against the speech control techniques described in this paper. Ever since the rise of “objectivity” and “independence” norms in the 1920s, along with the adoption of formal journalism codes of ethics, the press has tried to avoid printing mere rumors or false claims, knowingly serving as an arm of government propaganda efforts, or succumbing to the influence of business interests. 87 87. See Stephen J.A. Ward, The Invention of Journalism Ethics, First Edition: The Path to Objectivity and Beyond (2006); David T.Z. Mindich, Just the Facts: How “Objectivity” Came to Define American Journalism (2000). It has also guaranteed reporters some security from attacks and abuse. The press may not have performed these duties perfectly, and there have been the usual debates about what constitutes a “fact” or “objectivity.” But the aspiration exists, and it succeeds in filtering out many obvious distortions.

In contrast, the major speech platforms, born as tech firms, have become players in the media world almost by accident. By design, they have none of the filters or safeguards that the press historically has employed. There are advantages to this design: it yields the appealing idea that anyone, and not only professionals, might have her say. In practice, it has precipitated a great flourishing of speech in various new forms, from blogging to user-created encyclopedias to social media. 88 88. See Clay Shirky, Here Comes Everybody: The Power of Organizing Without Organizations (2008); cf. William Fisher, Theories of Intellectual Property , in New Essays in the Legal and Political Theory of Property 168, 193 (Stephen R. Munzer ed., 2001) (describing semiotic democracy). As Volokh prophesied in 1995: “Cheap speech will mean that far more speakers—rich and poor, popular and not, banal and avant garde—will be able to make their work available to all.” 89 89. Volokh, supra note 31, at 1807. But it has also meant, as we’ve seen, that the platforms have been vulnerable to tactics that weaponize speech and use the openness of the Internet as ammunition. The question now before us is whether the platforms need to do more to combat these problems for the sake of political culture in the United States.

We might, for example, fairly focus on Twitter, which has served as a tool for computational propaganda (through millions of fake users), dissemination of fake news, and harassment of speakers. Twitter does little about any of these problems. It has adopted policies supposedly meant to curb abuse, but the policies are widely viewed as ineffective, in no small part because they put the burden of action on the person being harassed. West, for example, describes her attempt to report as “abusive” a user who threatened to rape her with an “anthropomorphic train.” 90 90. Lindy West, Twitter Doesn’t Think These Rape and Death Threats Are Harassment , Daily Dot (Dec. 23, 2014), https://www.dailydot.com/via/twitter-harassment-rape-death-threat-report. Twitter staff responded that the comment was “currently not violating the Twitter Rules.” 91 91. Id. When Twitter’s CEO recently asked, “What’s the most important thing you want to see Twitter improve or create in 2017?” one user responded: a “comprehensive plan for getting rid of the Nazis.” 92 92. Lindy West, I’ve Left Twitter. It Is Unusable for Anyone but Trolls, Robots and Dictators , Guardian (Jan. 3, 2017), https://www.theguardian.com/commentisfree/2017/jan/03/ive-left-twitter-unusable-anyone-but-trolls-robots-dictators-lindy-west. To suggest that private platforms could—and should—be doing more to protect speakers from harassment and abuse is perhaps the clearest remedy for the emerging threats identified above, even if it is not clear at this time exactly what such remedies ought to look like.

The so-called troll problem is among the online world’s oldest, a fixture of early “cyberspace” debates. 93 93. See, e.g. , Julian Dibbell, A Rape in Cyberspace: How an Evil Clown, a Haitian Trickster Spirit, Two Wizards, and a Cast of Dozens Turned a Database Into a Society , Village Voice (Dec. 23, 1993), http://www.juliandibbell.com/texts/bungle_vv.html. Anonymous commentators and mobs have long shown their capacity to poison any environment and, through their vicious and demeaning attacks, chill expression. That old debate also revealed that design can mitigate some of these concerns. For example, consider that Wikipedia does not have a widespread fake news problem. 94 94. See Cade Metz, At 15, Wikipedia Is Finally Finding Its Way to the Truth , Wired (Jan. 15, 2016), https://www.wired.com/2016/01/at-15-wikipedia-is-finally-finding-its-way-to-the-truth. But even if the debate remains similar, the stakes and consequences have changed. In the 1990s, trolls would abuse avatars, scare people off AOL chatrooms, or wreck virtual worlds. 95 95. See Wu, supra note 14, at 292. Today, we are witnessing efforts to destroy the reputations of real people for political purposes, to tip elections, and to influence foreign policy. It is hard to resist the conclusion that the law must be enlisted to fight such scourges.

First Amendment Possibilities

Could the First Amendment find a way to adapt to twenty-first-century speech challenges? How this might be accomplished is far from obvious, and I will freely admit that this paper is of the variety that is intended to ask the question rather than answer it. The most basic stumbling block is well known to lawyers. The First Amendment, like other guarantees in the Bill of Rights, has been understood primarily as a negative right against coercive government action—not as a right against the conduct of nongovernmental actors, or as a right that obliges the government to ensure a pristine speech environment. Tactics such as flooding and purposeful generation of fake news are, by our current ways of thinking, either private action or, at most, the government’s own protected speech.

A few possible adaptations present themselves, and they can be placed in three groups. The first concerns the “state action” doctrine, which is the limit that most obviously constrains the First Amendment from serving as a check on many of the emerging threats to the political speech environment. If a private mob attacks and silences critics of the government, purely of its own volition, under a basic theory of state action there is no role for the First Amendment—even if the mob replicates punishments that the government itself might have wanted to inflict. But what about when the mob is not quite as independent as it first appears? The First Amendment’s under-discussed “accomplice liability” doctrine may become of increasing importance if, in practice, governmental units or politicians have a hand in encouraging, coordinating, or otherwise providing direction to what might seem like private parties.

A second possibility is expanding the category of “state action” itself to encompass the conduct of major speech platforms like Facebook or Twitter. However, as discussed below, I view this as an unpromising and potentially counterproductive solution.

Third, the project of realizing a healthier speech environment may depend more on what the First Amendment permits than on what it prevents or requires. Indeed, some of the most important remedies for the challenges described in this paper may consist of new laws or more aggressive enforcement of existing laws. The federal cyberstalking statute, 96 96. This statute proscribes conduct that is intended to “harass” or “intimidate,” is carried out using “any interactive computer service or electronic communication service or electronic communication system of interstate commerce,” and “would be reasonably expected to cause substantial emotional distress.” 18 U.S.C. § 2261A(2)(B). for example, has already been used to protect the press from egregious trolling and harassment. 97 97. See, e.g. , United States v. Moreland , 207 F. Supp. 3d 1222, 1229 (N.D. Okla. 2016) (holding that the cyberstalking statute’s application to a defendant who sent repeated bizarre and threatening messages to a journalist on social media websites is not barred by the First Amendment). New laws might target foreign efforts to manipulate American elections, or provide better and faster protections for members of the press. Assuming such laws are challenged as unconstitutional, the necessary doctrinal evolution may involve the First Amendment accommodating robust efforts to fight the new tools of speech control.

Let us look a little more closely at each of these possibilities.

State Action—Accomplice Liability

The state action doctrine, once again, limits constitutional scrutiny to (as the name suggests) actions taken by the state. However, in the “troll army” model, punishment of the press and political critics is conducted by ostensibly private parties or foreign governments. Hence, at first glance, such conduct seems unreachable by the Constitution.

Yet as many have observed, the current American President has seemingly directed online mobs to go after his critics and opponents, particularly members of the press. 98 98. See, e.g. , Martin Pengelly and Joanna Walters, Trump Accused of Encouraging Attacks on Journalists with CNN Body-Slam Tweet , Guardian (July 2, 2017), https://www.theguardian.com/us-news/2017/jul/02/trump-body-slam-cnn-tweet-violence-reporters-wrestlemania. Even members of the President’s party have reportedly been nervous to speak their minds, not because of ordinary political pressure but for fear of attack by online mobs. 99 99. See French, supra note 64. And while the directed-mob technique may have been pioneered by Russia and employed by Trump, it is not hard to imagine a future in which other Presidents and powerful leaders sic their loyal mobs on critics, confident that in so doing they may avoid the limits imposed by the First Amendment.

But the state action doctrine may not be as much of a hindrance as this end-run supposes. The First Amendment already has a nascent accomplice liability doctrine that makes state actors, under some circumstances, “liable for the actions of private parties.” 100 100. Blum v. Yaretsky , 457 U.S. 991, 1003 (1982). In Blum v. Yaretsky, the Supreme Court explained that the state can be held responsible for private action “when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State.” 101 101. Id. at 1004. The Fourth Circuit puts it slightly differently: the state is acting when it “has coerced the private actor,” or when it “has sought to evade a clear constitutional duty through delegation to a private actor.” German v. Fox , 267 F. App’x 231, 233 n.* (4th Cir. 2008). The Blum formulation echoes common-law accomplice liability principles: a principal is ordinarily liable for the illegal actions of another party when it both shares the underlying mens rea, or purpose, and acts to encourage, command, support, or otherwise provide aid to that party. 102 102. See, e.g. , N.Y. Penal Law § 20.00 (McKinney); Model Penal Code § 2.06 (Am. Law Inst. 2016). Blum itself was not a First Amendment case, and it left open the question of what might constitute “significant encouragement” in various settings. 103 103. In Blum itself, the Supreme Court stated that the medical decisions made by the nursing home were insufficiently directed by the state to be deemed state action. Blum , 457 U.S. at 1012. But in subsequent cases, the lower courts have provided a greater sense of what factual scenarios might suffice for state accomplice liability in the First Amendment context.

For example, the Sixth Circuit has a line of First Amendment employment retaliation cases that suggest when public actors may be held liable for nominally private conduct. In the 2010 case Paige v. Coyner, the Sixth Circuit addressed the constitutional claims of a woman who was fired by her employer at the behest of a state official (Coyner) after she spoke out at a public meeting in opposition to a new highway development. 104 104. 614 F.3d 273 (6th Cir. 2010). Unlike a typical retaliation-termination case, the plaintiff presented evidence that she was fired because the state official complained to her employer and sought to have her terminated. 105 105. Id. at 276. The Sixth Circuit held that the lawsuit properly alleged state action because Coyner encouraged the firing, even though it was the employer who actually inflicted the punishment. 106 106. Id. at 284. Moreover, the court suggested an even broader liability standard than Blum, holding that the private punishment of a speaker could be attributed to a state official “if that result was a reasonably foreseeable consequence.” 107 107. Id. at 280. More recently, the Sixth Circuit reaffirmed Coyner where a police officer, after a dispute with a private individual, went to her workplace to complain about her with the “reasonably foreseeable” result of having her fired. 108 108. Wells ex rel. Bankr. Estate of Arnone-Doran v. City of Grosse Pointe Farms , 581 F. App’x 469 (6th Cir. 2014). Similar cases can be found in other circuits. 109 109. In a district court case in New Jersey, the court refused to dismiss an action brought by a woman who was fired by her nonprofit after local government officials extensively criticized her for comments she made about law enforcement. Downey v. Coal. Against Rape & Abuse, Inc. , 143 F. Supp. 2d 423 (D.N.J. 2001); see also Lynch v. Southampton Animal Shelter Found. Inc. , 278 F.R.D. 55 (E.D.N.Y. 2011) (denying motion to dismiss where a privatized animal shelter that fired a volunteer who was an animal rights activist was alleged to be a state actor); Ciacciarella v. Bronko , 534 F. Supp. 2d 276 (D. Conn. 2008); Pendleton v. St. Louis County , 178 F.3d 1007 (8th Cir. 1999).

In the political “attack mob” context, it seems that some official encouragement of attacks on the press or other speakers should trigger First Amendment scrutiny. Naturally, those who attack critics of the state merely because they feel inspired to do so by an official’s example do not present a case of state action. (If burdensome enough, however, the official’s own attack might itself be a matter of First Amendment concern.) But more direct encouragement may yield a First Amendment constraint. Consider, for example, an official who publicly singles out a critic and calls for her punishment, or who coordinates with, rewards, or provides direction to the nominally private actors who carry out the attack.

Based on the standards enumerated in Blum and other cases, such scenarios might support a finding of state action and a First Amendment violation. In other words, an official who spurs private censorial mobs to attack a disfavored speaker might—in an appropriately brought lawsuit, contingent on the usual questions of standing and immunity—be subject to a court injunction or even damages, just as if she had performed the attack herself.

State Action—Platforms

The central role played by major speech platforms like Twitter, Google, and Facebook might prompt another question: should the platforms themselves be treated as state actors for First Amendment purposes? Perhaps, like the company town in Marsh v. Alabama, 110 110. 326 U.S. 501 (1946). these companies have assumed sufficiently public duties or importance that they stand “in the shoes of the State.” 111 111. Lloyd Corp., Ltd. v. Tanner , 407 U.S. 551, 569 (1972). While some have argued that this is appropriate, 112 112. See, e.g. , Trevor Puetz, Note, Facebook: The New Town Square , 44 Sw. L. Rev. 385, 387–88 (2014) (arguing that “Facebook should be analyzed under the quasi-municipality doctrine, which allows for the application of freedom of speech protection on certain private property”). there are a number of reasons why treating these platforms as state actors strikes me as an unpromising and undesirable avenue.

First, there are real differences between the Marsh company town and today’s speech platforms. Marsh was a case where the firm had effectively taken over the full spectrum of municipal government duties, including ownership of the sidewalk, roads, sewer systems, and policing. 113 113. Marsh , 326 U.S. at 502-03. The company town was, in most respects, indistinguishable from a traditional government-run locality—it just happened to be private. The residents of Chickasaw had no way of escaping the reach of the company’s power, as the Gulf Shipbuilding Corporation claimed, in Max Weber’s terms, a “monopoly of the legitimate use of physical force.” 114 114. Max Weber, Politics as a Vocation , in From Max Weber: Essays in Sociology 77, 78 (H.H. Gerth & C. Wright Mills eds. & trans., 1946). Even the town’s policeman was paid by the corporation. Marsh , 326 U.S. at 502. To exempt such a company town from constitutional scrutiny would therefore have created the prospect of easy constitutional evasion by privatization.

However important Facebook or Google may be to our speech environment, it seems much harder to say that they are acting like the government all but in name and thereby avoiding the Constitution. It is true that one’s life may be heavily influenced by these and other large companies, but influence alone cannot be the criterion for what makes something a state actor; in that case, every employer would be a state actor, and perhaps so would nearly every family. If the major speech platforms (including the major television networks) ought to be classified as state actors based not on the assumption of specific state-like duties but merely on their influence, it is hard to know where the category ends.

This is not to deny that the leading speech platforms have an important public function. In fact, I have argued in other work that regulation of communications carriers plays a critical role in facilitating speech, comprising a de facto First Amendment tradition. 115 115. Wu, supra note 32. Yet if these platforms are treated as state actors under the First Amendment in all that they do, their ability to handle some of the problems presented here may well be curtailed. This danger is made clear by Cyber Promotions, Inc. v. America Online, a 1996 case against AOL, the major online platform at the time. 116 116. 948 F. Supp. 436 (E.D. Pa. 1996). In Cyber Promotions, a mass-email marketing firm alleged that AOL’s new spam filters violated the First Amendment as, in effect, a form of state censorship. The court distinguished Marsh on factual grounds, but what if it hadn’t? Holding AOL—or today’s major platforms—to be a state actor could have severely limited its ability to fight not only spam but also trolling, flooding, abuse, and myriad other unpleasantries. From the perspective of listeners, it would likely be counterproductive.

Statutory or Law Enforcement Protection of Speech Environments and the Press

Many of the efforts to control speech described in this paper may be best countered not by the judiciary using the First Amendment, but rather by law enforcement using already existing or newly enacted laws. Several possibilities, some targeting trolling and others flooding, are taken up in turn below.

The enactment and vigorous enforcement of such laws would yield a range of challenging constitutional questions that this paper cannot address in their entirety. But the important doctrinal question they hold in common is whether the First Amendment would give sufficient room for such measures. To handle the political speech challenges of our time, I suggest that the First Amendment must be interpreted to give wide latitude for new measures to advance listener interests, including measures that protect some speakers from others.

As a doctrinal matter, such new laws would bring renewed attention to classic doctrines that accommodate the interests of listeners—such as the doctrines of “true threats” and “captive audiences”—as well as to the latitude that courts have traditionally given efforts to protect the electoral process from manipulation. Such laws might also redirect attention to a question originally raised by the Federal Communications Commission’s fairness doctrine and the Red Lion Broadcasting Co. v. FCC decision: how far the government may go solely to promote a better speech environment. 117 117. 395 U.S. 367 (1969).

We might begin with the prosecution of trolls, which could be addressed criminally as a form of harassment or threat. Current case law is relatively receptive to such efforts, for it allows the government to protect listeners from speech designed to intimidate them by creating a fear of violence. 118 118. See, e.g. , Virginia v. Black , 538 U.S. 343 (2003) (describing the true threat doctrine). The death threat and burning cross serve as archetypical examples. As we have seen, trolls frequently operate by describing horrific acts, and not in a manner suggesting good humor or artistic self-expression. 119 119. See Watts v. United States , 394 U.S. 705 (1969) (barring the prosecution of a defendant when a threat was obviously made in jest); see also Elonis v. United States , 135 S. Ct. 2001 (2015) (reversing a conviction where the defendant maintained that his threats were self-expressive rap lyrics). In the Supreme Court’s most recent statement on the matter, it advised that “[i]ntimidation in the constitutionally proscribable sense of the word is a type of true threat, where a speaker directs a threat to a person or group of persons with the intent of placing the victim in fear of bodily harm or death.” 120 120. Black , 538 U.S. at 360. The fact that threats are often not carried out is immaterial; the intent to create a fear of violence is sufficient. 121 121. Id. Given this doctrinal backdrop, there is reason to believe that the First Amendment can already accommodate increased prosecution of those who try to intimidate journalists or other critics.

This belief is supported by the outcome of United States v. Moreland, the first lower court decision to consider the use of the federal cyberstalking statute to protect a journalist from an aggressive troll. 122 122. 207 F. Supp. 3d 1222 (N.D. Okla. 2016). Jason Moreland, the defendant, directed hundreds of aggressive emails, social media comments, and physical mailings at a journalist living and reporting in Washington, D.C. Many of his messages referenced violence and “a fight to the death.” In the face of a multi-faceted First Amendment challenge, the court wrote:

His communications directly referenced violence, indicated frustration that CP would not respond to his hundreds of emails, reflected concern that CP or someone on her behalf wanted to kill Moreland, stated that it was time to “eliminate things” and “fight to the death,” informed plaintiff that he knew where her brother was, and repeatedly conveyed that he expected a confrontation with CP or others on her behalf. . . . [T]he Court concludes that the statute is not unconstitutional as applied, as the words are in the nature of a true threat and speech integral to criminal conduct. 123 123. Id. at 1230-31.

Cases like Moreland suggest that while efforts to reduce trolling might present a serious enforcement challenge, the Constitution will not stand in the way so long as the trolling at issue looks more like true threats than strongly expressed political views.

The constitutional questions raised by government efforts to fight flooding are more difficult. Much depends on the extent to which these efforts are seen as serving important societal interests beyond the quality or integrity of public discourse, such as the protection of privacy or the protection of the electoral process.

Of particular relevance, as more and more of our lives are lived online—for many Americans today, nearly every waking moment is spent in close proximity to a screen—we may be “captive audiences” far more often than in previous decades. The captive audience doctrine, first developed in the 1940s, describes situations in which one is left with no practical means of avoiding unwanted speech. It was developed in cases like Kovacs v. Cooper, 124 124. 336 U.S. 77 (1949). which concerned a city ban on “sound trucks” that drove around broadcasting various messages at a loud volume so as to reach both pedestrians and people within their homes. The Court wrote that “[t]he unwilling listener is not like the passer-by who may be offered a pamphlet in the street but cannot be made to take it. In his home or on the street he is practically helpless to escape this interference with his privacy by loud speakers except through the protection of the municipality.” 125 125. Id. at 86–87; see also Frisby v. Schultz, 487 U.S. 474 (1988) (upholding a municipal ordinance that prohibited focused picketing in front of residential homes). It is worth pondering the extent to which we are now captive audiences in somewhat subtler scenarios, and whether we have developed virtual equivalents to the home—like our various devices or our email inboxes—where it is effectively impossible to avoid certain messages. The idea that one might simply “avert the eyes” 126 126. Erznoznik v. City of Jacksonville , 422 U.S. 205, 211 (1975) (quoting Cohen v. California , 403 U.S. 15, 21 (1971)). as a means to deal with offensive messages seems increasingly implausible in many digital contexts. Relying on cases like Kovacs, the government might seek to develop and enforce “anti-captivity” measures that are designed to protect our privacy or autonomy online.

Other government interests may be implicated by efforts to fight flooding in the form of foreign propaganda. Consider, for instance, a ban on political advertising—including payments to social media firms—by foreign governments or even foreigners in general. Such a ban, if challenged as censorship, might be justified by the state’s compelling interest in defending the electoral process and the “national political community,” in the same manner that the government has justified laws banning foreign campaign contributions. As a three-judge panel of the D.C. district court explained in a recent ruling: “the United States has a compelling interest for purposes of First Amendment analysis in limiting the participation of foreign citizens in activities of American democratic self-government, and in thereby preventing foreign influence over the U.S. political process.” 127 127. Bluman v. FEC , 800 F. Supp. 2d 281, 288 (D.D.C. 2011), aff’d, 565 U.S. 1104 (2012). A related interest—protecting elections—has been called on to justify “campaign-free zones” near polling stations. See Burson v. Freeman , 504 U.S. 191, 211 (1992). It should not be any great step to assert that the United States may also have a compelling interest in preventing foreign interests from manipulating American elections through propaganda campaigns conducted on social media platforms.

I have left for last the question presented by potential new laws premised solely on an interest in improving the political speech environment. These laws would be inspired by the indelible dictum of Alexander Meiklejohn: “What is essential is not that everyone shall speak, but that everything worth saying shall be said” 128 128. Alexander Meiklejohn, Free Speech and Its Relation to Self-Government 25 (1948). —and, to some meaningful degree, heard. Imagine, for instance, a law that makes any social media platform with significant market power a kind of trustee operating in the public interest, and requires that it actively take steps to promote a healthy speech environment. This could, in effect, be akin to a “fairness doctrine” for social media.

For those unfamiliar with it, the fairness doctrine for decades obligated broadcasters to use their power over spectrum to improve the conditions of political speech in the United States. 129 129. See Report on Editorializing by Broadcast Licensees , 13 F.C.C. 1246 (1949). It required that broadcasters affirmatively cover matters of public concern and do so in a “fair” manner. Furthermore, it created a right for anyone to demand the opportunity to respond to opposing views using the broadcaster’s facilities. 130 130. Applicability of the Fairness Doctrine in the Handling of Controversial Issues of Public Importance, 29 Fed. Reg. 10,426 (July 25, 1964). At the time of the doctrine’s first adoption in 1949, the First Amendment remained largely inert; by the 1960s, a constitutional challenge to the regulations had become inevitable. In the 1969 Red Lion decision, the Supreme Court upheld the doctrine and in doing so described the First Amendment’s goals as follows:

It is the right of the viewers and listeners, not the right of the broadcasters, which is paramount. It is the purpose of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail, rather than to countenance monopolization of that market, whether it be by the Government itself or a private licensee. 131 131. Red Lion Broad. Co. v. FCC , 395 U.S. 367, 390 (1969).

While Red Lion has never been explicitly overruled, it has been limited by subsequent cases, and it is now usually said to be dependent on the scarcity of spectrum suitable for broadcasting. 132 132. See, e.g. , Miami Herald Pub. Co. v. Tornillo , 418 U.S. 241 (1974) (striking down a state “right of reply” statute as applied to a newspaper). The FCC withdrew the fairness doctrine in 1987, opining that it was unconstitutional, 133 133. Syracuse Peace Council , 2 F.C.C. Rcd. 5043, 5047 (1987). and Red Lion has been presumed dead or overruled by a variety of government officials and scholars. 134 134. See, e.g. , Thomas W. Hazlett et al., The Overly Active Corpse of Red Lion , 9 Nw. J. Tech. & Intell. Prop. 51 (2010). Nonetheless, in the law, no doctrine is ever truly dead. All things have their season, and the major changes in our media environment seem to have strengthened the constitutional case for laws explicitly intended to improve political discourse.

To make my own preferences clear, I personally would not favor the creation of a fairness doctrine for social media or other parts of the web. That kind of law, I think, would be too hard to administer, too prone to manipulation, and too apt to flatten what has made the Internet interesting and innovative. But I could be overestimating those risks, and my own preferences do not bear on the question of whether Congress has the power to pass such a law. Given the problems discussed in this paper, among others, Congress might conclude that our political discourse has been deeply damaged, threatening not just coherent governance but the survival of the republic. On that basis, I think the elected branches should be allowed, within reasonable limits, to try returning the country to the kind of media environment that prevailed in the 1950s. Stated differently, it seems implausible that the First Amendment would forbid Congress from trying to cultivate more bipartisanship or nonpartisanship online. The justification for such a law would turn on the trends described above: the increasing scarcity of human attention, the rise to dominance of a few major platforms, and the pervasive evidence of negative effects on our democratic life.

Conclusion

It is obvious that changes in communications technologies will present new challenges for the First Amendment. For nearly twenty years now, scholars have been debating how the rise of the popular Internet might unsettle what the First Amendment takes for granted. Yet the future retains its capacity to surprise, for the emerging threats to our political speech environment are different from what many predicted. Few forecast that speech itself would become a weapon of censorship. In fact, some might say that celebrants of open and unfettered channels of Internet expression (myself included) are being hoisted on their own petard, as those very same channels are today used as ammunition against disfavored speakers. As such, the emerging methods of speech control present a particularly difficult set of challenges for those who share the commitment to free speech articulated so powerfully in the founding—and increasingly obsolete—generation of First Amendment jurisprudence.

4 There may, moreover, be more work to be done now in areas such as libel law. Given the raft of libel-trolling suits that burden small presses, stronger and faster First Amendment protection has arguably become necessary.

5 See, e.g. , Citizens United v. FEC , 558 U.S. 310 (2010).

6 Sorrell v. IMS Health Inc ., 564 U.S. 552 (2011).

7 Matal v. Tam , 137 S. Ct. 1744 (2017).

8 The merits of the recent economic-rights case law are not the subject of this paper. Suffice it to say that these rulings have some academic supporters and many detractors. See, e.g. , Amanda Shanor, The New Lochner , 2016 Wis. L. Rev. 133 (2016); Jeremy K. Kessler, The Early Years of First Amendment Lochnerism , 116 Colum. L. Rev. 1915 (2016); Samuel R. Bagenstos, The Unrelenting Libertarian Challenge to Public Accommodations Law , 66 Stan. L. Rev. 1205 (2014); Leslie Kendrick, First Amendment Expansionism , 56 Wm. & Mary L. Rev. 1199 (2015).

9 The First Amendment was even silent when Congress passed its first laws restricting speech in 1798, not long after the adoption of the Bill of Rights and with the approval of many of the framers. This fact, among others, has long been slightly embarrassing to would-be “originalists” who by disposition would like to believe in a strong First Amendment. Robert Bork was rare among the first wave of originalists in calling attention to the Amendment’s unpromising early history. See Robert H. Bork, Neutral Principles and Some First Amendment Problems , 47 Ind. L.J. 1, 22 (1971).

10 Sedition Act of 1918, Pub. L. No. 65-150 (1918); Espionage Act of 1917, Pub. L. No. 65-24 (1917).

11 Debs v. United States , 249 U.S. 211, 214 (1919).

12 See, e.g. , Alan Axelrod, Selling the Great War: The Making of American Propaganda (2009); James R. Mock and Cedric Larson, Words That Won the War: The Story of the Committee on Public Information, 1917–1919 (1968).

13 George Creel, How We Advertised America: The First Telling of the Amazing Story of the Committee on Public Information That Carried the Gospel of Americanism to Every Corner of the Globe 5 (1920).

14 As described in Tim Wu, The Attention Merchants (2016), and sources cited therein.

15 On this tension, see Vincent Blasi, Rights Skepticism and Majority Rule at the Birth of the Modern First Amendment (2017) (unpublished manuscript) (on file with author).

16 See Masses Pub. Co. v. Patten , 244 F. 535, 543 (S.D.N.Y.), rev’d, 246 F. 24 (2d Cir. 1917) (granting a preliminary injunction to the publisher of The Masses , a revolutionary journal that the Postmaster General intended to withhold from the mails because it featured cartoons and text critical of the draft); see also Vincent Blasi, Learned Hand and the Self-Government Theory of the First Amendment: Masses Publishing Co. v. Patten, 61 U. Colo. L. Rev. 1 (1990).

17 See, e.g. , Whitney v. California , 274 U.S. 357, 372 (1927) (Brandeis, J., concurring); Abrams v. United States , 250 U.S. 616, 624 (1919) (Holmes, J., dissenting).

18 In cases like Dennis v. United States, 341 U.S. 494 (1951), and Brandenburg v. Ohio , 395 U.S. 444 (1969).

19 See, e.g. , Encyclopedia of the American Constitution: Supplement I 585 (Leonard W. Levy, Kenneth L. Karst and Adam Winkler eds., 1992) (describing the absence of a constitutional challenge to the Committee on Public Information).

20 Cf. Alexander Meiklejohn, The First Amendment Is an Absolute , 1961 Sup. Ct. Rev. 245, 255 (arguing that the First Amendment “protects the freedom of those activities of thought and communication by which we ‘govern’”); Bork, supra note 9, at 27-28 (defining political speech as “speech concerned with governmental behavior, policy or personnel, whether the governmental unit involved is executive, legislative, judicial, or administrative”).

21 354 U.S. 476 (1957).

22 425 U.S. 748 (1976).

23 See, e.g. , First Nat’l Bank of Boston v. Bellotti , 435 U.S. 765 (1978).

24 See, e.g. , Buckley v. Valeo , 424 U.S. 1 (1976).

25 The trend is summarized well in Morgan N. Weiland, Expanding the Periphery and Threatening the Core: The Ascendant Libertarian Speech Tradition , 69 Stan. L. Rev. 1389 (2017).

26 In other words, this is a paper about speech and reporting concerned with how we are governed, which includes political criticism, campaigning, and public debates over policy or specific regulatory or legislative initiatives. By focusing on the political First Amendment, I am not taking the position that other domains of the First Amendment are unimportant. Cf. Robert Post, Meiklejohn’s Mistake: Individual Autonomy and the Reform of Public Discourse , 64 U. Colo. L. Rev. 1109 (1993).

27 A metaphor suggested, though not actually used, by Justice Holmes. See Vincent Blasi, Holmes and the Marketplace of Ideas , 2004 Sup. Ct. Rev. 1, 4 (2004).

28 This corresponds to Balkin’s “Old School” speech regulation techniques. See Balkin, supra note 2, at 2298.

29 Herbert A. Simon, Designing Organizations for an Information-Rich World , in Computers, Communications, and the Public Interest 37, 40-41 (Martin Greenberger ed., 1971).

30 Consider that information—including speech—is not actually received or processed unless it attracts the fickle attention of the listener. As William James first pointed out in the 1890s, and as neuroscientists have confirmed, the brain ignores nearly everything, paying attention to a very limited stream of information. William James, The Principles of Psychology 403-04 (1890). At a minimum, the total capacity for attention is limited by time—168 hours a week—which becomes of particular relevance when the listeners in question are members of Congress, regulators, or others who are the supposed customers in the marketplace for good policy ideas.

31 See, e.g. , James Gleick, The Information: A History, a Theory, a Flood (2011); Eugene Volokh, Cheap Speech and What It Will Do , 104 Yale L.J. 1805 (1995).

32 See, e.g. , Balkin, supra note 2; Jeffrey Rosen, The Deciders: The Future of Privacy and Free Speech in the Age of Facebook and Google , 80 Fordham L. Rev. 1525 (2012); Tim Wu, Is Filtering Censorship? The Second Free Speech Tradition , Brookings Institution (Dec. 27, 2010), https://www.brookings.edu/research/is-filtering-censorship-the-second-free-speech-tradition.

33 See generally Wu, supra note 14.

34 See Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (2011). Scholarly consideration of filtering came earlier. See, e.g. , Cass Sunstein, Republic.com (2001); Dan Hunter, Philippic.com , 90 Cal. L. Rev. 611 (2002) (reviewing Sunstein’s Republic.com ); Elizabeth Garrett, Political Intermediaries and the Internet “Revolution,” 34 Loy. L.A. L. Rev. 1055 (2001).

35 Borrowing a term popularized by Nicholas Negroponte, the founder of MIT’s Media Lab. See Nicholas Kristof, The Daily Me , N.Y. Times (Mar. 18, 2009), http://www.nytimes.com/2009/03/19/opinion/19kristof.html.

36 See Lauren Drell, Why “Time Spent” Is One of Marketing’s Favorite Metrics , Mashable (Dec. 13, 2013), http://mashable.com/2013/12/13/time-spent-metrics/#8apdGV9yugq3.

37 See Wu, supra note 14, at 303.

38 See Tim Wu, How Donald Trump Wins by Losing , N.Y. Times (Mar. 3, 2017), https://www.nytimes.com/2017/03/03/opinion/sunday/how-donald-trump-wins-by-losing.html?_r=1.

39 Kathleen M. Sullivan, First Amendment Intermediaries in the Age of Cyberspace, 45 UCLA L. Rev. 1653, 1669 (1998).

40 Lessig, supra note 2; see also Lawrence Lessig, Code and Other Laws of Cyberspace (1999).

41 See Jonathan Zittrain, Internet Points of Control , 44 B.C. L. Rev. 653 (2003); Christopher S. Yoo, Free Speech and the Myth of the Internet as an Unintermediated Experience , 78 Geo. Wash. L. Rev. 697 (2010); Jeffrey Rosen, Google’s Gatekeepers , N.Y. Times (Nov. 28, 2008), http://www.nytimes.com/2008/11/30/magazine/30google-t.html; Goldsmith and Wu, supra note 2.

42 The Open Net Initiative, launched as a collaboration between several universities in 2004, was and is perhaps the most ambitious documentation of online censorship around the world. See Evan M. Vittor, HLS Team to Study Internet Censorship , Harvard Crimson (Apr. 28, 2004), http://www.thecrimson.com/article/2004/4/28/hls-team-to-study-internet-censorship.

43 See generally Balkin, supra note 2 (describing “new school” speech control).

44 See, e.g. , R. Kelly Garrett, Echo Chambers Online?: Politically Motivated Selective Exposure Among Internet News Users , 14 J. Computer-Mediated Comm. 265 (2009); W. Lance Bennett and Shanto Iyengar, A New Era of Minimal Effects? The Changing Foundations of Political Communication , 58 J. Comm. 707 (2008); Sofia Grafanaki, Autonomy Challenges in the Age of Big Data , 27 Fordham Intell. Prop. Media & Ent. L.J. 803 (2017).

45 For a notable partial exception, see Danielle Keats Citron, Cyber Civil Rights , 89 B.U. L. Rev. 61 (2009) (discussing online hate mob attacks on women and other vulnerable groups).

46 For a full account of the speech-restrictive measures taken by the U.S. government during wartime, see Geoffrey Stone, Perilous Times: Free Speech in Wartime, From the Sedition Act of 1798 to the War on Terrorism (2004).

47 See generally Tufekci, supra note 1.

48 Gary King, Jennifer Pan, and Margaret E. Roberts, How Censorship in China Allows Government Criticism but Silences Collective Expression , 107 Am. Pol. Sci. Rev. 326 (2013); Gary King, Jennifer Pan, and Margaret E. Roberts, How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument , 111 Am. Pol. Sci. Rev. 484 (2017) [hereinafter King et al., 2017 APSR].

49 King et al., 2017 APSR, supra note 48, at 496 (emphasis omitted).

51 The term was coined in an article about a cease-and-desist letter sent by Marco Beach Ocean Resort to Urinal.net—a “site [that] has hundreds of fans who regularly submit pictures of urinals they take from locations all over the world”—threatening legal action unless the website stopped mentioning the resort’s name alongside photos from its bathroom. The cease-and-desist letter prompted more attention than the original posts on Urinal.net. Mike Masnick, Since When Is It Illegal to Just Mention a Trademark Online? , Techdirt (Jan. 5, 2005), https://www.techdirt.com/articles/20050105/0132239.shtml.

52 Id. However, the Streisand example may be obscuring that many other cease-and-desist letters—even issued by celebrities—never attract much attention.

53 See Sharad Goel et al., The Structural Virality of Online Diffusion , 62 Mgmt. Sci. 180 (2016).

54 See supra note 2. The potential for requiring Internet intermediaries to control speech was something a number of people noticed early in its history. As Lessig observed in 1998, we had already begun to live in an era in which it was clear that networks might be designed to filter some speech and leave other speech untouched, or make intermediaries liable for carrying “forbidden” content. See Lessig, supra note 2. Around that same time, Congress undertook what Balkin later called “collateral censorship” techniques: requiring search engines and others to block copyrighted materials on request, and requiring hosts to prevent minors from accessing indecent content. See, e.g. , Online Copyright Infringement Liability Limitation Act, Pub. L. No. 105-304 (1998); Communications Decency Act of 1996, Pub. L. No. 104-104 (1996). The potential for foreign governments to rely on targeting search engines, ISPs, and major hosting sites as a technique of control was also recognized. See Goldsmith and Wu, supra note 2. By now, it has become common knowledge that platforms like Google and Facebook exert a major influence on the speech environment, and the techniques of targeting intermediaries have evolved considerably. For a comprehensive survey of such techniques, see Balkin, supra note 2; see also Seth F. Kreimer, Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link , 155 U. Pa. L. Rev. 11 (2006).

55 In fact, recent reports suggest that President Trump and his associates may have already engaged in the distraction and cheerleading techniques described above. See, e.g. , Oliver Darcy, Lawsuit: Fox News Concocted Seth Rich Story with Oversight from White House , CNN (Aug. 2, 2017), http://money.cnn.com/2017/08/01/media/rod-wheeler-seth-rich-fox-news-lawsuit/index.html; Taylor Link, President Trump Gave a Shout Out to an Apparent Twitter Bot, Hasn’t Removed the Retweet , Salon (Aug. 6, 2017), http://www.salon.com/2017/08/06/president-trump-gave-a-shout-out-to-an-apparent-twitter-bot-hasnt-removed-the-retweet.

56 Max Seddon, Documents Show How Russia’s Troll Army Hit America , BuzzFeed News (June 2, 2014), https://www.buzzfeed.com/maxseddon/documents-show-how-russias-troll-army-hit-america?utm_term=.dcXBvNmo9#.xj5842QBx; Pomerantsev, supra note 3; Russia Update: Questions About Putin’s Health After Canceled Meetings & Vague Answers , Interpreter (Mar. 12, 2015), http://www.interpretermag.com/russia-update-march-12-2015/#7432.

57 Peter Pomerantsev, Inside the Kremlin’s Hall of Mirrors , Guardian (Apr. 9, 2015), https://www.theguardian.com/news/2015/apr/09/kremlin-hall-of-mirrors-military-information-psychology.

58 Pomerantsev, supra note 3.

59 Cf. Citron, supra note 45.

60 Andrew Higgins, Effort to Expose Russia’s “Troll Army” Draws Vicious Retaliation , N.Y. Times (May 30, 2016), https://www.nytimes.com/2016/05/31/world/europe/russia-finland-nato-trolls.html.

61 See Rachel Roberts, Russia Hired 1,000 People to Create Anti-Clinton “Fake News” in Key US States During Election, Trump-Russia Hearings Leader Reveals , Independent (Mar. 30, 2017), https://www.independent.co.uk/news/world/americas/us-politics/russian-trolls-hilary-clinton-fake-news-election-democrat-mark-warner-intelligence-committee-a7657641.html; Natasha Bertrand, It Looks like Russia Hired Internet Trolls to Pose as Pro-Trump Americans , Business Insider (July 27, 2016), http://www.businessinsider.com/russia-internet-trolls-and-donald-trump-2016-7.

62 Pomerantsev, supra note 3; see also Pomerantsev, supra note 57.

63 See Roberts, supra note 61. The degree to which trolls operating in the United States are funded by or otherwise coordinated with the Russian state is a topic of wide speculation. However, based on leaked documents and whistleblower accounts, at least some of the attacks on Trump critics in 2016 and 2017 were launched by Russia itself. See, e.g. , Alexey Kovalev, Russia’s Infamous “Troll Factory” Is Now Posing as a Media Empire , Moscow Times (Mar. 24, 2017), https://themoscowtimes.com/articles/russias-infamous-troll-factory-is-now-posing-as-a-media-empire-57534.

64 David French, The Price I’ve Paid for Opposing Donald Trump , National Review (Oct. 21, 2016), http://www.nationalreview.com/article/441319/donald-trump-alt-right-internet-abuse-never-trump-movement.

65 Stephen Battaglio, Trump’s Attacks On CNN Aren’t Hurting It One Bit , L.A. Times (Feb. 16, 2017), http://www.latimes.com/business/hollywood/la-fi-ct-cnn-zucker-20170216-story.html.

66 French, supra note 64.

67 Rosa Brooks, And then the Breitbart Lynch Mob Came for Me , Foreign Policy (Feb. 6, 2017), http://foreignpolicy.com/2017/02/06/and-then-the-breitbart-lynch-mob-came-for-me-bannon-trolls-trump.

68 See, e.g. , Katharine Q. Seelye, Protesters Disrupt Speech by “Bell Curve” Author at Vermont College , N.Y. Times (Mar. 3, 2017), https://www.nytimes.com/2017/03/03/us/middlebury-college-charles-murray-bell-curve-protest.html?_r=0.

69 But even more cruelly:
Someone—bored, apparently, with the usual angles of harassment—had made a fake Twitter account purporting to be my dead dad, featuring a stolen, beloved photo of him, for no reason other than to hurt me. The name on the account was “PawWestDonezo,” because my father’s name was Paul West, and a difficult battle with prostate cancer had rendered him “donezo” (goofy slang for “done”) just 18 months earlier. “Embarrassed father of an idiot,” the bio read. “Other two kids are fine, though.” His location was “Dirt hole in Seattle.”
Lindy West, What Happened When I Confronted My Cruelest Troll, Guardian (Feb. 2, 2015), https://www.theguardian.com/society/2015/feb/02/what-happened-confronted-cruellest-troll-lindy-west.

71 J.S. Mill, On Liberty and Other Writings 69 (Stefan Collini ed., Cambridge University Press 1989) (1859) (“These tendencies of the times cause the public to be more disposed than at most former periods to prescribe general rules of conduct, and endeavour to make every one conform to the approved standard.”).

72 See generally Adam Bienkov, Astroturfing: What Is It and Why Does It Matter?, Guardian (Feb. 8, 2012), https://www.theguardian.com/commentisfree/2012/feb/08/what-is-astroturfing.

73 See Goldsmith and Wu, supra note 2 (including an earlier investigation into Chinese censorship innovations).

74 Predictions of the Communist Party’s downfall at the hands of the Internet are surveyed in Chapter 6 of Goldsmith and Wu, supra note 2.

75 King et al., 2017 APSR, supra note 48, at 484.

76 See Pomerantsev, supra note 57.

77 Tufekci, supra note 1, at 246.

78 Id. at 231, 241.

79 Philip N. Howard et al., Junk News and Bots During the U.S. Election: What Were Michigan Voters Sharing over Twitter?, Computational Propaganda Project (Mar. 26, 2017), http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/03/What-Were-Michigan-Voters-Sharing-Over-Twitter-v2.pdf.

80 See Onur Varol et al., Online Human-Bot Interactions: Detection, Estimation, and Characterization, Int’l AAAI Conf. Web & Social Media (Mar. 27, 2017), https://arxiv.org/pdf/1703.03107.pdf (estimating that 9 to 15 percent of active Twitter users are bots).

81 Rebecca Grant, Facebook Has No Idea How Many Fake Accounts It Has—But It Could Be Nearly 140M, VentureBeat (Feb. 3, 2014), https://venturebeat.com/2014/02/03/facebook-has-no-idea-how-many-fake-accounts-it-has-but-it-could-nearly-140m.

82 Patrick Kulp, Bots Are the Latest Weapon in the Net Neutrality Battle, Mashable (May 10, 2017), http://mashable.com/2017/05/10/bots-net-neutrality-comments-fcc/.

83 The Supreme Court reaffirmed last spring that U.S. government propaganda is outside the reach of the First Amendment. See Matal v. Tam, 137 S. Ct. 1744, 1758 (2017) (noting, in dicta, that “the First Amendment did not demand that the Government balance the message of [pro-World War II] posters by producing and distributing posters encouraging Americans to refrain from engaging in these activities”). For an argument that such propaganda ought to be subject to First Amendment controls, see William W. Van Alstyne, The First Amendment and the Suppression of Warmongering Propaganda in the United States: Comments and Footnotes, 31 Law & Contemp. Probs. 530 (1966).

84 See Wooley v. Maynard, 430 U.S. 705 (1977).

85 See, e.g., West Virginia State Bd. of Educ. v. Barnette, 319 U.S. 624 (1943); see also Linmark Assocs., Inc. v. Willingboro Twp., 431 U.S. 85 (1977). Other constraints are surveyed in Mark G. Yudof, When Governments Speak: Toward a Theory of Government Expression and the First Amendment, 57 Tex. L. Rev. 863 (1979).

86 For a sample of the debate, see Robyn Caplan, Like It or Not, Facebook Is Now a Media Company, N.Y. Times (May 17, 2016), https://www.nytimes.com/roomfordebate/2016/05/17/is-facebook-saving-journalism-or-ruining-it/like-it-or-not-facebook-is-now-a-media-company.

87 See Stephen J.A. Ward, The Invention of Journalism Ethics, First Edition: The Path to Objectivity and Beyond (2006); David T.Z. Mindich, Just the Facts: How “Objectivity” Came to Define American Journalism (2000).

88 See Clay Shirky, Here Comes Everybody: The Power of Organizing Without Organizations (2008); cf. William Fisher, Theories of Intellectual Property, in New Essays in the Legal and Political Theory of Property 168, 193 (Stephen R. Munzer ed., 2001) (describing semiotic democracy).

89 Volokh, supra note 31, at 1807.

90 Lindy West, Twitter Doesn’t Think These Rape and Death Threats Are Harassment, Daily Dot (Dec. 23, 2014), https://www.dailydot.com/via/twitter-harassment-rape-death-threat-report.

92 Lindy West, I’ve Left Twitter. It Is Unusable for Anyone but Trolls, Robots and Dictators, Guardian (Jan. 3, 2017), https://www.theguardian.com/commentisfree/2017/jan/03/ive-left-twitter-unusable-anyone-but-trolls-robots-dictators-lindy-west.

93 See, e.g., Julian Dibbell, A Rape in Cyberspace: How an Evil Clown, a Haitian Trickster Spirit, Two Wizards, and a Cast of Dozens Turned a Database Into a Society, Village Voice (Dec. 23, 1993), http://www.juliandibbell.com/texts/bungle_vv.html.

94 See Cade Metz, At 15, Wikipedia Is Finally Finding Its Way to the Truth, Wired (Jan. 15, 2016), https://www.wired.com/2016/01/at-15-wikipedia-is-finally-finding-its-way-to-the-truth.

95 See Wu, supra note 14, at 292.

96 This statute proscribes conduct that is intended to “harass” or “intimidate,” is carried out using “any interactive computer service or electronic communication service or electronic communication system of interstate commerce,” and “would be reasonably expected to cause substantial emotional distress.” 18 U.S.C. § 2261A(2)(B).

97 See, e.g., United States v. Moreland, 207 F. Supp. 3d 1222, 1229 (N.D. Okla. 2016) (holding that the cyberstalking statute’s application to a defendant who sent repeated bizarre and threatening messages to a journalist on social media websites is not barred by the First Amendment).

98 See, e.g., Martin Pengelly and Joanna Walters, Trump Accused of Encouraging Attacks on Journalists with CNN Body-Slam Tweet, Guardian (July 2, 2017), https://www.theguardian.com/us-news/2017/jul/02/trump-body-slam-cnn-tweet-violence-reporters-wrestlemania.

99 See French, supra note 64.

100 Blum v. Yaretsky, 457 U.S. 991, 1003 (1982).

101 Id. at 1004. The Fourth Circuit puts it slightly differently: the state is acting when it “has coerced the private actor,” or when it “has sought to evade a clear constitutional duty through delegation to a private actor.” German v. Fox, 267 F. App’x 231, 233 n.* (4th Cir. 2008).

102 See, e.g., N.Y. Penal Law § 20.00 (McKinney); Model Penal Code § 2.06 (Am. Law Inst. 2016).

103 In Blum itself, the Supreme Court stated that the medical decisions made by the nursing home were insufficiently directed by the state to be deemed state action. Blum, 457 U.S. at 1012.

104 614 F.3d 273 (6th Cir. 2010).

108 Wells ex rel. Bankr. Estate of Arnone-Doran v. City of Grosse Pointe Farms, 581 F. App’x 469 (6th Cir. 2014).

109 In a district court case in New Jersey, the court refused to dismiss an action brought by a woman who was fired by her nonprofit after local government officials extensively criticized her for comments she made about law enforcement. Downey v. Coal. Against Rape & Abuse, Inc., 143 F. Supp. 2d 423 (D.N.J. 2001); see also Lynch v. Southampton Animal Shelter Found. Inc., 278 F.R.D. 55 (E.D.N.Y. 2011) (denying motion to dismiss where a privatized animal shelter that fired a volunteer who was an animal rights activist was alleged to be a state actor); Ciacciarella v. Bronko, 534 F. Supp. 2d 276 (D. Conn. 2008); Pendleton v. St. Louis County, 178 F.3d 1007 (8th Cir. 1999).

110 326 U.S. 501 (1946).

111 Lloyd Corp., Ltd. v. Tanner, 407 U.S. 551, 569 (1972).

112 See, e.g., Trevor Puetz, Note, Facebook: The New Town Square, 44 Sw. L. Rev. 385, 387–88 (2014) (arguing that “Facebook should be analyzed under the quasi-municipality doctrine, which allows for the application of freedom of speech protection on certain private property”).

113 Marsh, 326 U.S. at 502–03.

114 Max Weber, Politics as a Vocation, in From Max Weber: Essays in Sociology 77, 78 (H.H. Gerth & C. Wright Mills eds. & trans., 1946). Even the town’s policeman was paid by the corporation. Marsh, 326 U.S. at 502.

115 Wu, supra note 32.

116 948 F. Supp. 436 (E.D. Pa. 1996).

117 395 U.S. 367 (1969).

118 See, e.g., Virginia v. Black, 538 U.S. 343 (2003) (describing the true threat doctrine).

119 See Watts v. United States, 394 U.S. 705 (1969) (barring the prosecution of a defendant when a threat was obviously made in jest); see also Elonis v. United States, 135 S. Ct. 2001 (2015) (reversing a conviction where the defendant maintained that his threats were self-expressive rap lyrics).

120 Black, 538 U.S. at 360.

122 207 F. Supp. 3d 1222 (N.D. Okla. 2016).

123 Id. at 1230–31.

124 336 U.S. 77 (1949).

125 Id. at 86–87; see also Frisby v. Schultz, 487 U.S. 474 (1988) (upholding a municipal ordinance that prohibited focused picketing in front of residential homes).

126 Erznoznik v. City of Jacksonville, 422 U.S. 205, 211 (1975) (quoting Cohen v. California, 403 U.S. 15, 21 (1971)).

127 Bluman v. FEC, 800 F. Supp. 2d 281, 288 (D.D.C. 2011), aff’d, 565 U.S. 1104 (2012). A related interest—protecting elections—has been called on to justify “campaign-free zones” near polling stations. See Burson v. Freeman, 504 U.S. 191, 211 (1992).

128 Alexander Meiklejohn, Free Speech and Its Relation to Self-Government 25 (1948).

129 See Report on Editorializing by Broadcast Licensees, 13 F.C.C. 1246 (1949).

130 Applicability of the Fairness Doctrine in the Handling of Controversial Issues of Public Importance, 29 Fed. Reg. 10,426 (July 25, 1964).

131 Red Lion Broad. Co. v. FCC, 395 U.S. 367, 390 (1969).

132 See, e.g., Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241 (1974) (striking down a state “right of reply” statute as applied to a newspaper).

133 Syracuse Peace Council, 2 F.C.C. Rcd. 5043, 5047 (1987).

134 See, e.g., Thomas W. Hazlett et al., The Overly Active Corpse of Red Lion, 9 Nw. J. Tech. & Intell. Prop. 51 (2010).

Tim Wu is the Isidor and Seville Sulzbacher Professor of Law at Columbia Law School.


Responses

Reflections on Whether the First Amendment Is Obsolete

Not Waving but Drowning: Saving the Audience from the Floods