
The Supreme Court must decide if it wants to own Twitter

The justices risk miring the entire federal judiciary in the content moderation wars.

In this photo illustration, a notification from Twitter appears on a tweet by former President Donald Trump that the social media platform says violated its policy, on May 29, 2020. Justin Sullivan/Getty Images
Ian Millhiser is a senior correspondent at Vox, where he focuses on the Supreme Court, the Constitution, and the decline of liberal democracy in the United States. He received a JD from Duke University and is the author of two books on the Supreme Court.

The Twitter Wars have arrived at the Supreme Court.

On Halloween, the Supreme Court will hear the first two in a series of five cases the justices plan to decide in their current term that ask what the government’s relationship should be with social media outlets like Facebook, YouTube, or Twitter (the social media app that Elon Musk insists on calling “X”).

These first two cases are, admittedly, the lowest-stakes of the lot — at least from the perspective of ordinary citizens who care about free speech. Together, O’Connor-Ratcliff v. Garnier and Lindke v. Freed involve three social media users who did nothing more than block someone on their Twitter or Facebook accounts. But these three social media users are also government officials. And when a government official blocks someone, that raises very thorny First Amendment questions that are surprisingly difficult to sort out.

Two of the three other cases, meanwhile, ask whether the government may order social media sites to publish content they do not wish to publish — something that, under longstanding law, is an unambiguous violation of the First Amendment. The last case concerns whether the government may merely ask these outlets to pull down content.

When the Supreme Court closes out its term this summer, in other words, it could become the central player in the conflicts that drive the Way Too Online community: Which content, if any, should be removed from social media websites? Which users are too toxic for Twitter or Facebook? How much freedom should social media users, and especially government officials, have to censor or block people who annoy them online? And should decisions about who can post online be made by the free market, or by government officials who may have a political stake in the outcome?

Some of the disputes that arise out of these questions are quite weighty. But if the Supreme Court allows itself to get pulled into the Twitter Wars, it risks drowning the judiciary in a deluge of inconsequential cases that have no business being heard by judges. For every president banned by Twitter, there is a simply astonishing array of ordinary moderation decisions lurking behind the scenes.

As Twitter recently told the justices, since August 2015 it has “terminated over 1.7 million accounts” for promoting terrorism or other illegal activities — and there are countless other moderation choices that impose one consequence or another on rude people who aren’t calling for terrorism or criminality. The Supreme Court should be extraordinarily cautious before it allows itself to get pulled into fights over content moderation, lest it get dragged into millions of disputes brought by internet trolls.

If the Supreme Court is not careful, it could transform itself into the final word on the most routine and petty online disputes between public officials and their constituents. Worse, by the end of its term, the Court could become the venue of last resort for thousands of aggrieved social media users who are mad that their content has been suppressed.

The Court could end up, in effect, owning Twitter — an unfortunate position that has already turned the richest man in the world into a laughingstock.

So what’s going on in the two Halloween cases?

O’Connor-Ratcliff and Lindke arise out of strikingly similar disputes.

In the first case, Michelle O’Connor-Ratcliff and T.J. Zane, two candidates for school board in a district near San Diego, initially created Facebook and Twitter accounts to promote their candidacies. After they won, they continued to use these pages to interact with constituents and to promote some of their work on the school board.

A dispute arose after two parents of students in this school district started posting lengthy and often repetitive criticisms of the board. These complaints, according to the United States Court of Appeals for the Ninth Circuit panel that heard the O’Connor-Ratcliff case, concerned “race relations in the District, and alleged financial wrongdoing” by a former superintendent. One of these parents “posted 226 identical replies to O’Connor-Ratcliff’s Twitter page, one to each Tweet O’Connor-Ratcliff had ever written on her public account.”

Eventually, O’Connor-Ratcliff blocked these parents from her Facebook page and blocked one of them on Twitter, while Zane also blocked the parents on Facebook. The parents then sued, claiming that they have a First Amendment right to post public comments responding to their elected officials.

The Lindke case involves a similar dispute between James Freed, the city manager in Port Huron, Michigan, and Kevin Lindke, who was blocked from Freed’s Facebook page after Lindke posted comments on that page that were critical of Freed’s handling of the Covid-19 pandemic. Like the plaintiffs in O’Connor-Ratcliff, Lindke claims he has a First Amendment right to continue posting comments on Freed’s Facebook page.

Ordinarily, if a social media user is upset that they were blocked by another user, they can try to take it up with that user. Or maybe they can raise their grievance with the management of Twitter or Facebook. But they certainly would have no business making a federal case out of such a minor dispute.

But the First Amendment imposes very tight restrictions on government officials who engage in viewpoint discrimination. So, to the extent that O’Connor-Ratcliff, Zane, or Freed blocked someone because they disagreed with that person’s opinions or wanted to prevent those opinions from being seen by other people, they potentially violated the First Amendment.

That said, the specific issue before the Supreme Court in O’Connor-Ratcliff and Lindke doesn’t actually involve the First Amendment itself. It instead involves a threshold issue: whether the three defendants in these cases were acting in their capacity as government officials when they blocked the plaintiffs, or whether they were merely acting as private citizens.

As a general rule, the Constitution only imposes limits on state actors. It is unconstitutional for the government to censor speech, but the First Amendment imposes no limits on private citizens who block social media users, on private companies that refuse to publish content they do not like, or on private individuals who tell someone else to “shut up.” Difficult questions sometimes arise when a government official takes an action that would be unconstitutional if they did it on the job — but it is unclear whether they were on the job.

Imagine, for example, that an off-duty police officer spots two of his neighbors engaged in a fight, and that he uses excessive force to break this fight up. If the officer was acting as a cop when he did so, he could face a constitutional lawsuit in federal court. If he was merely acting as a private citizen, he may still be liable for battery in state court, but the Constitution would have nothing to say about his actions.

The Supreme Court has handed down several precedents instructing lower courts on how to determine whether a government employee was acting within the scope of their employment when they took an allegedly unconstitutional action. In cases involving police officers, for example, the Court has placed a good deal of emphasis on whether the officer displayed their badge, or otherwise “purported to exercise the authority” of a government official.

But social media is a relatively new innovation. And the Supreme Court has not yet provided guidance on when a public official who moderates social media content is exercising the authority of their office.

These cases are further complicated because the specific social media accounts at issue in O’Connor-Ratcliff and Lindke were sometimes used to discuss governmental matters and sometimes used to discuss other things. In the Lindke case, for example, Freed used his Facebook page both as a personal webpage — where he posted nongovernmental content such as photos of his daughter and Bible verses — and as a place where Facebook users could read press releases and other content relating to his official duties as city manager.

The Court, in other words, now faces the unenviable task of having to figure out which posts by public officials are sufficiently related to their jobs that those posts should be attributed to the government, and not simply to a private citizen who works for the government.

It’s really hard to come up with a legal test to determine when government officials are acting as government officials

Though the Supreme Court has decided quite a few cases asking whether a particular government official was acting in their official capacity when they took an allegedly unconstitutional action, the Court often emphasizes just how difficult it is to decide marginal cases. As the Court said in Jackson v. Metropolitan Edison (1974), “the question whether particular conduct is ‘private,’ on the one hand, or ‘state action,’ on the other, frequently admits of no easy answer.”

The briefs in the O’Connor-Ratcliff and Lindke cases propose a variety of different sorting mechanisms that the Court could use to determine when a government official is on the clock when they post on social media — indeed, they propose enough possible tests that it would be tedious to list them here. All of them exist on a spectrum between tests that would provide more certainty to government officials about what they can do without risking a lawsuit, and tests that are more flexible and give judges more ability to determine whether a particular social media post should be attributed to the government.

The Sixth Circuit, which heard the Lindke case, erred on the side of certainty. Its opinion (written by Judge Amul Thapar, a Trump appointee with close ties to the Federalist Society) determined that the Constitution applies to a government official’s social media posts only if they were made “pursuant to his actual or apparent duties” (such as if a state law requires the official to maintain a social media presence, or if the official posted to an account owned by the government), or if the official posted “using his state authority,” such as if Freed had relied on his own government-employed staff to maintain his Facebook page.

The Sixth Circuit concluded that Freed did not act in his official capacity when he posted to his Facebook page, even when he wrote about the local government he belongs to.

The Sixth Circuit’s approach has the advantage of being clear-cut — absent evidence that a public official used government resources or acted pursuant to their official duties, their actions are not constrained by the Constitution. But, as the ACLU warns in an amicus brief, this test is also far too narrow. Among other things, drawing upon a somewhat modified version of the facts of an actual case, the ACLU argues that the Sixth Circuit’s test might prevent anyone from filing a constitutional lawsuit against off-duty police officers who ambush a private citizen and beat that person to death, since doing so is not an official duty of police officers.

Perhaps anticipating this critique, the Sixth Circuit’s opinion suggests that police officers are categorically different from other government officials. A police officer, Judge Thapar wrote, exudes authority “when he wears his uniform, displays his badge, or informs a passerby that he is an officer,” and so a cop who does so is presumptively engaged in state action subject to constitutional restrictions.

But this carveout for cops also sweeps too broadly. Imagine, for example, a police officer who gets off work and then, without changing out of their uniform, immediately drives to their child’s high school to pick up that child and a few friends. Now imagine that the student passengers refer to a classmate using vulgar and sexualized language, and the officer/parent tells them to “stop using that kind of language.”

Ordinarily, the First Amendment does not permit a uniformed law enforcement officer to police the language of a law-abiding citizen. But, in this situation, the officer was clearly acting as a parent and not as a government official. And no reasonable judge would conclude that the officer should be sued in federal court.

As the Supreme Court said in Jackson, coming up with bright-line rules that can distinguish private actions from state actions is quite difficult. And it’s easy to poke holes in the Sixth Circuit’s attempt to do so.

Meanwhile, the Ninth Circuit’s opinion in O’Connor-Ratcliff (written by Judge Marsha Berzon, a former union lawyer and a leading liberal voice within the judiciary) adopts a more flexible approach. Under that opinion, which ruled that the school board members in that case did act as state officials when they posted about school district business online, courts should ask questions like whether an official purports to act as a government official when they post online, or whether their online activity “‘related in some meaningful way’ to their ‘governmental status’ and ‘to the performance of [their] duties.’”

Yet, as the National Republican Senatorial Committee (NRSC) warns in its own amicus brief, the Ninth Circuit’s approach risks chilling the sort of political campaign speech that elected officials routinely engage in, and that receives the highest levels of First Amendment protection.

“For an incumbent,” the NRSC’s brief argues, “an important part of a social media messaging strategy is often to remind voters about his or her job performance.” That means that candidates for reelection will often discuss their past conduct in office and tout their accomplishments. But a candidate may be reluctant to engage in this kind of First Amendment-protected campaign speech online if they fear that discussing “the performance of their duties” will open them up to federal lawsuits.

Accordingly, the NRSC brief asks the justices to “establish a clear test that ensures ambiguity does not chill protected speech.”

It’s a pretty compelling argument. If you’ve spent any time whatsoever on platforms like Twitter, you know about the kind of malevolent, always-willing-to-escalate trolls that flourish on those platforms. Political candidates are not going to want to do anything that could open them up to being sued by their worst reply guys.

But none of that changes the fact that the Supreme Court has repeatedly warned, over the course of many decades, that it is devilishly hard to come up with a legal test that will correctly sort every action taken by a government official into the “private action” or “state action” box. Judge Berzon’s approach, which effectively requires a judge to take a close look at marginal cases and sort them into one box or the other, may be the best thing anyone can come up with.

The judiciary does not want to be responsible for this mess

The three other social media-related lawsuits that the Court will hear this term could also pull the judiciary into countless petty disputes about what is published online and who can see it.

Two of these cases, Moody v. NetChoice and NetChoice v. Paxton, involve unconstitutional Florida and Texas laws that force social media companies to publish or elevate content that they would prefer not to publish, or to publish but not widely distribute. Both laws are explicit attempts to force social media companies to give bigger platforms to conservative voices. As Florida Gov. Ron DeSantis said of his state’s law, it exists to fight supposedly “biased silencing” of “our freedom of speech as conservatives ... by the ‘big tech’ oligarchs in Silicon Valley.”

The two laws are similar but not identical. Both seek to impose strict limits on the major social media platforms’ ability to moderate content they deem offensive or undesirable. Texas’s law, for example, prohibits these platforms from moderating content based on “the viewpoint of the user or another person” or on “the viewpoint represented in the user’s expression or another person’s expression.”

As a practical matter, that means that Twitter or Facebook could not remove someone’s content because it expresses a viewpoint that is common within the Republican Party — such as if it promotes misinformation about Covid-19 vaccines, or if it touts the false belief that Donald Trump won the 2020 election. It also means that these companies could not remove content published by Nazis or Ku Klux Klansmen because the platforms disagree with the viewpoint that all Jews should be exterminated or that the United States should be a white supremacist society.

And both state laws permit private individuals to sue the major social media platforms — under Florida’s law a successful plaintiff can walk away with a payday of $100,000 or more.

The final social media case before the Supreme Court, Murthy v. Missouri, involves an odd decision by the right-wing Fifth Circuit, which effectively ordered much of the Biden administration to stop talking to social media companies about which content they should remove. According to the Justice Department, the federal government often asks social media companies to remove content that seeks to recruit terrorists, that was produced by America’s foreign adversaries, or that spreads disinformation that could harm public health.

As a general rule, the First Amendment forbids the government from coercing media companies to remove content, but it does not prevent government officials from asking a media outlet to voluntarily do so. The reason why the Fifth Circuit’s order in Murthy is so bizarre is that it blurred the line between these two categories, imposing a gag order on the Biden administration despite the fact that the Fifth Circuit did not identify any evidence of actual coercion.

The common theme connecting all five of these Supreme Court cases is that, in each of them, aggrieved social media users want to turn the kind of routine content moderation decisions made by rank-and-file users and by the platforms themselves into matters that must be resolved by the courts.

The plaintiffs in O’Connor-Ratcliff and Lindke want the federal judiciary to get involved when a government official blocks someone online. The state laws animating the two NetChoice cases attempt to make state courts the arbiters of every social media company’s decision to ban a user, or even potentially to use an algorithm that doesn’t always surface conservative content. The Fifth Circuit’s approach in Murthy could potentially trigger a federal lawsuit every time a government official so much as has a conversation with someone at a social media company.

So let me close with a word of advice to the justices: You do not want this fight. Believe me, you do not want to turn yourselves into the final arbiter of what can be posted online. And, if you are not careful with these lawsuits, you are going to wind up overwhelming the court system with piddling disputes filed by social media trolls.

Not long after Elon Musk made his cursed purchase of Twitter, The Verge’s Nilay Patel published a prescient essay laying out why this purchase would inevitably end in disaster. Its title: “Welcome to hell, Elon.”

A core part of Patel’s argument is that social media companies depend on advertisers to pay their bills, and advertisers demand “brand safety,” meaning that they don’t want their paid ads to appear next to a swastika, an anti-vaxxer, or some other content that is likely to offend many potential consumers. As Patel wrote, running a platform like Twitter “means you have to ban racism, sexism, transphobia, and all kinds of other speech that is totally legal in the United States but reveals people to be total assholes.”

The courts are ill-equipped to make these kinds of judgments about which content should be published online, and any attempt by a government body like the judiciary to assume control over these sorts of content moderation decisions would raise serious First Amendment problems. Again, the First Amendment forbids government officials — and judges and justices are government officials — from telling a media company what they can and cannot publish.

And, as Patel emphasizes, the people who are most aggrieved by social media moderation are frequently, well, assholes. They are often the very sort of people who could bombard the courts with lawsuits because they are mad that their tweets aren’t getting much attention and are convinced that they’ve been “shadow banned.”

The fundamental question the justices need to decide in these five social media cases, in other words, is whether they want to make the very same mistake that Elon Musk made. They need to decide whether they want to own every content moderation decision made by companies like Twitter, and every decision by a politician to block an annoying troll.

If the justices are smart, they will do whatever they can to ensure that they do not wind up owning Twitter.
