The Supreme Court appears worried it could break the internet

The justices probably won’t shut down major websites like Google, Twitter, or YouTube.

An office at Google headquarters on February 2, 2023, in Mountain View, California. Justin Sullivan/Getty Images
Ian Millhiser is a senior correspondent at Vox, where he focuses on the Supreme Court, the Constitution, and the decline of liberal democracy in the United States. He received a JD from Duke University and is the author of two books on the Supreme Court.

The Supreme Court heard oral arguments on Tuesday in a case that could potentially break much of the internet — and seemed to realize the risks of heading down that path.

Gonzalez v. Google, the case heard Tuesday, could subject social media websites and even search engines to ruinous liability, potentially forcing these companies to abandon their business models or even shut down.

That said, most of the justices appeared sufficiently spooked by the prospect of destroying how the modern-day internet operates that they are likely to find a way to prevent that outcome. As Justice Elena Kagan warned at one point during the Gonzalez argument, the justices are “not the nine greatest experts on the internet.” So it makes sense for them to approach a case that could fundamentally change how foundational websites operate with a degree of humility.

Gonzalez concerns Section 230 of the Communications Decency Act of 1996, which is arguably the most important legal provision in the history of the internet. Briefly, Section 230 provides that an “interactive computer service” is not liable for “any information provided by another information content provider” that appears on the service’s website.

Thus, for example, if someone publishes a tweet that unlawfully libels another person, the author of that tweet may be sued for defamation, but Twitter cannot be.

But Section 230 is also a very old law, at least by internet standards. While it protects the ability of a website like YouTube or Facebook to publish third-party content without being held liable if that content is unlawful, the plaintiffs in Gonzalez essentially argue that Section 230 does not protect a website’s decision to use algorithms to sort through all the content published on that website and to recommend certain content to certain users.

Thus, under this theory, though Twitter could not be held liable simply because it allows a user to publish a defamatory tweet, it would lose its legal immunity if its algorithm shows that tweet to users who might otherwise not have seen it.

The potential consequences of this legal theory are breathtaking. If Twitter, YouTube, or Facebook can be held liable for any content that is served to users by one of their algorithms, then these websites may need to dismantle the algorithms that make it possible for users to sort through the billions of videos, tweets, and other pieces of content published on them.

The Gonzalez plaintiffs, for example, claim that Google should be liable because the algorithm on YouTube, which Google owns, sometimes served up ISIS recruitment videos to some users — and thus that Google is legally responsible for an ISIS-led attack that killed the plaintiffs’ relative, an American citizen. The same theory could hamstring search engines.

In any event, many of the justices appeared bothered by the possibility that their decision could prevent much of the internet from functioning. So it is likely, if not entirely certain, that a majority of the justices will find a way to let Google keep its Section 230 protections — although it is not entirely clear how they will do so.

Several of the justices expressed fears that the Supreme Court could only make things worse if it weakens Section 230

As Justice Brett Kavanaugh noted during Tuesday’s arguments, the lower federal appeals courts largely agree (admittedly, with some prominent judges dissenting) that websites like YouTube or Twitter should not be held liable if their algorithms surface illegal content — at least assuming that those algorithms aren’t intentionally designed to promote such content.

Moving away from such a rule, Kavanaugh warned, would lead to “economic dislocation” and could do serious harm to companies and their workers who’ve built their businesses on the assumption that their algorithms do not open them up to liability. “Are we really the right body to draw back” from the status quo, Kavanaugh asked, suggesting that, if the law should be changed to abandon the lower courts’ views, that change should come from Congress.

Kavanaugh’s calls for restraint were echoed most vociferously by Justice Kagan, who warned that algorithms are “endemic in the internet” and that a Supreme Court decision holding websites liable for those algorithms could produce massive and unpredictable disruption.

“There is a lot of uncertainty about going the way you would have us go,” Kagan told Eric Schnapper, the plaintiffs’ lawyer.

Schnapper argued that, while Section 230 does protect social media websites from liability for the mere act of publishing users’ illegal content, it does not permit those websites to recommend such content to others. So if Facebook were to, for example, send an email to one of its users recommending that they click on a defamatory Facebook post, the company could be held liable for that recommendation.

Under this theory, an algorithm that ranks content based upon what it thinks website users wish to see — so, for example, every Facebook user’s home feed — is no different from such an email recommending a particular Facebook post, and thus is beyond Section 230’s protections.

But, as Chief Justice John Roberts suggested, it’s not entirely clear where to draw the line between content that is “recommended” by a website or other company, and content that is merely organized by that company. Suppose, for example, that a bookseller has a table where it places all the books related to sports. By grouping all the sports-related books together, this bookseller has engaged in the same sort of content organizing that an algorithm might engage in for a website.

But how is the Court supposed to draw a line between this benign sort of organization and an organizational system that actively recommends content to others? If YouTube’s algorithm starts flagging ISIS videos for a user who has already demonstrated an interest in terrorist organizations, is the algorithm recommending those videos, or merely organizing the terrorist-related content in the same way that the bookstore organized sports-related books?

If the Court is looking for a way to dispose of this case without having to answer such difficult questions, Justice Amy Coney Barrett raised one possible way that it could do so. On Wednesday, the Court will hear oral arguments in a closely related case, Twitter v. Taamneh, which asks whether social media websites can be held liable for hosting ISIS content under a federal anti-terrorism statute — assuming, of course, that Section 230 does not immunize these websites from such lawsuits.

Barrett suggested that, if the Court determined that this anti-terrorism statute does not open social media websites up to such liability, then there’s no reason to decide the Section 230 question — and thus the Court can avoid many of the larger questions about how the internet should operate altogether.

The justices appeared divided on what their opinion should say

Though it seemed likely that at least five of the justices would agree that Google should not face liability every time its algorithms surface content that could lead to a lawsuit, the justices appeared split on what their opinion in Gonzalez should actually say. And at least two justices appeared open to the possibility of reading Section 230 narrowly.

Many of Justice Samuel Alito’s questions were, frankly, bizarre. And they suggested either that he does not understand how Section 230 functions, or that he does understand and wants to neutralize its protections regardless of what the law actually says.

Justice Ketanji Brown Jackson, meanwhile, told Lisa Blatt, the lawyer for Google, that the problem of algorithms “was not what Congress was concerned about when it enacted this statute,” suggesting that she may agree with the plaintiffs that Section 230 does not protect websites if their algorithms serve up illegal content.

It didn’t help that, during her time at the podium, Blatt seemed to overreach, claiming that Google should be immune from lawsuits even if it intentionally designs its algorithms to serve up ISIS content — or to serve up other content that is illegal. Kagan, Barrett, and Jackson all took turns beating up on this theory.

One possible way that the Court could resolve this case was suggested by Justice Clarence Thomas early in the argument. Thomas noted that Google does not have a “focused algorithm with respect to terrorist activities,” and hinted that maybe websites should retain their legal immunity so long as their algorithm is “neutral” — meaning that it applies the same rules to all content rather than specifically trying to promote certain subject matters or viewpoints.

Justice Neil Gorsuch, meanwhile, pointed to a provision of Section 230 that can be read to permit websites to “pick, choose, analyze, or digest content” as a reason to permit algorithms to function unmolested. At the very least, Gorsuch suggested at one point, the Court could send the case back down to the lower courts to consider this language.

There is still a fair amount of mystery, in other words, surrounding just how the Court will write its Gonzalez opinion. But, despite this uncertainty, enough of the justices appeared bothered by the potential impact of a victory for the plaintiffs that such a victory appears unlikely.

The Court seems likely to show some uncharacteristic humility in this case. And that means that the Court’s ultimate decision probably will not light much of the online world on fire.
