
The Supreme Court decides not to break the internet

Clarence Thomas did something right, for a change.

Justice Clarence Thomas, who authored the Court’s opinion in Twitter v. Taamneh. Drew Angerer/Getty Images
Ian Millhiser is a senior correspondent at Vox, where he focuses on the Supreme Court, the Constitution, and the decline of liberal democracy in the United States. He received a JD from Duke University and is the author of two books on the Supreme Court.

The Supreme Court handed down two high-stakes tech decisions on Thursday — cases that, if handled ineptly, could have destroyed much of the internet and subjected social media companies to devastating liability.

The good news is that none of that will happen.

Both Justice Clarence Thomas’s unanimous opinion in Twitter v. Taamneh and the Court’s brief, unanimous, and unsigned opinion in Gonzalez v. Google show admirable restraint. The justices add clarity to a 2016 anti-terrorism law that, if read broadly, could have made tech companies whose products form the backbone of modern-day communications liable for every violent act committed by the terrorist group ISIS.

Instead, the Court’s Twitter and Google decisions largely ensure that the internet will continue to function as normal, provided that websites like Twitter or YouTube do not actively provide assistance to terrorism.

The cases involve similar facts. Google concerns a wave of murders ISIS committed in Paris — one of the victims of those attacks was Nohemi Gonzalez, a 23-year-old American student who died after ISIS assailants opened fire on the café where she and her friends were eating dinner. Twitter, meanwhile, involves an ISIS attack on a nightclub in Istanbul, in which 39 people were killed, including Nawras Alassaf, a Jordanian man with American relatives.

At this point, you’re probably wondering what these horrific acts have to do with tech companies like Google or Twitter. The answer arises from the US Justice Against Sponsors of Terrorism Act (JASTA), which permits lawsuits against anyone “who aids and abets, by knowingly providing substantial assistance” to certain acts of “international terrorism.”

The plaintiffs in both cases, relatives of Gonzalez and Alassaf, essentially allege that Twitter, Facebook, and YouTube (which is owned by Google) provided substantial assistance to ISIS by allowing it to use the companies’ social media websites to post videos and other content that promoted ISIS’s ideology and sought to recruit individuals to its cause. In effect, the plaintiffs argued that these tech platforms had an affirmative duty to stop ISIS from using their websites, and that the tech companies could be held liable if ISIS terrorists used a service that is freely available to billions of people across the globe.

It’s a breathtaking legal argument. As Thomas writes in the Twitter opinion, “under plaintiffs’ theory, any U.S. national victimized by an ISIS attack could bring the same claim based on the same services allegedly provided to ISIS.” The three tech companies, in other words, would potentially be liable to any American, or any relative of an American, killed by ISIS.

The JASTA statute, moreover, authorizes a successful plaintiff to recover three times the loss inflicted upon them by a terrorist, which in a case like Twitter or Google could mean three times the cost of a mass murder. So even a corporate behemoth like Google could have been brought to its knees by the amount of money it would have had to pay out in future cases, had these lawsuits prevailed.
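To see how quickly that exposure compounds, consider a purely hypothetical back-of-the-envelope sketch (every figure below is invented for illustration; the opinions put no dollar values on these claims):

```python
# Purely hypothetical illustration of how JASTA's treble damages could
# stack up across many plaintiffs. No figure here comes from the cases.
damages_per_victim = 10_000_000  # hypothetical jury valuation of one death
treble_multiplier = 3            # JASTA lets plaintiffs recover 3x the loss
eligible_claims = 1_000          # hypothetical number of US-linked victims

total_exposure = damages_per_victim * treble_multiplier * eligible_claims
print(f"${total_exposure:,}")  # $30,000,000,000
```

Under the plaintiffs’ theory, every American victim of every ISIS attack could bring such a claim, so even invented numbers this modest add up to tens of billions of dollars.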

The Court’s unanimous opinion, however, rejects that outcome. Though the plaintiffs’ theory rests on a plausible reading of the vaguely worded JASTA statute, the Court’s decision establishes that, at the very least, a company has to do more than provide its product to any customer in the world — including customers who may use that product for evil purposes — in order to be held liable for a terrorist act.

The Court holds that the “mere creation” of a platform that can be used by bad actors is not a “culpable” act

The idea that someone who does not commit an illegal act may nonetheless be responsible for “aiding and abetting” that act is well established in US law and the English legal concepts that much of US law still relies upon. Thomas quotes a 1795 English treatise for the proposition that someone who was “present, aiding and abetting the fact to be done” could be deemed responsible for the criminal act of another.

But, as Thomas also acknowledges, this concept “has never been boundless.” If it were, then “ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer.” Suppose, for example, that Ford sells a truck to a man who then uses that truck to run over and kill another person. Does Ford really deserve to be held liable for this act of homicide?

The thrust of Thomas’s opinion is that, to be held responsible for aiding and abetting another’s actions, a defendant must have provided “knowing and substantial assistance” to that individual. He also writes that these two requirements — the assistance must be both “knowing” and “substantial” — operate on a kind of sliding scale. Someone with greater “knowledge” that they are assisting an illegal act might be held liable for providing less “substantial” aid to that act, and vice versa.

There is some evidence that the tech platforms were aware that ISIS sometimes used their products to distribute content. The companies often tried to remove ISIS content from their websites — though Thomas’s opinion suggests that removing all of it might be an impossible task. As he writes, “it appears that for every minute of the day, approximately 500 hours of video are uploaded to YouTube, 510,000 comments are posted on Facebook, and 347,000 tweets are sent on Twitter.” The social media giants would have needed to comb through all of this almost entirely benign content to find the relative handful of videos and writings posted by ISIS members.
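To put those numbers in perspective, here is a quick back-of-the-envelope calculation using the per-minute figures Thomas quotes (the extrapolation to a full day is my own illustration, not anything in the opinion):

```python
# Rough daily totals extrapolated from the per-minute figures quoted
# in Thomas's Taamneh opinion. Illustrative arithmetic only.
MINUTES_PER_DAY = 24 * 60  # 1,440

per_minute = {
    "hours of YouTube video uploaded": 500,
    "Facebook comments posted": 510_000,
    "tweets sent": 347_000,
}

for label, rate in per_minute.items():
    print(f"{rate * MINUTES_PER_DAY:>13,} {label} per day")

# 720,000 hours of video per day is roughly 82 years of continuous
# footage uploaded every 24 hours.
print(f"~{500 * MINUTES_PER_DAY / (24 * 365):.0f} years of footage per day")
```

By that math, the platforms would have to sift through more than 82 years’ worth of video footage, plus well over a billion comments and tweets combined, every single day to find ISIS’s posts.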

In any event, the tech companies won’t have to do that, because the Court deemed the assistance they provided to ISIS too insubstantial to trigger liability. Thomas essentially argues that offering a product to a general public that happens to include malicious actors does not constitute “substantial” enough assistance to hold the tech platforms liable for terrorism.

As he writes, there’s no evidence that the tech companies endorsed ISIS’s Paris and Istanbul attacks, participated in them as if they wanted them to happen, or “sought ‘by [their] action to make it succeed.’” All they did was create platforms that are used by hundreds of millions or even billions of people, the overwhelming majority of whom use those platforms for lawful ends.

Worse, if Twitter or YouTube could be held liable because a bad actor used its platform, that could potentially open up any communications platform to such extraordinary liability that it would destroy the company. As Thomas warns, a cellphone company could be held liable for “illegal drug deals brokered over cell phones.”

Thomas does write that a company might be liable if it provided assistance to a terrorist organization that went above and beyond the services it offers to the general public, or if it “provides such dangerous wares that selling those goods to a terrorist group could constitute aiding and abetting a foreseeable terror attack.”

But Twitter is not an arms dealer. And all nine justices agreed that it and similar companies should not be held liable for creating a platform that can be used by everyone in the world.

The Court dodged an important question about whether tech platforms can be held liable for their algorithms

As mentioned above, the Twitter and Google cases are factually similar, and the Court’s brief opinion in Google quite reasonably suggests that Gonzalez’s relatives should not prevail for the same reason that Alassaf’s relatives cannot succeed in their lawsuit. Again, a tech platform is not liable because terrorists use that platform on the same terms as any other user.

The Google case, however, teed up a separate legal issue involving one of the most important laws in the internet’s multi-decade history. Section 230 of the Communications Decency Act of 1996 provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” As a general rule, that means that if a website like YouTube or Facebook hosts content produced by third parties (like the videos posted to YouTube by Vox or photos shared on Facebook by you), it won’t be held legally responsible if that content violates the law.

Without this statute, websites such as YouTube or Twitter simply could not exist. Among other things, Section 230 prevents social media websites from being sued every time someone posts a defamatory tweet or video.

The Gonzalez plaintiffs, however, claimed to have found a massive loophole in Section 230. They argued that websites can be liable if they employ algorithms that recommend content to users and these algorithms surface content that is defamatory or otherwise illegal. (If you want to learn more about this legal argument and its potentially world-changing consequences, I’ve written about it at length here and here.)

For now, the Court has declined to decide that issue, so the question of whether Section 230 contains an algorithm loophole remains unresolved. Technically, the Google opinion sends that case back down to an appeals court to consider whether any of it survives after the Twitter decision. But the answer to that question is likely to be a firm “no.” As the Court says in its unsigned Google opinion, the plaintiffs’ legal complaint “appears to state little, if any, plausible claim for relief.”

Nevertheless, the Court’s Twitter decision is a massive victory for the status quo, and it suggests that the justices will be cautious in future cases that could fundamentally transform the internet.
