How the 2019 Christchurch Massacre Changed Facebook Forever

Internal documents from the company's response team reveal the failed algorithms and reporting issues that were overhauled in the wake of the brutal shooting.

Photo: Sanka Vidangama (Getty Images)

On March 15, 2019, a heavily armed white supremacist named Brenton Tarrant walked into two mosques in Christchurch, New Zealand, and opened fire, killing 51 Muslim worshipers and wounding dozens more. Close to 20 minutes of the carnage from one of the attacks was livestreamed on Facebook—and when the company tried to take it down, more than 1 million copies cropped up in its place.

While the company was able to quickly remove or automatically block hundreds of thousands of copies of the horrific video, it was clear that Facebook had a serious issue on its hands: Shootings aren’t going anywhere, and livestreams aren’t either. In fact, up until that point, Facebook Live had a bit of a reputation as a place where you could catch streams of violence—including some killings.

Christchurch was different.

An internal document detailing Facebook’s response to the Christchurch massacre, dated June 27, 2019, describes the steps taken by the task force the company created in the tragedy’s wake to address users livestreaming violent acts. It lays out how the reporting and detection methods Facebook had in place failed when the shooting began, how much the company changed about its systems in response to those failures, and how much further those systems still have to go.

More: Here Are All the ‘Facebook Papers’ We’ve Published So Far

The 22-page document was made public as part of a growing trove of internal Facebook research, memos, employee comments, and more captured by Frances Haugen, a former employee at the company who filed a whistleblower complaint against Facebook with the Securities and Exchange Commission. Hundreds of documents have been released by Haugen’s legal team to select members of the press, including Gizmodo, with untold more expected to arrive over the coming weeks.

Facebook relies heavily on artificial intelligence to moderate its sprawling global platform, in addition to tens of thousands of human moderators who have historically been subjected to traumatizing content. However, as the Wall Street Journal recently reported, additional documents released by Haugen and her legal team show that even Facebook’s engineers doubt AI’s ability to adequately moderate harmful content.

Facebook did not immediately respond to our request for comment.

You could say that the company’s failures started the moment the shooting did. “We did not proactively detect this video as potentially violating,” the authors write, adding that the livestream scored relatively low on the classifier Facebook’s algorithms use to pinpoint graphically violent content. “Also no user reported this video until it had been on the platform for 29 minutes,” they added, noting that even after the original was taken down, there were another 1.5 million copies to deal with over the following 24 hours.

Further, its systems were apparently only able to detect violent violations of its terms of service “after 5 minutes of broadcast,” according to the document. Five minutes is far too slow, especially when you’re dealing with a mass shooter who starts filming as soon as the violence does, the way Tarrant did. To bring that number down, Facebook needed to retrain its detection algorithm, and like any algorithm, it needed data to learn from. There was just one gruesome problem: there weren’t many livestreamed shootings to pull that data from.

The solution, according to the document, was to build what sounds like one of the darkest datasets known to man: a compilation of police and bodycam footage, “recreational shootings and simulations,” and assorted “videos from the military” acquired through the company’s partnerships with law enforcement. The result, according to the internal documents, was “First Person Shooter (FPS)” detection and improvements to a tool called XrayOC, which together let the company flag footage from a livestreamed shooting as obviously violent in about 12 seconds. Sure, 12 seconds isn’t perfect, but it’s a profound improvement over 5 minutes.
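
To make the mechanics concrete, here is a rough, purely illustrative Python sketch of what that kind of flagging amounts to. None of this is Facebook’s actual code; the chunk size, threshold, and classifier are made-up placeholders for the idea of scoring a live broadcast as it comes in and flagging it the moment a segment crosses a violence threshold, which is what determines how long a stream runs before anyone is alerted.

```python
import random
from dataclasses import dataclass

# Purely illustrative: Facebook's real FPS/XrayOC systems are not public.
# The idea sketched here is scoring short chunks of a live broadcast with a
# violence classifier and flagging the stream as soon as one chunk crosses a
# review threshold; the earlier a chunk scores high, the lower the latency.

CHUNK_SECONDS = 2          # how much video each scored segment covers (assumed)
REVIEW_THRESHOLD = 0.8     # classifier confidence needed to flag a stream (assumed)


@dataclass
class LiveChunk:
    stream_id: str
    offset_seconds: int    # seconds since the broadcast started


def violence_score(chunk: LiveChunk) -> float:
    """Stand-in for a trained classifier; returns a confidence in [0, 1]."""
    return random.random()


def monitor_stream(chunks: list[LiveChunk]) -> int | None:
    """Return the broadcast time (in seconds) at which the stream is flagged
    for review, or None if no chunk ever crosses the threshold."""
    for chunk in chunks:
        if violence_score(chunk) >= REVIEW_THRESHOLD:
            return chunk.offset_seconds + CHUNK_SECONDS
    return None


if __name__ == "__main__":
    stream = [LiveChunk("live-123", t) for t in range(0, 60, CHUNK_SECONDS)]
    flagged_at = monitor_stream(stream)
    print(f"flagged after {flagged_at} seconds" if flagged_at is not None else "never flagged")
```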

The company added other practical fixes, too. Instead of requiring users to jump through multiple hoops to report “violence or terrorism” happening on a stream, Facebook figured it might be better to let them report it in one click. It also added an internal “Terrorism” tag to better keep track of these videos once they were reported.
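
As a toy illustration of what that one-click flow boils down to, here is a hypothetical Python sketch. The “Terrorism” tag comes from the document; the data model, function name, and queueing step are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a one-click "violence or terrorism" report on a live
# stream. The internal "Terrorism" tag is described in the document; the data
# model, names, and queueing are assumptions.


@dataclass
class LiveReport:
    stream_id: str
    reporter_id: str
    tag: str = "Terrorism"   # internal routing tag noted in the document
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def report_violence_or_terrorism(stream_id: str, reporter_id: str) -> LiveReport:
    """One click, no follow-up menus: the report is tagged and queued immediately."""
    report = LiveReport(stream_id=stream_id, reporter_id=reporter_id)
    # A real system would enqueue this for prioritized human review.
    print(f"queued '{report.tag}' report for stream {report.stream_id}")
    return report


report_violence_or_terrorism("live-123", "user-456")
```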

Next on the list of “things Facebook probably should have had in place way before broadcasting a massacre,” the company put some restrictions on who was allowed to go Live at all. Before Tarrant, the only way to get banned from livestreaming was to violate some sort of platform rule while livestreaming. As the research points out, an account that was internally flagged as, say, a potential terrorist “wouldn’t be limited” from livestreaming on Facebook under those rules. After Christchurch, that changed; the company rolled out a “one-strike” policy that keeps anyone caught posting particularly egregious content from using Facebook Live for 30 days. Facebook’s “egregious” umbrella includes terrorism, a category that would have covered Tarrant.
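
Here, too, a short hypothetical sketch shows how simple the rule is in practice. The 30-day Live ban and the terrorism example come from the document; the category names and data structures below are assumptions made for the sake of illustration.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the "one-strike" rule described above: a single egregious
# violation (terrorism, for example) blocks Facebook Live access for 30 days.
# The ban length comes from the article; labels and storage are assumed.

EGREGIOUS_CATEGORIES = {"terrorism", "mass_violence"}   # assumed labels
LIVE_BAN = timedelta(days=30)

# user_id -> timestamp of that user's most recent egregious violation
_last_strike: dict[str, datetime] = {}


def record_violation(user_id: str, category: str) -> None:
    """Log a policy violation; only egregious categories count as a strike."""
    if category in EGREGIOUS_CATEGORIES:
        _last_strike[user_id] = datetime.now(timezone.utc)


def can_go_live(user_id: str) -> bool:
    """A single strike within the last 30 days blocks livestreaming."""
    strike = _last_strike.get(user_id)
    return strike is None or datetime.now(timezone.utc) - strike > LIVE_BAN


record_violation("user-789", "terrorism")
print(can_go_live("user-789"))   # False for the next 30 days
```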

Of course, content moderation is a dirty, imperfect job carried out, in part, by algorithms that, in Facebook’s case, are often just as flawed as the company that made them. Those systems didn’t flag the shooting of retired police chief David Dorn when it was caught on Facebook Live last year, nor did they catch a man who livestreamed his girlfriend’s shooting just a few months later. And while the hours-long bomb threat that a far-right extremist livestreamed on the platform this past August wasn’t as explicitly horrific as either of those examples, it was still a literal bomb threat that was able to stream for hours.

Regarding the bomb threat, a Facebook spokesperson told Gizmodo: “At the time, we were in contact with law enforcement and removed the suspect’s videos and profile from Facebook and Instagram. Our teams worked to identify, remove, and block any other instances of the suspect’s videos which do not condemn, neutrally discuss the incident or provide neutral news coverage of the issue.”

Still, it’s clear the Christchurch disaster had a lasting effect on the company. “Since this event, we’ve faced international media pressure and have seen legal and regulatory risks on Facebook increase considerably,” reads the document. And that’s an understatement. Under a new Australian law hastily passed in the wake of the shooting, Facebook’s executives could face steep fines (not to mention jail time) if the platform is again caught allowing livestreamed acts of violence like the shooting.

This story is based on Frances Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were obtained by a consortium of news organizations, including Gizmodo, the New York Times, Politico, the Atlantic, Wired, the Verge, CNN, and dozens of other outlets.
