Researchers: Instagram 'Bullied' Us Into Halting Algorithmic Research

For the second week in a row, Facebook killed a project meant to shed light on its practices.

An activist wears a mask depicting Mark Zuckerberg outside the European Commission building in December.
Photo: Kenzo Tribouillard (Getty Images)

A Berlin-based nonprofit studying the ways in which Instagram’s algorithm presents content to users says parent company Facebook “bullied” its researchers into killing off experiments and deleting underlying data that was collected with consent from Instagram users.

Algorithm Watch, as its name suggests, conducts research that monitors algorithmic decision-making and its effects on human behavior. In the past year, the group has published research suggesting that Instagram’s algorithm favors seminude photographs, and that posts by politicians were less likely to appear in feeds when they contained text. Facebook has disputed all of the group’s findings, which are published with their own stated limitations. At the same time, the group said, the company has refused to answer researchers’ questions.


Algorithm Watch said Friday that while it believed the work was both ethical and legal, it could not afford a court battle against a trillion-dollar company. On that basis alone, it complied with Facebook’s demands and terminated the experiments.


“Digital platforms play an ever-increasing role in structuring and influencing public debate,” Nicolas Kayser-Bril, a data journalist at Algorithm Watch, said in a statement. “Civil society watchdogs, researchers and journalists need to be able to hold them to account.”


The project was shut down a week after Facebook suspended the accounts of NYU researchers investigating the Facebook platform’s role in spreading disinformation about U.S. elections and the coronavirus, among other topics. The NYU researchers said Facebook had issued warnings about their methods in October 2020, but only took action hours after learning the research would also focus on the platform’s role in the January 6 insurrection.

More than a hundred academics and technologists signed a letter last week denouncing Facebook’s actions. Federal lawmakers have accused the company of purposefully shielding itself from accountability. The Federal Trade Commission was forced to publicly correct a statement by a Facebook official who had blamed the suspensions on a privacy settlement negotiated with regulators after the Cambridge Analytica scandal.


According to Algorithm Watch, its experiments were fueled by data collected from some 1,500 volunteers, each of whom consented to having their Instagram feeds monitored. The volunteers installed a plug-in that captured images and text from posts Instagram’s algorithm surfaced in their feeds. No information was collected about the users themselves, according to the researchers.
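
(For illustration: the sketch below shows, in broad strokes, how a consent-based browser extension of this kind could capture the posts a volunteer’s own feed displays while sending along nothing about the volunteer’s account. It is not Algorithm Watch’s published plug-in code; the selectors, field names, and research endpoint are hypothetical placeholders.)

```typescript
// Illustrative sketch only, not Algorithm Watch's actual plug-in.
// A content script like this runs inside the volunteer's own browser session,
// reads posts the feed has already rendered for that user, and forwards no
// account name, user ID, or cookie belonging to the volunteer.

interface CapturedPost {
  imageUrls: string[];   // images the algorithm surfaced in the volunteer's feed
  captionText: string;   // visible caption text, if any
  capturedAt: string;    // time of capture, not of posting
}

function captureVisiblePosts(): CapturedPost[] {
  // Hypothetical assumption: each feed post is rendered inside an <article>.
  return Array.from(document.querySelectorAll("article")).map((post) => ({
    imageUrls: Array.from(post.querySelectorAll("img")).map((img) => img.src),
    captionText: post.querySelector("h1, span")?.textContent ?? "",
    capturedAt: new Date().toISOString(),
  }));
}

async function submitToResearchServer(posts: CapturedPost[]): Promise<void> {
  // Hypothetical collection endpoint; the payload contains only post content.
  await fetch("https://research.example.org/submissions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ posts }),
  });
}

// Capture the feed periodically while the volunteer browses.
setInterval(() => {
  void submitToResearchServer(captureVisiblePosts());
}, 60_000);
```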

Facebook claimed the project violated a condition of its terms of service that prohibits “scraping,” a term the company has lately construed to include data voluntarily provided by its own users to academics.


Kayser-Bril, a contributor to the Data Journalism Handbook, says the only data Algorithm Watch collected was what Instagram itself transmitted to the project’s volunteers. “In other words,” he said, “users of the plug-in [were] only accessing their own feed, and sharing it with us for research purposes.”

Facebook also accused the researchers of violating the EU’s privacy law, the GDPR, saying specifically that the group’s plug-in collected data on users who had never agreed to be part of the project. “However, a cursory look at the source code, which we open-sourced, show[s] that such data was deleted immediately when arriving at our server,” Kayser-Bril said.
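
(Again for illustration only: the snippet below sketches the pattern Kayser-Bril describes, in which records tied to accounts outside the study are discarded the moment they reach the server, before anything is stored. It is not the project’s open-sourced code; the field and variable names are placeholders.)

```typescript
// Hypothetical sketch of server-side filtering on arrival, not Algorithm
// Watch's published source. Records about accounts outside the study are
// dropped immediately, before any write to disk or a database.

interface IncomingRecord {
  accountHandle: string;  // account the captured post was published by
  imageUrls: string[];
  captionText: string;
}

// Accounts that fall within the study's scope; everything else is discarded.
const accountsInStudy = new Set<string>();

function filterOnArrival(records: IncomingRecord[]): IncomingRecord[] {
  // Runs before any persistence step, so out-of-scope data is deleted
  // the moment it arrives rather than stored and cleaned up later.
  return records.filter((record) => accountsInStudy.has(record.accountHandle));
}
```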


A Facebook spokesperson said company officials had requested an informal meeting with Algorithm Watch “to understand their research, and to explain how it violated our terms,” and had “repeatedly offered to work with them to find ways for them to continue their research in a way that did not access people’s information.”

“When Algorithm Watch appeared unwilling to meet us, we sent a more formal invitation,” the spokesperson said.


Kayser-Bril wrote the “more formal invitation” was perceived by the group as “a thinly veiled threat.” As for the help Facebook says it offered, the journalist said the company can’t be trusted. “The company failed to act on its own commitments at least four times since the beginning of the year, according to The Markup, a non-profit news organization that runs its own monitoring effort called Citizen Browser,” he said. “In January for instance, in the wake of the Trumpist insurgency in the US, the company promised that it would stop making recommendations to join political groups. It turned out that, six months later, it still did.”

In an email, a Facebook spokesperson included several links to datasets the company offers to researchers, though these were either exclusive to the Facebook platform or related to ads, neither of which is relevant to Algorithm Watch’s Instagram research.


The NYU researchers banned by the company had similar complaints. The dataset offered to them covered only three months’ worth of ads prior to the 2020 election, and was irrelevant to their research into pandemic-related misinformation, as well as to a new project focused on the Capitol riot. The data also purposely excluded the majority of small-dollar ads, which were crucial to NYU’s project.

Researchers say data offered up by Facebook is rendered useless by the limitations the company imposes, and that using it would allow Facebook to control the outcomes of experiments. One complaint Facebook has aired recently, for instance, is that it couldn’t identify which users had installed plug-ins designed by researchers to collect data.


But allowing Facebook this knowledge, they say, would give the company the power to manipulate volunteers’ feeds, filtering out content, for example, that it doesn’t wish researchers to see. One researcher, who asked not to be named over legal concerns, compared this to allowing Exxon to furnish its own water samples after an oil spill.

“We collaborate with hundreds of research groups to enable the study of important topics, including by providing data sets and access to APIs, and recently published information explaining how our systems work and why you see what you see on our platform,” a Facebook spokesperson said. “We intend to keep working with independent researchers, but in ways that don’t put people’s data or privacy at risk.”


Added Kayser-Bril: “Large platforms play an oversized, and largely unknown, role in society, from identity-building to voting choices. Only by working towards more transparency can we ensure, as a society, that there is an evidence-based debate on the role and impact of large platforms – which is a necessary step towards holding them accountable.”
