
What if Facebook can’t be fixed?

Dozens of cardboard cutouts of Facebook CEO Mark Zuckerberg are seen during an Avaaz.org protest outside the U.S. Capitol in Washington, D.C., April 10, 2018.
Image Credit: REUTERS/Aaron P. Bernstein


Facing the biggest boycott threat in Facebook’s history, the social networking company is under growing pressure to address the harassment, misogyny, racism, and extremism that continue to swamp its platform. Organized by the Stop Hate For Profit Coalition, the movement of civil rights groups and advertisers has proposed 10 steps it wants Facebook to take.

This is no doubt a well-intentioned effort, and it’s bringing even greater public attention to Facebook’s problems, something that even its role in fomenting genocide failed to do. But the campaign takes as a given, as so many other critiques have, that Facebook is something that can be fixed.

But what if Facebook can’t be fixed?

I don’t pose this question to create an excuse for Facebook to do nothing. Rather, I want to suggest that the problems we are seeing are due to the fundamental structure of Facebook, and of social media more generally. The problems are not simply the result of executive reluctance to act, though that has exacerbated things. No, these problems are due to the very nature of the beast.

Perhaps the most damning evidence lies in the moves Facebook has already made. Contrary to what many of these critics say, Facebook has been taking a lot of action.

In terms of hate speech, Facebook disclosed in its most recent transparency report that it blocked 9.6 million pieces of content in Q1 2020, up from 5.7 million pieces of content in Q4 2019. This week Facebook banned 200 anti-government “Boogaloo” groups. In May, the company appointed a Content Oversight Board. These are just a handful of examples.

Besides investing in technology to identify all of this mischief, Facebook has been expanding its human content moderation efforts. But this program, which relies heavily on third-party contractors, has been deemed woefully insufficient by numerous critics, including an NYU study released last month that called on Facebook to bring these employees in-house.

And yet it feels like nothing has changed. So what’s the answer? As with previous calls for change, the new boycott campaign imagines that more could be done to address the rampant issues, that the central causes relate to a weakness in the way Facebook operates and a lack of will to change.

The group’s proposals include establishing a high-level executive position to “evaluate products and policies for discrimination, bias, and hate”; submitting to third-party audits to verify its transparency report; providing refunds to advertisers whose content appears next to malicious content; and removing groups related to “white supremacy, militia, antisemitism, violent conspiracies, Holocaust denialism, vaccine misinformation, and climate denialism.”

Many of the other suggestions, like creating a way to flag harmful content, ensuring the accuracy of political and voting content, and eliminating exceptions for politicians, are either things Facebook is doing or has said it’s considering doing.

The last suggestion: Create a call center for people to contact so they can speak to a Facebook employee if they have been the victim of hate and harassment. I find it hard to believe any victim who had been attacked online would then find solace in calling someone at the company that enabled it. (Side note: Can you imagine such a hotline for people who had been harassed on Twitter?)

In general, these proposals echo many other vague calls for Facebook to do something, anything, to make the platform less awful. And as it typically does, Facebook agreed to some measures, such as the content audit, though that did not stop the boycott.

Facebook vice president Nick Clegg defended the company in a series of interviews and op-eds, insisting to advertisers and users that the company is relentless in its efforts to remove harmful content.

“Facebook does not profit from hate,” Clegg wrote. “Billions of people use Facebook and Instagram because they have good experiences — they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it.”

But rooting out this content is a massive digital whack-a-mole game.

“With so much content posted every day, rooting out the hate is like looking for a needle in a haystack,” Clegg wrote. “We invest billions of dollars each year in people and technology to keep our platform safe. We have tripled — to more than 35,000 — the people working on safety and security. We’re a pioneer in artificial intelligence technology to remove hateful content at scale.”

All of these proposals strike at the margins of what Facebook does. But none of them go to the very heart of what has made it so odious. Far-right actors (and let’s be honest here, almost all of these abuses can be traced back to right-wing sources) have recognized Facebook (and really, all social media platforms) as the perfect delivery vehicle for propaganda, disinformation, and relentless campaigns to sow division.

To truly understand the scope of this onslaught, let’s look at the figures Facebook itself shares to convince us that it is making progress.

The company reported that it removed 3.2 billion fake accounts between April 2019 and September 2019, up from the 1.55 billion accounts it removed from the same period the previous year. From October 2018 to March 2019, Facebook removed 3.39 billion fake accounts.

That is an astonishing number for a social network that counts 2.37 billion monthly active users. Over that six-month span, Facebook kicked out more fake profiles than there are people who use the service each month. The company is constantly under siege by forces that are investing huge amounts of time and resources in exploiting its dynamics.

Facebook has responded by implementing more aggressive monitoring to weed out these accounts before anyone sees them. Yet the company still estimates that 5% of monthly active accounts are fake. It’s also true, and equally troubling, that, as Clegg points out, these assaults on Facebook users create wedges along fault lines that already exist. They are meant to prey on our economic, social, and racial divisions, widening them and stoking our anger.

That’s why new policy proposals, content moderation programs, AI, and government regulation are unlikely to change the fundamental dynamic of Facebook: It is the perfect delivery vehicle for disinformation, propaganda, and hate. Silicon Valley has built the ultimate society-destroying tool and handed the keys to those who want nothing more than to sow chaos.

To truly reform Facebook, more radical steps would be needed. Users could be required to verify their identity, though I think most users would recoil from such a notion. The U.S. government could repeal the law that shields platforms from legal liability for user content, Section 230 of the Communications Decency Act. But aside from President Trump ranting on Twitter and a few conservative hotheads in Congress, most people would not back such a move, which risks massive unintended consequences.

So then what? It’s most likely that the current battle will continue endlessly. Facebook will eventually do just enough to end the boycott. Billions of people will keep using Facebook. Billions of people will continue complaining about Facebook. Organizations around the world will go on finding ways to exploit our use of Facebook to make us angry and uninformed.

And one day, historians will marvel at how Facebook convinced so many people to actively participate in a digital experiment that eroded civil society and set us against each other.
