Elon Musk’s Israel disinformation investigation, explained

Europe’s probe into harmful content on X about the Israel-Hamas war tests a new law that could reshape the internet.

Elon Musk’s X, formerly Twitter, is the first company to face an EU investigation under a new law designed to hold online platforms more accountable for harmful content.
Chip Somodevilla/Getty Images

It’s no mystery that after buying Twitter, now known as X, Elon Musk has taken steps to weaken the platform’s defenses against unchecked amplification of misinformation. In early October, the Israel-Hamas war became the first real test for how X now handles major breaking news events. By many accounts, it’s not going well.

It’s clearer than ever that X has changed more than just its name. Misinformation has circulated freely on X in the minutes, hours, and days following Hamas’s attack on Israel. Musk even used his own X account to direct users seeking news on the war to two verified accounts that have a clear history of sharing false information.

Regulators in Europe, however, are trying to check the power of Big Tech companies by testing a new law that seeks to make social media companies more accountable and transparent. The European Union is investigating X’s handling of hate speech and misinformation related to the Israel-Hamas war, the first investigation of its kind under the new Digital Services Act (DSA), which went into effect in late August.

This test of the DSA’s reach could have real consequences for social media companies, and the outcome could change the experience of being online well beyond Europe. While some tech companies have rolled out new transparency features for European users that are unavailable elsewhere, it will likely be difficult for tech companies with a global reach to maintain two versions of their products — one DSA-compliant and one not — indefinitely.

In the US, Section 230 shields tech companies from liability for the content their users post. The DSA replaces the EU’s e-Commerce Directive, which was enacted in 2000 and contained similar liability shields for Big Tech companies operating in Europe. While the DSA doesn’t remove these shields, it does define the responsibilities an online service operating in Europe has to its users. The law also requires companies to be more transparent about their moderation and content curation, and limits the scope of targeted advertising. (Meanwhile, the US Supreme Court declined to define the limits of Section 230 protections in a pair of decisions issued last May, leaving the future of free speech online in question.)

Digital rights groups in the US have monitored the DSA’s adoption, along with how Silicon Valley companies respond to it. While there are still a lot of variables for this largely untested set of regulations, the DSA has the potential to influence policy reform globally and could change how these platforms operate beyond Europe.

To catch you up, here’s a guide to the DSA, the EU’s investigation into X, and what might come next.

Has misinformation on X really been that bad?

In the hours after the Hamas attack on Israel began, users subscribed to X Premium — whose accounts show a verified check mark and get boosted engagement in exchange for a monthly fee — spread a number of particularly egregious pieces of misinformation. According to a running tracker by Media Matters, these accounts amplified a fake White House memo claiming the US government was about to send $8 billion in aid to Israel; circulated videos from other conflicts (and in some cases, footage from a video game) while claiming they showed the latest out of Israel and Gaza; falsely claimed that a church in Gaza had been bombed; and impersonated a news outlet. These posts were shared by X users with huge followings and viewed tens of millions of times. The Tech Transparency Project said on Thursday that it had identified X Premium accounts promoting Hamas propaganda videos, which were viewed hundreds of thousands of times.

Why does it matter if there’s a bunch of misinformation on X?

The impact of paid X accounts posting terrorist propaganda videos is probably self-evident, but the reasons to care about the spread of misinformation go beyond enabling literal war propaganda. Scammers and grifters are using war footage to gain social media followers. Misinformation, when viewed widely enough, can shape policies and beliefs that impact real people’s lives. It can incite violence against people or groups by triggering anger and outrage in its quest to get views. There are real harms here to real people.

While X has never been exactly great at surfacing only accurate information during a major breaking news event, its chronological timeline and brevity made it a useful and important tool for accessing a sea of information from a variety of sources during rapidly changing moments in history. This was especially true in the early- to mid-2010s, and the platform remained more useful than not, some would argue, until late October 2022, when Elon Musk completed his purchase of Twitter. The billionaire quickly moved to prioritize the content of paying users in everyone’s feeds.

In the year since taking over Twitter and turning it into X, Musk has overseen a reduction in the platform’s capacity to moderate hate speech and misinformation, dissolved the site’s Trust and Safety Council, and ended its identity verification system for influential accounts. Instead, the platform now grants blue check mark badges that are identical to the old verified check marks exclusively to X’s paying subscribers, who do not undergo a verification process. In late September, X also removed a feature allowing users to report content containing misleading information.

Another reason to care about misinformation is simply X’s power users. Despite Musk’s best efforts, a ton of journalists, experts, politicians, and influential people still use X as an information source. None of the platforms that emerged as possible Twitter replacements over the past year has captured the access to attention and power that made the site useful. So X remains an efficient tool for reaching a lot of people who have large audiences of their own, which means that when falsehoods catch on there, they spread.

What exactly is this new law the EU is enforcing?

The Digital Services Act (DSA) governs some pretty important aspects of how Big Tech companies moderate content in Europe. Passed in 2022, the DSA went into effect just months ago, and it requires certain sites designated as “very large online platforms” to abide by the EU’s rules in order to operate in Europe. Those 19 platforms include Amazon, Apple’s App Store, Facebook, several Google services, Instagram, TikTok, and, of course, X.

The rules are designed to make platforms more responsible for the content they host and recommend, and more transparent about how their algorithms work. The DSA outlines moderation, reporting, and transparency standards, and it bans targeted advertising based on race, religion, gender identity, and other sensitive categories, as well as targeted advertising to children. The rules also require platforms to assess and mitigate systemic risks, including the spread of disinformation and illegal content, incitement to violence, and threats to election integrity and human rights.

The new law is enforced by the European Commission, which also drafted it, along with the European Centre for Algorithmic Transparency, which was created by the EU and employs scientists and other researchers with relevant expertise.

Companies that don’t comply with the DSA’s rules will face penalties, including fines of up to 6 percent of their annual revenue. (X had annual revenue of $5.08 billion in 2021, the last full year for which the company released financial results before Musk took it private. Using that figure, X could face fines of up to roughly $300 million. Musk has said he expects X to bring in around $3 billion in 2023.) While many of the requirements have to do with transparency and reporting, the DSA has already changed how some of these platforms operate in the region. For instance, TikTok and Meta have given European users the option of turning off personalized feeds.
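For a back-of-the-envelope sense of those numbers, here’s a minimal Python sketch of the fine-cap arithmetic. The 6 percent cap and both revenue figures come from the reporting above; the variable names and the rounding are purely illustrative, not anything X or the EU publishes.

    # A minimal sketch of the DSA fine-cap arithmetic described above.
    # The 6 percent cap and the revenue figures come from the article;
    # everything else here is illustrative.

    DSA_FINE_CAP = 0.06  # fines of up to 6 percent of annual revenue

    revenue_2021 = 5.08e9     # X's last publicly reported annual revenue (2021)
    revenue_2023_est = 3.0e9  # Musk's stated revenue expectation for 2023

    # The maximum possible fine under each revenue figure
    max_fine_2021 = DSA_FINE_CAP * revenue_2021
    max_fine_2023 = DSA_FINE_CAP * revenue_2023_est

    print(f"Cap using 2021 revenue:  ${max_fine_2021 / 1e6:.0f} million")  # ~$305 million
    print(f"Cap using 2023 estimate: ${max_fine_2023 / 1e6:.0f} million")  # ~$180 million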

What prompted the EU to go after X? How did X respond?

EU commissioner Thierry Breton posted an open letter on X on October 10 in which he warned Musk that the EU had “indications” that X was being used “to disseminate illegal content and disinformation in the EU,” and reminded Musk of the DSA’s “very precise obligations regarding content moderation.”

In the letter, Breton names a couple of specifics. First, he writes that recent policy changes have introduced uncertainty about how X enforces its rules and what content is even permitted in the first place. Second, he says the Commission has reports from “qualified sources” of illegal content continuing to circulate on X after being flagged by authorities. He also told Musk that X was required to have “proportionate and effective mitigation measures” for disinformation.

In a post on X, Musk replied to Breton’s letter, writing that X’s policy was to keep everything “open source and transparent,” and asked Breton to “list the violations you allude to on X, so that that [sic] the public can see them.” Breton then replied that Musk should be “well aware” of the content reported to X by users and authorities, to which Musk insisted he would address the reports only if they were posted publicly.

X CEO Linda Yaccarino released an open letter the following day defending X’s response to disinformation and illegal content on the platform, arguing that X is “committed to serving the public conversation, especially in critical moments like this ... there is no place on X for terrorist organizations or violent extremist groups and we continue to remove such accounts in real time, including proactive efforts.” According to Yaccarino’s letter, X has removed “tens of thousands” of pieces of content since the Hamas attack.

The European Commission sent a formal request for information to X on October 12, seeking material pertinent to “its policies and actions regarding notices on illegal content, complaint handling, risk assessment and measures to mitigate the risks identified.” X is required to respond to the request by October 18 or it could face fines.

What about the other big platforms, like Facebook and TikTok?

Breton also wrote letters to the chief executives of Meta, TikTok, and YouTube.

The EU commissioner asked Meta CEO Mark Zuckerberg to “ensure that your systems are effective” for responding to illegal content on Facebook and Instagram related to the Israel-Hamas war. The letter, published on October 11, also asks Zuckerberg to respond to a request for information on Meta’s measures to mitigate deepfakes and disinformation targeting election integrity.

The letter to YouTube CEO Neal Mohan reminded the platform of its obligation under the DSA to prevent children and teenagers from viewing violent content, including hostage videos and war propaganda.

Breton also referenced a young user base in his letter to TikTok CEO Shou Zi Chew, arguing that TikTok’s rising popularity as a news source underlines the need to make sure “reliable sources” are “adequately differentiated from terrorist propaganda.” Breton asked TikTok to “step up” its enforcement efforts, report back, and “respond promptly” to requests from European law enforcement agencies.

But as of now, X is the only platform to which the EU has sent a formal request for information.

What happens next?

The EU’s investigation into X is the first public case targeting a large tech company under the DSA since the law went into effect a couple of months ago. So there are a lot of things we just don’t know about how this will play out. Remember, the DSA authorizes the EU to issue penalties both for violating its rules and for failing to comply with the investigation’s procedures.

Some experts have raised concerns about Breton’s initial communications with X and questioned whether the letter’s warnings would turn out to be enforceable. The Center for Democracy and Technology’s Asha Allen referred to the letter to Musk as a “misstep,” one that could end up infusing uncertainty into the DSA’s provisions and weaken the impact of its enforcement attempts.

Breton told Politico his agency would “thoroughly enforce” the DSA. If nothing else, the investigation into X indicates that the Commission is following through on that promise.

Still, the DSA’s regulations are complicated to enforce and require expertise in the inner workings of algorithmic design and moderation practices. Those barriers prompted the EU to create an entire research agency staffed with experts to identify hard evidence of DSA violations. As the New York Times reported in September, any enforcement is likely to get tied up in court once the EU does penalize a big tech company for violating the law.

If the DSA’s provisions withstand these tests, Big Tech companies will need to adapt their platforms to a new era of tech regulation that more directly addresses the variety of harms on today’s internet. But the EU’s policy reforms aren’t the only source of pressure here: Legislators in the US have been pushing for years to pass Section 230 reform, even if the law’s most vocal critics don’t exactly have the same priorities in mind.
