Everything we know about this week’s big Twitter hack so far

Who might have done it, and how, and what else they might have planned. Plus: Q&A with Facebook’s chief diversity officer

Illustration by Alex Castro / The Verge

It’s been such a newsy week that we’re ending it with two columns — enough to last you the whole weekend. First, we have what we hope is the ultimate Twitter hack FAQ, in response to this week’s catastrophic security breach. Yesterday’s issue was the most-read in Interface history, and we wanted to make sure you had all the latest developments.

Second, I’m excited to share a conversation I had this week with Facebook’s chief diversity officer, Maxine Williams, on the occasion of the company releasing its annual diversity report. I wanted to know why progress on the issue has been so hard to come by, what it means that she reports to Sheryl Sandberg now, and much more. Williams is a dynamo; I hope you’ll enjoy our chat.

* * *

On Wednesday, Twitter had the worst security incident in company history, with a coordinated attack resulting in the takeover of more than a dozen high-profile accounts including former President Barack Obama, Joe Biden, Amazon CEO Jeff Bezos, and Elon Musk. On Thursday, the fallout began.

Here are some top questions about the attack, answered as best we can as of press time.

Do we know who perpetrated the attack? Ace cybersecurity reporter Brian Krebs traces the attack to a community of “SIM swappers,” though the report is as yet unconfirmed. The FBI is investigating. And I imagine that the Federal Trade Commission, which has Twitter under a 20-year consent decree for failing to protect users’ personal information, will be looking into it as well.

Do we know how the hack happened? We do not, though Twitter shared a handful of details late Wednesday. Among other things, the company confirmed that a Twitter employee was involved in the incident. How exactly? It won’t say; Dustin Volz describes the range of possibilities nicely at the Wall Street Journal:

The social-media company hasn’t said specifically how the attackers penetrated its internal systems and tools or indicated how long they had access to them. Twitter has said only that the hackers used “social engineering” techniques, where employees are tricked into clicking a link, divulging information or otherwise aiding outsiders. The hackers might have accessed information or engaged in other malicious activity, Twitter said, adding that it has “taken significant steps to limit access to internal systems” while it continues to investigate the incident.

What if I want to read a speculative but plausible account of how the hack worked from one of the victims? This piece by the owner of the @6 Twitter account is very good.

Did the hackers access our passwords? Twitter says there is no evidence that they did.

Did the hackers access our direct messages? Twitter won’t comment, presumably because it doesn’t yet know. Also, maybe DMs should be encrypted?

Are there wild theories about other attacks that the hackers might be planning based on their theoretical access to DMs? Yes.

Does Twitter CEO Jack Dorsey promise to tell us everything that happened as soon as he can? He does.

Do lawmakers have some sternly worded questions for Twitter in the meantime? They do.

Will the activist hedge fund that took a stake in Twitter earlier this year also have some strongly worded questions for the company? Maybe!

How far back does the behavior of taking over a Twitter account to sow panic go? At least to 2011, when NBC News’ account was hacked and falsely claimed that Ground Zero was under attack. Two years later, someone took over the Associated Press account and falsely reported that there had been explosions in the White House; the stock market dropped on the entirely fake news.

What are some other examples of hackers using social engineering techniques to wreak havoc? Here’s a story about employees of telecoms getting recruited by criminals to transfer numbers to different SIM cards, enabling the criminals to take over social media accounts. And here’s a story about an employee of the popular game platform Roblox who was bribed by a hacker to look up users’ personal information.

What should we expect tech companies to do about these internal threats? Alex Stamos, who used to run security at Facebook, has some good ideas: make customer-service tasks two-person jobs rather than one-person jobs, increasing friction for would-be criminals; and stop holding customer service representatives to impossible speed quotas, giving them more room to suss out attacks.

Could you have made more money off this level of account access than the attackers did? Maybe, but it would have been much more work than these attackers seemed to have put in, and there’s no guarantee you would have succeeded.

Was Twitter better when verified accounts couldn’t tweet? Many people are saying this.

Q&A with Facebook’s chief diversity officer

The Black Lives Matter protests that swept the country in June brought fresh attention to the country’s history of racial injustice, and galvanized discussions within workplaces about what role corporations could play in addressing that history. One company where those discussions were particularly vigorous was Facebook, a platform that showcased the protests and the violence that led up to them — and, some employees argued, had empowered white supremacy and other forms of hate speech through a combination of content policies and moderation decisions. An advertiser boycott of Facebook and other social networks this month, along with a difficult civil rights audit of the company, has brought further scrutiny to the issue.

All of which made the publication of Facebook’s annual diversity report this week especially fraught. Diversity reports first became popular in Silicon Valley in the middle of the 2010s as a way for the technology industry to attempt to hold itself to a higher collective standard. And while almost every major tech company now publishes such a report annually, the industry’s overall diversity has remained disappointingly stagnant.

At Facebook, the news for 2020 is mixed. Sarah Frier summed it up at Bloomberg:

Black employees now comprise 1.7% of the social media company’s technical roles, up from 1.5% in 2019 and 1% in 2014, Facebook said in its latest diversity report. The demographic lags even as the company has made progress in other areas, like gender. Women now make up 24.1% of technical employees, up from 15% in 2014.

Since 2013, the project of improving Facebook’s diversity and inclusion efforts has been led by Maxine Williams. As Facebook’s chief diversity officer, she works to attract and retain members of underrepresented groups at the company. And so when Facebook invited me to speak with her about the company’s latest diversity report, I jumped at the chance. (You can read Facebook’s 2020 diversity report here.)

“We try to keep our eyes on the long term,” Williams told me. As representation has grown at the company, employees have made more demands for inclusion — policies that help retain talent once Facebook successfully attracts it. Recently, Williams’ role was elevated to report directly to Chief Operating Officer Sheryl Sandberg, and she now sits on Mark Zuckerberg’s leadership team. The reason, she told me, is to make sure inclusion is “in everything, front and center.”

Ultimately, Williams said, diversity and inclusion is challenging because people themselves are complicated. “People are the most complex systems,” she said. “Computer scientists can talk all they want about code. Nothing is more complex than people.”

Highlights of our talk are below. This interview has been edited for clarity and length.

Casey Newton: Recently the Black Lives Matter protests led to a lot of internal questions from Facebook employees about the role the company can play in fighting racism. Is that a rare opportunity to improve inclusion within a company, since for once you have a huge part of the workforce focused on justice issues?

Maxine Williams: Oh yes, and not just for my company — I mean, across the board in America. It’s a moment of stress, because we’re constantly re-traumatized by what we’re talking about. And by the way, if you’re somebody in this role, disproportionately people on diversity teams are people from vulnerable groups themselves. And so it is hard — every day, when what we’re talking about is like, they kill people that look like me. But at the same time, it is a moment of opportunity. If [a company] made a statement, they should probably walk the walk, too. Personally I’d prefer that we never had to be here. But there is some opportunity to be had from it, yes.

One thing I’ve heard from Facebook employees is that the experience for nonwhite workers is inconsistent. Some people have a great career at Facebook, while other people don’t and quit. I’m sure that’s true for a lot of reasons, but to the extent that it’s an inclusion issue, how do you figure out where the gaps are?  

What we have come to realize is that in people analytics, often with minority groups, we wouldn’t get insight on them because there weren’t enough of them to hit some kind of [statistical] standard for confidence. And we cannot be in this loop where it’s like, well, “we could tell you what was going on, if there were more of them,” when the point is that there aren’t enough of them because something’s going on.

And so we pivoted how we did people analytics to put much more weight into qualitative feedback. We hold focus groups all the time. We’re gonna have to get comfortable with a different level of confidence than the traditional standard, and buttress what we see internally with research or other studies.

And then we started to hire people who are race experts, bias experts, social psychologists, to work on this and to give us the insight. Your question is, how do you figure out what’s going on if there’s this inconsistency? And that’s one of the ways.

The advertiser boycott that’s under way right now has called for a C-level executive who will, among other things, review products that are in development to assess their impact on civil rights. Do you think that a role like that would be beneficial?

Yes. We’ve already opened a job requisition, and people are applying. We’re going to hire a vice president of civil rights. 

And that person will work on product issues?

Product, policy, everything.

What I have done a lot of in my seven years here is ... call it internal consulting. Advice and input looking through the lens of equal value. Is this product going to bring equal value to all people? Because minority groups can often get overlooked.

But civil rights is also its own body of work, which has its own legal underpinnings. So I think it’s important to have somebody who’s focused through that lens, as I focus on equal value for diverse groups. These things are complementary, but they’re not the same.

Finally: what do you hope Facebook accomplishes on diversity and inclusion in the next year?

One is building a consistency of behavior. We’ve put a lot of strategies into play; we need everybody to execute them consistently now. It’s not very sexy, but it’s actually, I think, probably the most important thing.

And I think the other thing would be, we’re working on a lot of work streams now to give more people voice in the development of products, and to make products and policies more inclusive as they’re developed. That is very early stage, but in a year’s time, we should have built out some of that and have a sense of how it’s working.

It’s one of the things that, in the civil rights audit, the auditors talked about: employees wanting more participation. And so we’re trying to figure it out. But it’s complicated, too, because you want to balance that with not having the employees who are most passionate, who are probably from underrepresented groups, paying an additional tax, right? Now they have a job on top of their job. So you need to balance that. In a year’s time we’ll have something to say, whether it’s gonna be, “it’s amazing!” or “wow, we learned that that first version didn’t work.” There will still be something to say, because we are gonna focus a lot on how we give more people voice.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔼 Trending up: TikTok launched a new video series with some of the app’s top creators to help people spot misinformation. The campaign, called “Be Informed,” will address topics like how to scrutinize the credibility of sources and how to distinguish fact from opinion. (Mark Sullivan / Fast Company)

🔽 Trending down: Twitter’s rigid fact-check rules have allowed President Trump to continue spreading false information about the election. Under the company’s policies, untrue tweets about vote-rigging in a specific state are unacceptable. But the rules don’t apply when those lies are spread on a national scale. (Marshall Cohen / CNN)

Governing

Facebook announced it will add labels to all posts from presidential candidates that mention voting or ballots, regardless of whether they contain misinformation. The move is a response to recent accusations that the company isn’t doing enough to tackle voter suppression on the platform. Here’s Sara Fischer at Axios:

The labels, rolling out today, aren’t a judgment of whether the posts themselves are accurate, but are instead meant to signal to Facebook users that they can get the most accurate information about voting by leading them to an official government website.

Related: Facebook is rife with misinformation about voting, according to an analysis by ProPublica and the nonprofit First Draft. Many of the misleading posts center on voting by mail, which is the safest way of casting a ballot during the pandemic. While these posts appear to violate Facebook’s policies, many remain up. (Ryan McCarthy / ProPublica)

The White House said restrictions on TikTok could come in “weeks, not months.” Trump’s chief of staff Mark Meadows said the administration is also looking at WeChat “and other apps that have the potential for national security exposure.” Sam Byford at The Verge has the story:

“There are a number of administration officials who are looking at the national security risk as it relates to TikTok, WeChat and other apps that have the potential for national security exposure, specifically as it relates to the gathering of information on American citizens by a foreign adversary,” Meadows told reporters traveling from Atlanta on Air Force One. “I don’t think there’s any self-imposed deadline for action, but I think we are looking at weeks, not months.”

A prosecutor on the trial team that won Roger Stone’s conviction is leaving the Justice Department to join Facebook, where he will set policy on the site’s content. The move follows Trump’s decision to commute Stone’s sentence. (Christian Berthelsen / Bloomberg)

Attorney General William Barr accused companies like Google, Microsoft, Yahoo, and Apple of being “all too willing to collaborate with the (Chinese Communist Party).” He added that Hollywood has routinely caved in to pressure and censored its films “to appease the Chinese Communist Party.” (Sarah N. Lynch, David Shepardson / Reuters)

A second surge in coronavirus deaths is upon us. And it was easily predicted by all available data on the subject. (Alexis C. Madrigal / The Atlantic)

Russian hackers are attempting to steal coronavirus vaccine research from American, British, and Canadian universities and health care organizations. The National Security Agency said that a hacking group implicated in the 2016 break-ins into Democratic Party servers is behind the attacks. (Julian E. Barnes / The New York Times)

Europe’s top court struck down a flagship EU-US data flows arrangement called Privacy Shield. The court’s finding is that “the requirements of US national security, public interest and law enforcement have primacy, thus condoning interference with the fundamental rights of persons whose data are transferred to that third country.” (Natasha Lomas / TechCrunch)

Industry

Instagram is preparing to launch its TikTok competitor, known as Reels, in the US. The company expects to bring the new video feature to its platform in early August. Here’s Sarah Perez at TechCrunch:

Reels was designed to directly challenge TikTok’s growing dominance. In a new area in the Instagram app, Reels allows users to create and post short, 15-second videos set to music or other audio, similar to TikTok. Also like TikTok, Reels offers a set of editing tools — like a countdown timer and tools to adjust the video’s speed, for example — that aim to make it easier to record creative content. Instagram, however, doesn’t have the same sort of two-tabbed, scrollable feed, like TikTok offers today.

The move to more quickly roll out Reels to more markets comes as TikTok has come under intense scrutiny for its ties to China. India banned the app, along with 58 other mobile applications designed by Chinese firms, in June. The Trump administration more recently said it was considering a similar ban on TikTok, for reasons related to national security. Yesterday, it said such a decision could be just weeks away.

Instagram is also starting to roll out a dedicated Shop page under the Explore tab that’ll highlight different brands and items that people can purchase. The move is meant to make it easier for people to shop inside its app. (Ashley Carman / The Verge)

Facebook added screen sharing to Messenger video calls on its iOS and Android mobile apps. Previously, the feature was only available on Messenger’s web or desktop apps. (Christine Fisher / Engadget)

Hollywood has been noticeably silent on the Facebook ad boycott. While film studios are big advertisers on the platform, only Magnolia Pictures and Sesame Street have joined what civil rights groups are calling the #StopHateForProfit campaign. (Brooks Barnes and Nicole Sperling / The New York Times)

Google launched a video shopping platform called Shoploop to introduce consumers to new products in under 90 seconds. It’s a project from the company’s R&D division, Area 120, where it tests out new ideas with a public user base. (Sarah Perez / TechCrunch)

Twitter is rolling out a new version of its developer API. Twitter API v2 includes features that were missing from the earlier API, like conversation threading, poll results in tweets, pinned tweets, spam filtering and more powerful stream filtering and search query language.
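
To give a sense of how those new features surface to developers, here’s a minimal, hypothetical sketch of a v2 recent-search request in Python that asks for the conversation_id field behind the threading support. The bearer token, query, and field list are placeholders for illustration; the exact parameters available to you may differ.

```python
# A minimal sketch of a Twitter API v2 recent-search request (illustrative only).
# YOUR_BEARER_TOKEN is a placeholder; you'd substitute a real developer token.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"

def search_recent_tweets(query: str, max_results: int = 10) -> dict:
    """Fetch recent tweets matching `query`, requesting the conversation_id
    field that underpins the new threading support."""
    response = requests.get(
        "https://api.twitter.com/2/tweets/search/recent",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={
            "query": query,
            "max_results": max_results,
            "tweet.fields": "conversation_id,created_at",
        },
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Example: pull recent tweets from a single account, excluding retweets.
    results = search_recent_tweets("from:TwitterDev -is:retweet")
    for tweet in results.get("data", []):
        print(tweet["conversation_id"], tweet["text"][:80])
```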

Twitter also unveiled a new interface for its direct messages on the web, allowing users to send and receive DMs without having to leave their timelines. Currently, users have to open a separate section of Twitter’s website to look at their DMs. (Jon Porter / The Verge)

Amazon added live streaming to its existing Amazon Influencer Program. The move gives live streamers a new way to earn commissions on purchases of products showcased in their streams. Who will reinvent QVC first — Amazon or Instagram? (Sarah Perez / TechCrunch)

Amazon is extending its corporate work-from-home policy through January 2021. The company is also planning to continue restricting nonessential business travel through the end of the year. (Nick Statt / The Verge)

A viral Twitter account about Jurassic Park is the perfect satire for companies reopening amid the coronavirus pandemic. (David Mack / BuzzFeed)

Things to do

Stuff to occupy you online during the quarantine.

Listen to a special reunion episode of Why’d You Push That Button. Ashley Carman and Kaitlyn Tiffany are back to talk about virtual dating during the pandemic.

Look out someone else’s window. Window Swap is an open platform for sharing the view you have of the world outside.

Watch Hamilton as sung by the Muppets. And then watch Hamilton again.

Those Good Tweets

Talk to us

Send us tips, comments, questions, and Twitter theories: casey@theverge.com and zoe@theverge.com.