
What newsrooms can learn from threat modeling at Facebook

An interview with Facebook’s ex-security chief

Editor’s note: We’re barreling toward the 2020 election with big unresolved problems in election interference — from foreign actors to domestic troublemakers. So how can journalists sort through all of the noise without having their coverage or their newsrooms compromised? Jay Rosen, one of America’s foremost press critics and a professor of journalism at NYU, argues that national news providers must work to “identify the most serious threats to a free and fair election and to American democracy.” In an essay on PressThink, Rosen says that newsrooms need threat modeling teams, which could be fashioned after those run by major platforms like Facebook. To explore this model, Rosen interviewed Alex Stamos, the former chief security officer of Facebook and a public advocate for democracy and election security. Their interview is published in full below.

Jay Rosen: You’re a former chief security officer at Yahoo and Facebook, among other roles you have had. For people who might not know what that means, what is a CSO responsible for? 

Alex Stamos: Traditionally, the chief information security officer is the most senior person at a company who is solely tasked with defending the company’s systems, software, and other technical assets from attack. In tech companies, the chief security officer title is sometimes used instead, since there is only a small physical security component to the job. I had the CISO title at Yahoo and CSO at Facebook. In the latter job, my responsibility broke down into two categories.

The first was the traditional defensive information security role. Basically, supervising the central security team that tries to understand risk across the company and work with many other teams to mitigate that risk.

The second area of responsibility was to help prevent the use of Facebook’s products to cause harm. A lot of teams at Facebook worked in this area, but as CSO I supervised the investigations team that would handle the worst cases of abuse.

Abuse is the term we use for technically correct use of a product to cause harm. Exploiting a software flaw to steal data is hacking. Using a product to harass people, or plan a terrorist attack, is abuse. Many tech companies have product and operational teams focused on abuse, which we also call “trust and safety” in the Valley.

In this case, I had lots of partners for both of my areas of responsibility, and a lot of the job was coordination and attempting to create a coherent strategy out of the efforts of hundreds of people. The CSO / CISO also plays an important role as one of the few executives with access to the CEO and board who is purely paranoid and can speak frankly about the risks the company faces or creates for others.

And where does the discipline of threat modeling fit into those responsibilities you just described? I’m calling it a “discipline.” Maybe you have another term for it.

When I hear most people say “threat modeling,” they don’t mean the act of formal threat modeling that some companies do, so I’ll take a step back and we can discuss some terminology as I understand it.

Please do.

Threat modeling is a formal process by which a team maps out the potential adversaries to a system and the capabilities of those adversaries, maps the attack surfaces of the system and the potential vulnerabilities in those attack surfaces, and then matches those two sets together to build a model of likely vulnerabilities and attacks. Threat modeling is useful to help security teams perform resource management. 
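To make that matching step concrete, here is a minimal sketch in Python of how a team might cross a list of adversaries and their capabilities against a list of attack surfaces to produce candidate attacks. The adversaries, capabilities, and surfaces are invented placeholders for illustration, not any real company’s model.

```python
# A minimal, hypothetical sketch of the matching step described above.
# All adversaries, capabilities, and attack surfaces here are invented.

from dataclasses import dataclass


@dataclass
class Adversary:
    name: str
    capabilities: set  # techniques this actor is known to use


@dataclass
class AttackSurface:
    name: str
    exposed_to: set  # techniques that could plausibly compromise this surface


adversaries = [
    Adversary("state-sponsored intrusion group", {"spear-phishing", "custom malware"}),
    Adversary("commodity criminal actor", {"credential stuffing", "spear-phishing"}),
]

surfaces = [
    AttackSurface("employee email", {"spear-phishing"}),
    AttackSurface("public login endpoint", {"credential stuffing"}),
]

# The "model" is every plausible (adversary, surface, technique) pairing;
# a real exercise would also weight each pairing by likelihood and impact.
for adversary in adversaries:
    for surface in surfaces:
        overlap = adversary.capabilities & surface.exposed_to
        if overlap:
            print(f"{adversary.name} -> {surface.name} via {sorted(overlap)}")
```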

My manager at Yahoo, Jay Rossiter, once told me that my entire job was “portfolio management.” I had a fixed (and at Yahoo, quite small) budget of person-power, OpEx, and CapEx that I could deploy, so I had to be incredibly thoughtful about what uses for those resources would be most effective in detecting and mitigating risk.

Threat modeling can help you figure out where best to deploy your resources. Its use in tech greatly increased after Microsoft’s software security push of 2002–2010, during which time the company implemented formal threat modeling across all product teams. Microsoft faced a huge challenge in their Trustworthy Computing project, in that they had to reconsider design and implementation decisions across hundreds of products and billions of lines of code, years after that code had been written.

So threat modeling helped them understand where they should deploy internal and external resources. I was one of those external resources and Microsoft was one of the best customers of the consultancy I helped found in 2004. People interested in this kind of formal threat modeling can read about Microsoft’s process as captured by Frank Swiderski and Window Snyder in their book with the very creative title, Threat Modeling.

Since then, most tech companies have adopted some of those ideas, but very few use this intense modeling process.

But there’s a looser meaning to the term as well, right?

Others run formal threat modeling exercises, but with less heavyweight mechanisms.

Often, when people talk about “threat modeling,” they really mean “threat ideation,” which is a process where you explore potential risks from known adversaries by effectively putting yourself in their shoes.

So at a big tech company, you might have your threat intelligence team, which tracks known actors and their operations and capabilities, work with a product team to think through “what would I do if I was them?” 

This is usually less formal than a big threat model but equally helpful. It’s also a great exercise for making the product managers and engineers more paranoid. One of the fundamental organizational challenges for security leadership is dealing with the different mindsets of their team versus other teams.

People like to believe that their work is positive and has purpose. Silicon Valley has taken this natural impulse to an extreme, and the HBO show very accurately parodied the way people talk about “changing the world” when they are building a slightly better enterprise resource management database. 

So product people are innately positive. They think about how the product they are building should be used and how they and the people they know would benefit.

Security and safety people spend all their time wallowing in the misery of the worst-case abuses of products, so we tend to immediately see only the negative impacts of anything.

The truth is somewhere in the middle, and exercises that bring both sides together to think about realistic threats are really important.

Makes sense.

Two more models: the first is Red Teaming. A Red Team is a team, either internal to the company or hired from external consultants, that pretends to be an adversary and acts out their behavior with as much fidelity as is possible.

At Facebook, our Red Team ran large exercises against the company twice a year. These would be composed based upon studying a real adversary (say, the Ministry of State Security of the People’s Republic of China, aka APT 17 or Winnti). 

The exercises would simulate an attack, start to finish. They would spend months planning these attacks and building deniable infrastructure that couldn’t be immediately attributed to the team.

And then they would execute them from off campus, just like a real attacker. This is an important process for testing not just technical vulnerabilities but also the response capabilities of the “blue team.” Only I and my boss (the General Counsel) would know that this breach was not real, so everybody else responded as they would in a real crisis. This was sometimes not super fun.

One exercise at Facebook started with a red team member visiting an office where nobody knew him. He hid his Facebook badge and spent time playing with one of those scheduling tablets outside of each conference room. He installed malware that called out and established a foothold for the team. From there, the team was able to remotely jump into a security camera, then into the security camera software, then into the virtualization infrastructure that software ran on, then into the Windows server infrastructure for the corporate network.

At that point they were detected, and the blue team responded. Unfortunately, this was at something like 4AM on a Sunday (the London office was on-call) so I had to sit in a conference room and pretend to be super worried about this breach at 5AM. My acting probably wasn’t great.

At some point, you call it and allow the blue team to sleep. But you end up finishing out the entire response and mitigation cycle.

After this was over, we would have a marathon meeting where the red team and blue team would sit together and compare notes, stepping through each step the red team took. At each step, we would ask ourselves why the blue team didn’t detect it and what we could do better.

Sounds like an action movie in some ways, except most of the “action” takes place on keyboards.

Yes, an action movie except with keyboards, tired people in Patagonia vests, and living off of the free snack bars at 3AM.

The red team exercise would lead to one last process, the tabletop exercise. A tabletop is like a red team but compressed and without real hacking.

This is where you involve all the non-technical teams, like legal, privacy, communications, finance, and internal audit, as well as the top executives.

This seems relevant to what I am proposing.

I can’t tell Mark Zuckerberg that the company has been breached and then follow up with “Gotcha! That was an exercise!” 

I guess I could have done that exactly once.

Right.

So with a tabletop, you bring everybody together to walk through how you would respond to a real breach.

We would base our tabletops on the red team exercises, so we would know exactly which attacks were realistic and how the technical blue team responded.

The way I ran our exercises was that we would tell people way ahead of time to set aside an entire workday. Let’s say it’s a Tuesday.

Then, that morning, we would inject the scenario into various parts of the company. One exercise we ran was focused on the GRU breaking into Facebook to steal the private messages of a European politician and then blackmailing them. 

So at midnight Pacific time, I sent an email to the Irish office, which handles European privacy requests, from the interior ministry of this targeted country saying that they thought their politician’s account had been hacked.

Early East Coast time, the DC comms team got a request for comment from “The Washington Post.”

The tech team got a technical alert.

All these people know it’s an exercise, and you have to carefully mark the emails with [RED TEAM EXERCISE] so that some lawyer doesn’t discover them and say you had a secret breach.

Then, as CSO, my job was to take notes on how these people contacted our team and what happened during the day. In the late afternoon, we pulled 40 people together around the world (back when people sat in conference rooms) and talked through our response. At the end, the CEO and COO dialed in and the VPs and GC briefed them on our recommended strategy. We then informed the board of how we did.

This is an incredibly important process.

I can see why.

Breaches are (hopefully) black swan events. They are hard to predict and infrequent, so what you find from these exercises is that the internal communication channels and designation of responsibility are extremely vague.

In this exercise I mentioned, there were actually two entirely different teams working to respond to the breach without talking to one another.

So the technical Red Team helps you improve the response of the hands-on-keyboard people, and the tabletop helps you improve the non-tech teams and executive response.

The other benefit is that everybody gets used to what a breach feels like.

I used to do this all the time as a consultant (still do, occasionally) and it is much easier to stay calm and to make intelligent decisions when you at least have been in a simulated firefight.

Anyway, all those things might be exercises you could lump under “threat modeling.”

Thank you, this all makes sense to me, as a layman. One more question on threat modeling itself. Then on to possible adaptation in election year journalism. 

What is the end product of threat modeling? What does it help you do? To put it another way, what is the deliverable? One answer you have given me: it helps you deploy scarce resources. And I can immediately see the parallel there in journalism. You only have so many reporters, so much room on the home page, so many alerts you can send out. But are there other “products” of threat modeling?

The most important outputs are the process and organizational changes necessary to deal with the inevitability of a crisis.  

Being a CISO is like belonging to a meditative belief system where accepting the inevitability of death is just a step on the way to enlightenment. You have to accept the inevitability of breach.

So one “deliverable” is the changes you have to make to be ready for what is coming. 

For journalists, I think you have to accept that somebody will try to manipulate you, perhaps in an organized and professional fashion. 

Let’s look back at 2016. As I’ve discussed multiple times, I think it’s likely that the most impactful of the five separate Russian operations against the election was the GRU Hack and Leak campaign.

While there were technical components to the mapping out of the DNC / DCCC and the breach of their emails, the real goal of the operation was to manipulate the mainstream US media into changing how they approached Hillary Clinton’s alleged misdeeds. 

They were extremely successful.

So, let’s imagine The New York Times has hired me to help them threat model and practice for 2020. This is a highly unlikely scenario, so I’ll give them the advice here for free.

First, you think about your likely adversaries in 2020.

You still have the Russian security services. FSB, GRU, and SVR.

So I would help gather up all of the examples of their disinformation operations from the last four years.

Yes, I am following.

This would include the GRU’s tactic of hacking into websites to plant fake documents, and then pointing their press outlets at those documents. When the documents are inevitably removed, they spin it as a conspiracy. This is something they did to Poland’s equivalent of West Point, and there has been some recent activity that looks like the planting of fake documents to muddy the waters on the poisoning of Navalny. 

You have the Russian Internet Research Agency, and their current activities. They have also pivoted and now hire people in-country to create content. Facebook broke open one of these networks this week.

This year, however, we have new players! You have the Chinese. China is really coming from behind on combined hacking / disinformation operations, but man are they making up time fast. COVID and the Hong Kong crisis have motivated them to build much more capable overt and covert capabilities in English.

And most importantly, in 2020, you have the domestic actors.

The Russian activity in 2016, from both the security services and troll farms, has been really well documented.

And breakdowns created by government, like an overwhelmed Post Office.

Yes, true!

I wrote a piece for Lawfare imagining foreign actors using hacking to cause chaos in the election and then spreading that with disinfo. It’s quaint now, as the election has been pre-hacked by COVID.

The struggles that states and local governments are having to prepare for pandemic voting and the intentional knee-capping of the response by the Administration and Republican Senate have effectively pre-hacked the election — in that there is already going to be huge confusion about how to vote, when to vote, and whether the rules are being applied fairly.

So, anyway, this is “threat ideation.”

Right.

Then, I would examine my “attack surfaces.”

For The New York Times, those attack surfaces would be the ways these adversaries would try to inject evidence or narratives into the paper. The obvious one is hacked documents. Worked great in 2016, why change horses?

And there has been some discussion of that. But no real preparation that I am aware of.

But I would also consider these other actions by the GRU, like creating fake documents and “leaking” them in deniable ways. (The Op-Ed page also turns out to be an attack surface, but that’s another discussion.)

So from this threat ideation and attack surface mapping, I would create a realistic scenario and then run a tabletop exercise. I would do it the exact same way. Tell key reporters, editors, and the publisher to set aside a day.

Inject stolen documents via their SecureDrop, call a reporter on Signal from a fake 202 number, and claim to be a leaker (backstopped with real social media, etc.).

Then pull everybody together and talk about “What would we do in this situation?” See who makes the decisions, who would be consulted. What are the lines of communication? I think there is a real parallel here with IT breaches, as you only have hours to respond.

I would inject realistic new data. “Fox News just ran with the story! What do you do?” And coming out of that you do a post-mortem of “How could we have responded better?”

That way, when the GRU releases the “Halloween Documents,” including Hunter Biden’s personal emails and a fake medical record for VP Biden, everybody has exercised the muscle of making these decisions under pressure.

Okay, we are getting somewhere. 

I have written that our big national news organizations should have threat modeling teams in order to cope with what’s happening in American democracy, and in particular the November elections.

By “threat” in that setting I did not mean attacks on news companies’ IT systems, or bad actors trying to “trick” a reporter, so much as the threat that the entire system for having a free and fair vote could fail, the possibility that we could slip into a constitutional crisis, or a very dangerous kind of civil chaos, or even “lose” our democracy — which is no joke — and of course all the ways the news system as a whole could be manipulated by strategic falsehoods, or other methods.

In that context, how practical do you think this suggestion — big national news organizations should have threat modeling teams — really is? 

It’s absolutely realistic for the big organizations. The New York Times, NBCUniversal (Comcast has a very good security team), CNN (part of AT&T, with thousands of security people and a huge threat intel team). The Washington Post is probably the break-even organization, and smaller papers might have difficulty affording this.

I was thinking about the big players.

But even small companies can and do hire security consultants. So like in tech, the big players can have in-house teams and the smaller ones should bring in experts to help plan for a couple of weeks. The big organizations all have great reporters who have been studying this issue for years.

There is a great parallel here with tech. In tech, one of our big problems is that the product team doesn’t appropriately consult the in-house experts on how those products are abused, maybe because they don’t want to know.

From the scuttlebutt I’ve heard, something similar sometimes happens in newsrooms, with editors and reporters from different teams not consulting the people who have spent years on this beat.

That can happen, yes.

NBC should not run with stolen documents without asking Ben Collins and Brandy Zadrozny for their opinions. The Times needs to call Nicole Perlroth and Sheera Frenkel. The Post, Craig Timberg and Elizabeth Dwoskin. 

It can happen because maybe some people don’t want the story shot down.

Right, they don’t want to hear “you are getting played,” especially if it’s a scoop.

Just like Silicon Valley product people don’t want to hear “That idea is fundamentally dangerous.”

One of the products that I thought could come from the newsroom threat modeling team is a “live” Threat Urgency Index, republished daily. It would be an editorial product published online and in a newsletter, sort of like Nate Silver’s election forecast.

The Threat Urgency Index would summarize and rank the biggest dangers to a free and fair election and to American democracy during the election season by merging assessments of how consequential, how likely, and how immediate each threat is. It would change as new information comes in. How might such an Index work in your vision?
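To illustrate the kind of merge being proposed here, this is a minimal sketch in Python of an index that scores each threat on consequence, likelihood, and immediacy and ranks by the product of the three. The threats and 1-to-5 scores below are invented for illustration only, not drawn from any real assessment.

```python
# A hypothetical sketch of a Threat Urgency Index: each threat gets rough
# 1-to-5 assessments of consequence, likelihood, and immediacy, and the index
# ranks threats by the product of the three. All names and scores are invented.

threats = [
    {"name": "hack-and-leak of forged documents", "consequence": 5, "likelihood": 4, "immediacy": 3},
    {"name": "disinformation about how and when to vote", "consequence": 4, "likelihood": 5, "immediacy": 5},
    {"name": "coordinated domestic troll campaign", "consequence": 3, "likelihood": 5, "immediacy": 4},
]


def urgency(threat):
    """Combine the three assessments into a single sortable score."""
    return threat["consequence"] * threat["likelihood"] * threat["immediacy"]


# Print the ranked index, highest urgency first.
for rank, threat in enumerate(sorted(threats, key=urgency, reverse=True), start=1):
    print(f'{rank}. {threat["name"]} (urgency {urgency(threat)})')
```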

I think that would be useful, but I am doubtful you can create quantitative metrics that mean something.

InfoSec has spent years and millions on trying to create quantitative risk management models. We are all jealous of the financial risk modeling that financial institutions do.

But it turns out that trying to build those models in very fast-moving, adversarial situations where we are still learning about the fundamental weaknesses is incredibly hard.

Accounting is like 500 years old. Probably older in China.

Maybe not a quantitative ranking with scoring, but how about a simple hierarchy of threats?

I think an industry-wide threat ideation and modeling exercise would be great. And super useful for the smaller outlets. One of the things I’ve said to my Times / Post / NBC friends is that they really need to both create internal guidelines on how they will handle manipulation but then publish them for everybody else. This is effectively what happens in InfoSec with the various information sharing and collaboration groups.

The big companies generate threat intel and ideas that are consumable by companies that can’t afford in-house teams.

A Threat Urgency Index might be seen as an industry-wide resource. And what about these categories — how consequential, how likely, and how immediate each threat is — are they in fact distinct? Do they make sense to you?

You are effectively talking about creating the journalism equivalent of the MITRE ATT&CK Matrix. This is a resource that combines the output of hundreds of companies into one mapping of Adversaries, to Kill Chain, to Technique, to Response.

It’s an extremely useful resource for companies trying to explore all of the areas they should be considering.
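For a rough sense of the shape of that mapping, here is a toy sketch in Python that chains adversary groups to techniques and techniques to candidate responses. The entries are invented; the real ATT&CK knowledge base (https://attack.mitre.org) is far richer and community-maintained.

```python
# A toy sketch of an adversary -> technique -> response mapping, with invented
# entries standing in for the much richer real ATT&CK matrix.

group_to_techniques = {
    "example intrusion group": [
        "spear-phishing attachment",
        "web shell on a public-facing server",
    ],
}

technique_to_responses = {
    "spear-phishing attachment": [
        "attachment detonation sandbox",
        "user reporting and takedown workflow",
    ],
    "web shell on a public-facing server": [
        "web application firewall rules",
        "file integrity monitoring",
    ],
}

# Walk the chain from adversary group to technique to candidate response.
for group, techniques in group_to_techniques.items():
    for technique in techniques:
        for response in technique_to_responses.get(technique, []):
            print(f"{group} -> {technique} -> {response}")
```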

Final question. Put on your press criticism hat for a moment: What worries you about how the American news media is confronting these dangers?

Well, I guess I would have two major criticisms. 

First, for the last four years, most media outlets have spent most of their time covering the failures of tech, which were very real, and not their own failures. This has distorted the public perception of impact, elevating diffuse online trolling above highly targeted manipulation of the narrative. It also means that they are likely still open to being attacked themselves by the same means. Just listen to Mike Barbaro’s podcast with Dean Baquet and it’s obvious that some people think they did great in 2016.

Yep. I wrote about it. The big problem was not talking to enough Trump voters, according to Dean.

Second, the media is still really bad at covering disinformation, in that they give it a huge amount of reach that wasn’t earned by the initial actor. The best example of this is the first “slowed down Nancy Pelosi” video. Now, there is an entire debate to be had on manipulated media and the line between parody and disinformation. But even if you assume that there is something fundamentally wrong with that video, it had a very small number of views until people started pointing at it on Twitter and then in the media to criticize it. This individual domestic troll became national news! I did an interview on MSNBC about it, and while I was talking about how we shouldn’t amplify these things they were playing the video in split-screen!

This is a huge problem.

I have written about this, too. The dangers of amplification have not been thought through very well in most newsrooms.

Because the incorrect, dominant narrative has created the idea that every spicy meme is a Russian troll and that any amount of political disinformation, which is inevitable in a free society, automatically invalidates the election results. That is an insane amount of power to give these people.

You could see this as hacking the “newsworthiness” system.

There are people doing good, quantitative work on the impact of both online and networked disinformation and the impact is usually much more mild than you would expect. That doesn’t mean we shouldn’t stop it (especially in situations like voting disinformation, which can directly affect turnout) but we need to put online disinformation in a sane ranking of risks against our democracy. 

A sane ranking of risks against our democracy. That’s the Threat Urgency Index.

I’m glad you are covering these things.