The CyberWire Daily Podcast EP. 389 With Guest Speaker David Brumley

David Brumley
August 14, 2019

The CyberWire Daily podcast delivers the day's cybersecurity news in a concise format. The CyberWire Daily includes interviews with a wide spectrum of experts from industry, academia, and research organizations all over the world.

On July 22, 2019, David Brumley, CEO of ForAllSecure and professor at CMU, joined The CyberWire Daily host Dave Bittner to discuss how autonomous security enables security and development teams to not only automate security as a part of the SDLC, but also implement a data-driven rubric for determining whether an application is secure enough for production. This 20-minute podcast is available for listening below, along with the full transcript.

Transcript

Dave Bittner: [00:00:03] A contractor for Russia's FSB security agency was apparently breached. NSO Group says its Pegasus software can now obtain access to private messages held in major cloud services. Iranian cyber operations are said to be spiking, and Tehran is paying particular attention to LinkedIn. Colleges and universities are experiencing ERP issues and a minor wave of bogus student applications. Equifax receives its judgment. And there's a sentence in the case of the NSA hoarder. 

Dave Bittner: [00:00:39]  And now a word from our sponsor, ExtraHop, the enterprise cyber analytics company delivering security from the inside out. The cloud may help development and application teams move fast, but for security teams already dealing with alert fatigue, tool sprawl and legacy workflows, cloud adoption means a lot more stress. You're building your business cloud first. It's time to build your security the same way. ExtraHop's Reveal(x) provides network detection and response for the hybrid enterprise. With complete visibility, real-time detection and guided investigation, Reveal(x) helps security teams unify threat detection and response across on-prem and cloud workloads so you can protect and scale your business. Learn more at extrahop.com/cyber. That's extrahop.com/cyber. And we thank ExtraHop for sponsoring our show. 

Dave Bittner: [00:01:37]  Funding for this CyberWire podcast is made possible in part by ExtraHop, providing cyber analytics for the hybrid enterprise. Learn more about how ExtraHop Reveal(x) enables network threat detection and response at extrahop.com. 

Dave Bittner: [00:01:51]  From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Monday, July 22, 2019. 

Dave Bittner: [00:02:00]  The BBC's Russian-language service reported late Friday that SyTech, a Moscow-based IT firm, had been successfully hacked. The company's website was defaced with a leering Yoba face, and the attackers claimed to have stolen some 7 1/2 terabytes of data. SyTech is generally thought to be an FSB contractor. Among the data the attackers obtained and shared with hacktivist group Digital Revolution were screenshots of the target company's internal interface, including some employees' names and notes on the projects they were working on. The SyTech projects exposed included social media monitoring solutions and Tor deanonymization tools.

Dave Bittner: [00:02:42]  The Financial Times reports that the controversial lawful intercept shop NSO Group says it can access private messages held in major cloud services, including those provided by Apple, Amazon, Google and Facebook. The claims are found in marketing pitches for an enhanced version of NSO Group's Pegasus tool. The ability to get information from clouds that are normally thought secure, notably Apple's iCloud, is new for Pegasus. Reports suggest that smishing is one possible attack vector for the spyware. 

Dave Bittner: [00:03:09]  This isn't a commodity attack tool. Pegasus is pricey. BGR says the price tag for Pegasus is in the range of millions of dollars. This effectively limits its market to government customers, and these, indeed, seem to have been NSO Group's principal buyers. 

Dave Bittner: [00:03:32]  CBS News and others report that Microsoft has observed a spike in Iranian cyberattacks since nuclear nonproliferation agreements collapsed. FireEye warned last week that APT34, also known as Helix Kitten, is undertaking a large catphishing campaign via LinkedIn. Its apparent goal is espionage directed against the financial and energy sectors. Government agencies are also targeted. Catphishing, you will recall, is the creation of a fictitious online persona used to induce a victim to connect in some way. In this case, they're seeking to establish a connection over LinkedIn. 

Dave Bittner: [00:04:10]  Late last week, the U.S. Department of Education warned that there had been active and ongoing exploitation of the Ellucian Banner system. Banner is an enterprise resource planning solution widely used by colleges and universities to manage student services, registration, grade reporting and financial aid. Modules also offer academic institutions tools for human resources and financial operations. Sixty-two colleges and universities are believed to have been affected. The attackers are using administrative privileges to create fraudulent student accounts. As many as 600 bogus accounts have been created in a single day, with totals over several days running into the thousands. The Department of Education says the phony accounts are almost immediately being put to unspecified criminal use. 

Dave Bittner: [00:05:00]  Ellucian, which Education Dive says had patched the vulnerability back in mid-May, said in a statement Friday that there are really two issues here. First, of course, is the now-fixed bug. The second is the creation of fraudulent applications. The issue, Ellucian says, is unrelated to the vulnerability in Banner. The company believes that criminals are, quote, "utilizing bots to submit fraudulent admissions applications and obtain institution email addresses through admission application portals," end quote. That's not an issue specific to Banner. Ellucian recommends that schools adopt safeguards like reCAPTCHA to better secure their information.

Dave Bittner: [00:05:41]  Dr. David Brumley is co-founder and CEO of ForAllSecure and a professor at Carnegie Mellon University, where he's also faculty adviser to the Plaid Parliament of Pwning, CMU's competitive security team. He's got some thoughts on DevSecOps, specifically his belief that autonomous security is the key to fixing what he says is a broken system. 

David Brumley: [00:06:04]  Yeah. What we're talking about in DevSecOps is making all sorts of security testing part of normal development. It's part of this movement to shift left from doing security testing at the end of application development to really making it part of the entire development lifecycle. 

Dave Bittner: [00:06:20]  This insertion of the Sec into DevOps - what's been the practical implications of that? 

David Brumley: [00:06:26]  I think the practical implications are - you get two things. First, you actually get more reliable software. A lot of security tests are about - how can you crash an application or take it over? - which kind of sounds like security. But the business impact is often that your service becomes unavailable. So I think that's really the primary impact of putting the Sec in - you get higher-quality software in addition to, of course, more secure software.

Dave Bittner: [00:06:50]  And this realization across the industry that this is a better way to go than doing your security testing at the tail end, is that pretty much an accepted practice these days? 

David Brumley: [00:07:01]  I think everyone accepts the notion, and they're trying to figure out how to implement it right now. 

Dave Bittner: [00:07:05]  And so how is that going? 

David Brumley: [00:07:07]  I think that it's going well, but it's not without its troubled points. What we're finding is a lot of security teams start with applications already developed, and that's really their bread and butter. And so this idea of pushing it back to developers really requires you move from security teams to the developers themselves. 

Dave Bittner: [00:07:25]  So we're talking about just sort of an ongoing collaboration between the teams and embedding the security teams in with the developers. 

David Brumley: [00:07:32]  I think so. I think there's really two ways we see it. First, you can embed the security team inside the development team. Or second, you can give developers better tools and better training about what security is going to check for.

Dave Bittner: [00:07:45]  I'm curious. With your role as a teacher, as a professor, how much of this is embedding that philosophy in with the students who are learning to be developers? 

David Brumley: [00:07:56]  I think it's a huge part. I mean, at Carnegie Mellon, we started teaching our sophomores about computer security maybe 10 years ago. But I think we're one of the few. And the thing that's really interesting is, when you start teaching developers about security, they just become better developers; they see all the different ways they can get things wrong. Humans are absolutely awful at assessing security, especially at the end. And we also know that they're underresourced. And so what we want to do is build autonomous systems that help take the load off a human but also turn everything really into a more data-driven rubric as opposed to just a gut feel on whether something's secure enough or not.

Dave Bittner: [00:08:35]  Can you walk me through that? Can you give me an example of how that would play out? 

David Brumley: [00:08:39]  When you go out and you - let's say you're doing a penetration test at the end of the application lifecycle. A lot of times, you're just scanning for known vulnerabilities. And I tell you what - like, when a hacker is trying to break into your system, they're not just scanning for known vulnerabilities, at least not the good ones who are trying to get into your system. And so what we're starting to do is add in tools that help build in security checks as you build and ship software. And a lot of that's actually about security testing. How do you, when a developer builds an application, autonomously take that application and give it to something that's going to help, almost like a penetration test, and do that every time you release the software? 

Dave Bittner: [00:09:14]  And does that increase any significant friction? Does it slow the process down? 

David Brumley: [00:09:18]  I think it's one of those things of cost-benefit. So some people will say it slows it down, just the way software testing slows down anything, right? Like, there's this short-term pain of, oh, man, writing that test case or ingesting it into that autonomous system, that's extra work. But once you do it, it's absolutely beautiful because then the system does all those things that you used to have to do manually. One of the cool things about this tech, and it was really pioneered in the lab, is that as you learn more and more about the security of your app, you actually create an automated regression suite to make sure that you continue to check those things. And that also bootstraps the process next time you have a release. So when you add new code, it checks all the old things and then tries to find new things in the new code. 
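
To make the workflow Brumley describes concrete, the snippet below is a minimal editorial sketch of an autonomously replayed security check using Go's built-in fuzzing (Go 1.18 or later). The package name, the fuzz_test.go file name, and the JSON round-trip property under test are illustrative assumptions, not anything discussed in the episode.

```go
// fuzz_test.go - a minimal sketch of autonomous security testing in a build.
// `go test -fuzz=FuzzJSONRoundTrip` searches for new crashing inputs, while a
// plain `go test` replays every previously found failing input saved under
// testdata/fuzz/ - the automated regression suite described above.
package fuzzdemo

import (
	"encoding/json"
	"testing"
)

func FuzzJSONRoundTrip(f *testing.F) {
	// Seed corpus: a known-good input the fuzzer mutates from.
	f.Add([]byte(`{"name":"demo"}`))

	f.Fuzz(func(t *testing.T, data []byte) {
		var v any
		// Skip inputs that don't parse; only valid documents are checked further.
		if err := json.Unmarshal(data, &v); err != nil {
			return
		}
		// Property under test: anything that parses must re-encode without error.
		// Any panic or hang inside this function is reported as a bug, and the
		// triggering input is saved so it is re-checked on every future release.
		if _, err := json.Marshal(v); err != nil {
			t.Fatalf("re-encode failed: %v", err)
		}
	})
}
```

In a pipeline, a short time-boxed fuzz run (or a hand-off to an external fuzzing service) would sit alongside the unit tests in each release build, giving every release the kind of automated, penetration-test-like check described above.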

Dave Bittner: [00:10:00]  That's Dr. David Brumley from ForAllSecure. 

Dave Bittner: [00:10:04]  The U.S. Federal Trade Commission announced today that Equifax will pay $575 million in its settlement over the credit bureau's 2017 breach. The agreed settlement doesn't address only the FTC's complaint but figures in a global settlement with the Federal Trade Commission, the Consumer Financial Protection Bureau and 50 U.S. states and territories. The allegations hold that Equifax's failure to take reasonable steps to secure its network led to a data breach in 2017 that affected approximately 147 million people. 

Dave Bittner: [00:10:38]  Some $300 million will go into a fund that will provide affected consumers with identity protection. It will also compensate people who bought credit monitoring or identity theft protection from Equifax or who sustained other out-of-pocket expenses as a direct result of the breach. If $300 million isn't enough to cover such compensation, Equifax is on the hook to pony up an additional $125 million. The remainder of the amount will be distributed as follows - $175 million will go to 48 states, the District of Columbia and Puerto Rico; the remaining $100 million will be paid to the Consumer Financial Protection Bureau in civil penalties.

Dave Bittner: [00:11:19]  Former NSA contractor Hal Martin was sentenced to nine years' imprisonment on Friday for theft of classified information. As ZDNet observes, the government did not establish that Martin was the source of the Shadow Brokers' leaks. That had been widely believed, at least in the more speculative precincts of the online world, but it wasn't borne out in court. Martin had taken a guilty plea in federal court, admitting to theft of classified documents. Nine years is a stiff enough sentence, especially when compared to the maximum of 10 years he faced for each of the 20 counts against him. It is, however, in line with the expectations set in his plea agreement. Martin's defense attorneys had presented him as a hoarder, a pack rat and not a traitor, and after the sentence was passed, they pointed out that the government had not demonstrated treason or treasonous intent. They said his problems amounted to an extenuating mental health issue.

Dave Bittner: [00:12:15]  The prosecutors didn't buy it, according to CyberScoop. This is not a case of hoarding; this is stealing, the government argued. And they noted that the 50 terabytes of information Martin had squirreled away in his Glen Burnie shed was not squirreled away in a disorganized manner. The presiding judge also observed that, for a hoarder, Hal Martin seemed pretty well-organized. So don't be a hoarder. But it may look better on Judgment Day if your house looks like the digs you see on the reality TV show "Hoarders" - bad TV, but maybe not so bad in the courtroom.

Unidentified Person: [00:12:57]  Great party, huh? 

Dave Bittner: [00:12:59]  Yeah, yeah. Great party. Could you excuse me for just a moment? Hey, you. What are you doing? What - oh, no. Looks like another insider got into our systems when we weren't looking. I am going to be in so much trouble with the boss. 

Unidentified Person: [00:13:18]  Did someone say trouble? I bet I can help. 

Dave Bittner: [00:13:21]  Who are you? 

Unidentified Person: [00:13:22]  To catch insider threats, you need complete visibility into risky user activity. Here - I'll show you how ObserveIT works. 

Dave Bittner: [00:13:29]  Wow. Now I can see what happened before, during and after the incident. And I'll be able to investigate in minutes. It used to take me days to do this. 

Unidentified Person: [00:13:39]  Exactly. Now, if you'll excuse me, I think there's a cocktail over there with my name on it. 

Dave Bittner: [00:13:45]  But wait - what's your name? Oh, well. Thanks, ObserveIT and whoever she is. ObserveIT enables security teams to detect risky user activity, investigate incidents in minutes and effectively respond. Get your free trial at observeit.com/cyberwire.

Dave Bittner: [00:14:12]  And joining me once again is Joe Carrigan. He's from the Johns Hopkins University Information Security Institute and also my co-host on the "Hacking Humans" podcast. Joe, great to have you back. 

Joe Carrigan: [00:14:21]  Hi, Dave. 

Dave Bittner: [00:14:21]  A story here from CNET. 

Joe Carrigan: [00:14:23]  Yeah. 

Dave Bittner: [00:14:23]  And it's titled "More Than 1,000 Android Apps Harvest Data Even After You Deny Permissions." 

Joe Carrigan: [00:14:29]  How is this even possible, Dave? 

Dave Bittner: [00:14:31]  (Laughter) That's what I want you to explain. 

Joe Carrigan: [00:14:32]  Right, OK. 

Dave Bittner: [00:14:33]  What's going on here? 

Joe Carrigan: [00:14:34]  So these app developers are being very creative with how they gather your information. 

Dave Bittner: [00:14:40]  OK. 

Joe Carrigan: [00:14:40]  This article talks about one app in particular from Shutterfly. 

Dave Bittner: [00:14:45]  Photo app, yeah. 

Joe Carrigan: [00:14:46]  It's a photo app, and I think they sell books of your photos and all that stuff. So the app will ask you for permission to use your location data. 

Dave Bittner: [00:14:53]  OK. 

Joe Carrigan: [00:14:54]  Right? And you can deny the app permission to the location data. But they're still getting your location data because you have to give it access to your photos in order for it to work. And if your photos have the geotagging information in them, there's the same information from another source. So when you deny them access to the location data in the operating system, that essentially denies them access to the GPS receiver.

Dave Bittner: [00:15:17]  Right. 

Joe Carrigan: [00:15:17]  Right? So what's happening is, if you're geotagging your pictures, then your camera has access to the GPS receiver, your camera writes that GPS information into the photo, your Shutterfly app has access to the photo, and lo and behold, there's the GPS permission right there in the metadata. So essentially, they get the data anyway.
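
For readers who want to see this mechanism in code, below is a hedged sketch of how any program with read access to a photo can recover the location without ever touching the operating system's location permission. The third-party github.com/rwcarlsen/goexif package and the file name are illustrative assumptions; this is an editorial example, not code from any app in the study.

```go
// Sketch: recovering a location from a geotagged photo's EXIF metadata.
// No location permission is involved - only read access to the image file.
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/rwcarlsen/goexif/exif"
)

func main() {
	// "vacation.jpg" is a hypothetical photo the program is allowed to read.
	f, err := os.Open("vacation.jpg")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Parse the EXIF block the camera app embedded when the photo was taken.
	x, err := exif.Decode(f)
	if err != nil {
		log.Fatal(err)
	}

	// If the shot was geotagged, the coordinates come straight out of the file.
	lat, long, err := x.LatLong()
	if err != nil {
		fmt.Println("no GPS data embedded in this photo")
		return
	}
	fmt.Printf("photo taken at %.6f, %.6f\n", lat, long)
}
```

This is the loophole the Android Q change discussed later in the segment is meant to close: on Android 10 and later, an app generally needs a separate media-location permission (ACCESS_MEDIA_LOCATION) before location metadata in other apps' photos is exposed to it.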

Dave Bittner: [00:15:39]  Even though you said, yeah... 

Joe Carrigan: [00:15:41]  Well... 

Dave Bittner: [00:15:42]  It's one of those funny distinction without a difference things. 

Joe Carrigan: [00:15:44]  Right. 

Dave Bittner: [00:15:44]  Like, I said, I didn't want you to know where I am. 

Joe Carrigan: [00:15:47]  Right. 

Dave Bittner: [00:15:47]  No fair going around that way. 

Joe Carrigan: [00:15:50]  A Shutterfly spokeswoman said the company would only gather the information with explicit permission from the user - right? - despite what the researchers found. You know, it's funny - there are other companies in here that are using this. They're piggybacking on other apps to get access to the information that the user might not want them to have. And these are big-name companies like Baidu's Hong Kong Disneyland park app, right? They found a bunch of apps that are doing this, more than 1,300 of them. And this paper is going to be presented at the USENIX Security conference next month.

Dave Bittner: [00:16:23]  You specifically say, no, I don't want you to do this, and they find a workaround.

Joe Carrigan: [00:16:27]  Right. 

Dave Bittner: [00:16:28]  I know, overall, you're not a big fan of regulations. 

Joe Carrigan: [00:16:31]  Right, yeah. 

Dave Bittner: [00:16:31]  But boy, this makes me wonder, do we need a bigger stick here to tell them they can't do this? (Laughter). 

Joe Carrigan: [00:16:38]  Well, Google is going to fix this in the next release of their operating system. 

Dave Bittner: [00:16:41]  OK. 

Joe Carrigan: [00:16:42]  I don't know how they're going to do that, but it's not going to be available until Android Q. 

Dave Bittner: [00:16:47]  I guess it's - this demonstrates that we can't trust the app developers to do the right thing in good faith. 

Joe Carrigan: [00:16:53]  Right. One of the things that the researchers have done is they did notify the FTC of these apps. 

Dave Bittner: [00:16:58]  OK. 

Joe Carrigan: [00:16:59]  They sent this information to the FTC, as well as to Google. So they disclosed the vulnerability to Google, and then they told the FTC about it. 

Dave Bittner: [00:17:05]  All right. Well, maybe that's a good evolution. Maybe FTC could... 

Joe Carrigan: [00:17:09]  Maybe - yeah, maybe there is some regulation coming or at least some fines or some penalties. 

Dave Bittner: [00:17:11]  Yeah. 

Joe Carrigan: [00:17:11]  Now, what's interesting is that this is only research on Android apps. I'd like to see if this is being done in the Apple marketplace. Because I think Google approached this problem in good faith, right? Google says, all right, we're going to give users the ability to block apps from getting their location information. And I can very easily see this becoming something that, oh, you know what? We didn't think about that. We didn't think that apps could look at the photos and get the location information from the photos. 

Dave Bittner: [00:17:36]  Right. 

Joe Carrigan: [00:17:37]  Or we didn't think that apps could look at some other app and get the unique identifier for the phone out of that other app. We need to lock that down. And Google is taking care of it in the next release of Android. Unfortunately, it's going to be a couple of months before that's out. 

Dave Bittner: [00:17:50]  Yeah, yeah. I outlined it on a recent episode of "Grumpy Old Geeks," where I ran into something with - and this is on iOS.

Joe Carrigan: [00:17:57]  Right. 

Dave Bittner: [00:17:57]  I ran into something with an app tracking the food that I was eating, you know, trying to lose a few pounds. 

Joe Carrigan: [00:18:02]  Right. 

Dave Bittner: [00:18:03]  And I started seeing ads for the foods I was tracking on Twitter and - despite having disabled tracking explicitly from this app. 

Joe Carrigan: [00:18:13]  Right, right. 

Dave Bittner: [00:18:14]  The only food ads that were showing up in Twitter were foods that I was tracking in my weight tracking app. So could be a coincidence. May very well be a coincidence. But I don't trust that it is a coincidence anymore (laughter). 

Joe Carrigan: [00:18:27]  Right. Because you have to enter that food data into that app, right? 

Dave Bittner: [00:18:29]  Right, right. Exactly. 

Joe Carrigan: [00:18:30]  So you know they have the information. 

Dave Bittner: [00:18:32]  Exactly. 

Joe Carrigan: [00:18:32]  Which is more likely - that the ad engine is so good that which - actually, it's pretty likely, but (laughter)... 

Dave Bittner: [00:18:37]  Yeah. Well, that's the thing, right? 

Joe Carrigan: [00:18:38]  Or that they just took your information and said, Dave likes to eat... 

Dave Bittner: [00:18:42]  Eggos. 

Joe Carrigan: [00:18:42]  Eggos, right? 

Dave Bittner: [00:18:43]  Eggo waffles. Yeah, that's what it was - Eggo waffles. 

Joe Carrigan: [00:18:45]  Really? 

Dave Bittner: [00:18:45]  Yeah, Eggo waffles. Big hit in my family. Yeah, yeah. So I - just it's this erosion of trust that I find troubling. And hopefully, we will evolve past this and come up with some system where we can feel like we can trust these apps and these devices again. Joe Carrigan, thanks for joining us. 

Joe Carrigan: [00:19:03]  It's my pleasure, Dave. 

Dave Bittner: [00:19:08]  And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor ObserveIT, the leading insider threat management platform. Learn more at observeit.com. 

Dave Bittner: [00:19:21]  The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our amazing CyberWire team is Stefan Vaziri, Tamika Smith, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Nick Veliky, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you tomorrow.

