Every year CA Veracode publishes its “State of Software Security” report, chock-full of research about how well software is being secured during development and beyond. It’s eye-opening stuff and well worth a read for anyone even remotely interested in making software more secure.
In this DevOps Chat, Chris Eng, VP of research at Veracode, recently discussed with me this year’s report findings and some takeaways that are worthy of noting.
As usual, the streaming audio is immediately below, followed by the transcript of our conversation.
Transcript
Alan Shimel: Hey, everyone, it’s Alan Shimel, DevOps.com, and you’re listening to another DevOps Chat. Joining us on this episode of DevOps Chat is Chris Eng of Veracode. Now I’m proud to say, actually, that I know Chris, actually, before there even was a Veracode. But – so it goes back a while. Chris, I don’t remember your exact title, but don’t you head up research at Veracode?
Chris Eng: That’s right, yeah. VP of research at Veracode.
Shimel: Yep. Welcome and good to have you back on, Chris.
Eng: Good to be here.
Shimel: Yep. So, Chris, you know, I’m gonna assume, if people wanna know who you are, it’s Chris Eng – E-N-G. You could look him up. He’s a bona fide cybersecurity warrior, with a long track record of accolades and just being a great guy in the community. On top of that, though, I’m gonna assume people know that Veracode, for the last couple years, has been putting out a report called “The State of Software Security.” In fact, this is volume nine, which means there were eight before this one, right, Chris?
Eng: That is correct. [Chuckles] We do this every year and it gets bigger and better every year.
Shimel: Yep. So volume nine, you know, it’s CA Technologies Veracode now, of course. Some changes. And, Chris, it’s kind of your baby, right? You shepherd this report every year, no?
Eng: I’ve been involved in it since the first year we did it. I mean, certainly, going back to the early years, there were a lot fewer of us sharing the burden and doing all of the writing ourselves. And now, thankfully, we can spread that out among more people and get more eyes on it. And that’s actually one of the things that we did this year that was kinda different, is, in addition to ourselves, we worked with some data scientists, over at Cyentia Institute, to actually help us dig into some of the things we really wanted to probe, around remediation work and looking at different organization behaviors. And I’ll get into that in a little bit.
Shimel: Absolutely. Chris, before we dive into maybe the specifics of this year, let’s just take a minute and talk about, look, nine years running, my god, how the world has changed in nine years. Right? I was talking to the CIO of Box, Paul Chapman, earlier today, and we were just talking about how the world’s changed, let’s say, in the last 20 years but even in the last 9 years. I mean, you know, security’s still security, and, unfortunately, we still have incidents; we still have vulnerable software; we still have buggy code and trouble closing up vulnerabilities. But the game has changed, too, hasn’t it, Chris? When you look back, what do you think some of the biggest changes are?
Eng: Yeah, it’s definitely undergone a huge evolution. When we started Veracode in 2006, application testing primarily was companies doing penetration tests, maybe once a year, against their most critical applications. And, if anything changed between those, you know, in that one year, you kinda only had a point-in-time assessment. You had a lot of applications, a lot of pieces of software, that were just never being tested at all. And, since then, companies have realized that, number one, they need to test more frequently. That’s the only way that they’re able to release software that is going to be secure and also be able to drive the speed that they want to release new features and new products. Right?
The other, I think, realization is that you can’t just focus on the most critical applications to the business because attackers don’t really care how they get in, right? If they find a weaker way to – an easier way to get in, through some weaker application that’s not maybe core to the business but is still connected to the same infrastructure, well, they’re gonna go after that. So there’s been a realization that it’s important to understand everything that you have and to actually build software testing and security testing into all stages of the life cycle, right? Fix early and it’ll be a lot cheaper than fixing later on.
And, you know, we were looking at some of our stats the other day – this isn’t in the report, but performing the first million application scans for our customers took 10 years; the second million scans took one year.
Shimel: Wow.
Eng: Right? So that huge rate of increase is just showing that, across the board, across all industries, company sizes, organizations are realizing that they have to scale their software security programs to cover all the bases. And they have to do it continuously, in order to not rack up security debt. So I think that would be the biggest thing that’s changed in how we approach the problem.
Shimel: I agree. I agree. But, Chris, really, our time’s limited. Let’s jump into this year’s report. I’ve had a quick look through it, the executive summary and some of the other stuff. Boil it down for our listeners. What do you think are the three biggest kinda metrics or facts that jump out here, that they should be aware of?
Eng: Yeah. Let me give you just a quick –
Shimel: Sure.
Eng: – review on – so, like I said, we do this every year. And we take about 12 months of data, so this is across all of the assessments that we do for all of our customers, and it’s anonymized down, of course, and then we look at trends and try and see what’s going on out there, in terms of application security. This data set is about 700,000 application assessments during a 12-month period, and the one big thing that we’ve been able to do this time that we weren’t able to do in the past is actually look at how long flaws are sticking around once they’re discovered. Right? How long is it taking organizations to fix these flaws?
And so, when we detect something, we’re able to kinda track it across multiple assessments, so let’s say we scan an application today, we find a bunch of flaws, we fix some of them and not others, and then we scan it again in a week. We’re able to identify that certain of those flaws, the ones that we detect the second time, are the same as they were the first time. And so that allows us to actually track the life cycle of each individual flaw until it’s fixed, right? And so we’re able to look at “What does that flaw persistence look like?” Kind of a survivability analysis.
And, if you look overall, just across all the flaws that we look at, the numbers are not great. It takes 21 days on average to close 25 percent of the findings, 121 days to get to the halfway point, and 472 days to get to the 75 percent point. So that’s a long time, right? That’s over a year, so it’s not a very rosy picture overall. And so we broke this down and, you know, once the report is out, people can kind of look at “What does that look like in terms of different severities, different industries? What kind of patterns do we see there?” But I think the most interesting one, and the one that’ll be most interesting to you, is around the effect of DevOps or DevSecOps on fix velocity.
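The flaw-persistence analysis Eng describes – tracking each flaw from first detection to fix, then asking how long it takes to close 25, 50, and 75 percent of findings – can be sketched roughly like this. This is a minimal illustration, not Veracode’s actual methodology; the flaw records and fingerprints are entirely hypothetical, chosen so the quartiles match the numbers quoted above:

```python
from datetime import date

# Hypothetical flaw records: each flaw is matched across scans by a stable
# fingerprint, with the date it was first detected and the date it was fixed
# (i.e., the scan after which it no longer appeared).
flaws = [
    {"fingerprint": "sqli-login-42",  "first_seen": date(2018, 1, 1), "fixed": date(2018, 1, 22)},
    {"fingerprint": "xss-search-7",   "first_seen": date(2018, 1, 1), "fixed": date(2018, 5, 2)},
    {"fingerprint": "crypto-weak-3",  "first_seen": date(2018, 1, 1), "fixed": date(2019, 4, 18)},
    {"fingerprint": "xss-profile-9",  "first_seen": date(2018, 1, 1), "fixed": date(2019, 8, 24)},
]

def persistence_quartiles(flaws):
    """Days until 25%, 50%, and 75% of flaws are closed (nearest-rank percentile)."""
    days = sorted((f["fixed"] - f["first_seen"]).days for f in flaws)
    def pct(p):
        # nearest-rank: smallest close time by which p% of flaws are fixed
        idx = max(0, -(-len(days) * p // 100) - 1)  # ceil(n * p / 100) - 1
        return days[idx]
    return pct(25), pct(50), pct(75)

q25, q50, q75 = persistence_quartiles(flaws)
print(f"25% closed within {q25} days, 50% within {q50}, 75% within {q75}")
# → 25% closed within 21 days, 50% within 121, 75% within 472
```

The key design point is the stable fingerprint: because the same flaw can be recognized across consecutive scans, each finding gets a lifetime rather than just a point-in-time count, which is what makes this survivability-style view possible.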
Now we don’t have an easy way of saying, for any given application, “Is this a DevOps shop or not?” Right? We can’t – we don’t have that information, so we have to figure out whether or not we think they are, and the proxy that we’ve been using for that is scan frequency, right? If you’re doing CI/CD, you’ve got your scanning baked into your build process or you’ve got it, some way, baked into your automation. And so, if you’re doing a nightly build, for example, you’re doing a test every night. And so we looked at scan frequency to identify that – you know, we think that probably these applications that are being scanned 300-plus times a year, those are probably DevOps shops, right? And the ones that are scanning one to three times a year, probably not.
And so we’ve got this broken down into buckets. If you have this in front of you, you wanna jump over to page 39 – it’s Figure 43 – and this is what I really want to – this is sort of a – our hypothesis was like, “Yeah, I bet DevOps shops fix faster because they’re scanning more frequently and they don’t wanna rack up as much security debt.” And, in fact, it does show that. Right? We see this one line on the report that says, “Applications that are scanned 300 or more times a year are fixing – they’re getting to the halfway point of fixing in five days and they’re closing 75 percent of flaws within seven days.” Right? So that’s a huge amount better than those averages that I just described to you.
And we have this broken down by bucket, right? So you have the 300-plus scans per year, you have 51 to 299, all the way down to one to three scans per year, where it takes close to four years for them to get to the 75 percent close mark. So I think that’s actually – this is what we hoped to see. This is what we expected to see. But it’s nice to see the data prove out that, in fact, yes, when you scan often, when you scan early and you stay on top of the findings, this is actually a very doable thing, right? To fix flaws in a reasonable time frame.
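The scan-frequency proxy Eng describes – bucketing applications by scans per year to approximate which are likely DevOps shops – could be sketched like this. The 300+ and 1–3 boundaries come from the conversation above; the intermediate buckets, the helper name, and the sample data are hypothetical and for illustration only:

```python
# Hypothetical scan counts per application over a 12-month window.
scans_per_year = {"app-payments": 412, "app-intranet": 2, "app-portal": 87, "app-legacy": 12}

def frequency_bucket(n):
    """Map an annual scan count to a frequency bucket.
    Very frequent scanning (300+) is used as a rough proxy for a
    CI/CD-automated (DevOps) shop; 1-3 scans suggests annual,
    point-in-time testing. Intermediate cutoffs are illustrative guesses."""
    if n >= 300:
        return "300+ (likely DevOps/CI-CD)"
    elif n >= 51:
        return "51-299"
    elif n >= 13:
        return "13-50"
    elif n >= 4:
        return "4-12"
    else:
        return "1-3 (likely annual point-in-time)"

for app, n in scans_per_year.items():
    print(f"{app}: {frequency_bucket(n)}")
```

Because the data set doesn’t label anyone as “doing DevOps,” scan frequency stands in for it: the report’s finding is then a correlation between the top bucket and dramatically faster fix times, not a direct measurement of DevOps adoption.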
Shimel: Absolutely. Two things there, Chris. No. 1, I would think hand in hand with the DevOps is automation.
Eng: Yeah.
Shimel: Right? I’m wondering if part of it is that, in DevOps-enabled cultures and organizations, organizational cultures, whatever – companies doing DevOps – are they automating the scan process and that’s why we’re seeing them doing 300-plus scans? Right? And then they’re deploying more and, if their MO, if their standard process is “Nothing gets deployed without first being scanned,” right, that’s where you start racking up 300-plus scans. Right? I mean, when you –
Eng: Oh, yeah. Yeah, we’re –
Shimel: – build that into your pipeline, right, you build that into your CD process, those scans get done.
Eng: Yeah, we’re assuming that these – you know, we’ve gotten – I think, at the maximum point, there’s one app that was scanned over 1,000 times and, certainly, that’s – I mean, I really hope that’s not a human uploading something or pressing –
[Crosstalk]
Shimel: Right, _____ – hitting the “Scan” button –
Eng: – a button every time, right?
Shimel: – every time. You would hope not, right?
Eng: Right. Right. I’m assuming that even the ones that are scanned 50 times a year, right, which I assume to be every week, I assume that those are also –
[Crosstalk]
Shimel: Are still better than once _____ –
Eng: – _____ _____ through a build pipeline. Right? And, any time you can – I mean, and that’s been a big focus for us, just in general, is “How do we remove any human effort required to do the testing? How do you make it easier and more transparent and just remove any barriers to doing that?” And so –
Shimel: No, heck, how do you – you gotta build it into your CI/CD pipeline. I mean, it has to be part of that. The same way you breathe air, right? If you stop breathing, you die; you don’t deploy software that’s not been tested and scanned. And –
Eng: Right. Right.
Shimel: You know, it sounds so simple, us talking about it, Chris, but here’s the fact, right? And not everyone listening to this in our audience has been in security as long as you have or even I have. And a lot of people see the reports, Chris, and they say, “Oh, my God, this is year nine,” and I forgot how many years Verizon has been doing their report, and, every year, it’s hard to show progress in some things, right? We do show “Hey, if you scan more, you’re gonna have fewer vulnerabilities go out the door that aren’t fixed, and the ones that do – they don’t stay – you know, they get fixed faster.”
Eng: Yeah.
Shimel: But the fact is we’re still seeing – you look at the executive summary here, Chris, and it’s not a rosy picture, as you – I think those were your own words. And we still read about breaches every day and 30 million records here and 20 million here. You know, the non-security person says, “What are we gonna do? When does this fundamentally change?”
Eng: Yeah, I mean, things don’t get solved overnight, right? I mean, if companies have only realized a few years ago that they need to kinda get a handle on all the things that they have and start understanding what their risk posture is, and they’ve got 10, 20, 25 years’ worth of software, thousands of applications, it doesn’t happen overnight; it’s baby steps. Certainly, starting any new greenfield projects with a proper SDLC that’s identifying and fixing issues early is gonna prevent them from building up even more security debt.
But, yeah, a lot of these – the way I explain some of these things – for example, pass rates on OWASP Top 10 not getting a lot better or the prevalence of certain flaw categories, like cross-site scripting and SQL injection, staying relatively constant – is that some of the more mature apps that have been scanned for a while now, those are actually decreasing. If we sliced those separately, we would see the rate going down. But, when we look at it in aggregate, out of those 700,000 applications that are part of this data set, a lot of those are things that are just being scanned for the very first time ever. And so they’re gonna have a higher prevalence of some of these issues, that the more mature applications will have gotten rid of already. Right? And so they kind of balance each other out.
And we see this every time. In fact, we should probably start to report on some of those things separately – it’s kind of a difficult thing to do – but I think what’s happening is some of the newer applications are bringing those numbers up and that’s why we see the lines staying relatively flat. There’s a long way to go, but we do see – you know, and things like the DevOps remediation case I just described. I mean, we see little points of optimism, where you can see a clear correlation between a certain behavior and a certain outcome. And so we like to focus on those things, and, yeah, it’s gonna take a while for most companies to really start to chip away at the legacy applications and getting those locked down.
Shimel: Absolutely. Absolutely. Chris, we’re just about out of time. I should – we should put this out there: for people who are interested in really kinda digging in on the report, where can they get it from? Go to Veracode.com?
Eng: It’ll be on Veracode.com, starting on October 24th, and, yeah, they can download it there. There’s a huge amount in here. We break it down into different flaw categories. We look at different industries against each other. There’s even a little geographic breakdown and then, of course, like I mentioned, a huge amount of data around flaw persistence and how long it takes organizations to fix flaws, which, I mean, ultimately, is what’s important, right? Doesn’t matter how often you scan if you don’t fix stuff. Right? You don’t reduce your risk unless you fix stuff. So there’s a lot of great data on that, that we’ve never been able to report out before, so we’re really happy to get those out there.
Shimel: Absolutely. Well, Chris, you know what? I know I got a little hard on you about the – and it’s not your fault that the security industry – and it’s not the security industry ’cause, God knows, we wanna fix the issues and the problems that cause these breaches and things; it’s really just the state of software and where security lies as a priority in a lot of these organizations and managing their risk and so forth.
But, by the same token, Chris, you know what? First of all, thank you, thank you, thank you, for being involved enough to do this report every year, and, whether the metrics paint a rosy picture or a sad picture, nevertheless, it’s a picture that needs to be told, right? We need this kind of insight; we need the metrics to understand where we are and where we need to be. And so great job, once again, in putting this together. You know, I think it’s as important for non-security people to read these reports as it is for security people. Right? To really –
Eng: Yeah, we hope –
Shimel: To –
Eng: Yeah, we definitely hope people will get something out of it and, hopefully, kind of use that to think about how they are building their own programs, right – how they’re remediating, what do their numbers look like – and, hopefully, kind of informs how different organizations think about scaling up their programs. That’s what we’re hoping, anyway.
Shimel: Good. All right. Chris, let’s call it a wrap on this; it’s a Friday afternoon here. And, again, thanks for your insight on the report. Thanks for managing this report year in, year out and being involved with it and helping. And we’ll see you soon. We’re coming into conference season; I’m sure you’ll be busy presenting and getting more –
Eng: I’m sure, yeah. I’m sure we’ll be bumping into each other pretty soon.
Shimel: Absolutely. Chris Eng, VP of research at CA Technologies Veracode, our guest on this episode of DevOps Chat. This is Alan Shimel, everyone. Have a great day.