DevOps Chats

DevOps Chat: Is the Software We Create More Secure? Veracode’s 10th Report

Application security is top of mind now more than ever. For more than a decade, Veracode has examined increasing amounts of code as it passes through its source code vulnerability scanning service. During this period, automation has become increasingly prevalent, making it easier to run scans more frequently and regularly. But has automation helped? Is the software we create more secure? We gain key insights into these questions from Veracode’s The State of Software Security Report X (10th edition).

Chris Eng, chief research officer at Veracode, joined us on DevOps Chat. We talked about many of the insights uncovered in the latest report, such as the findings that 50% of applications are accruing security debt over time, that the regularity of scanning correlates with vulnerability fix times and that scanning frequency directly impacts security debt.

There is a wealth of information in the report, and you can get a jump on the key findings on this podcast episode with Chris.

Transcript

Mitch Ashley: Hi, everyone. This is Mitch Ashley with DevOps.com, and you’re listening to another DevOps Chat podcast. Today, I’m joined by Chris Eng, who is chief research officer at Veracode. And we are talking about their latest report, the Veracode State of Software Security. This is volume 10. Chris, welcome to DevOps Chat.

Chris Eng: Thanks, Mitch.

Ashley: Great to have you on. Why don’t we start out by having you tell us a little bit about yourself, what you do at Veracode, and for those that don’t know Veracode, just a little bit about Veracode?

Eng: So, as mentioned, I’m chief research officer here at Veracode. I’ve been here for 13 years now, and my teams are responsible, essentially, for building the knowledge that goes into our products.

So, we have a number of different products that help our customers identify vulnerabilities in their software. And my team essentially identifies—well, what are the patterns that we’re looking for? What are the common mistakes that developers are going to make? What are the important things that we should scan for in each language? And we specify to our engineering teams how we do that.

I also own product security for Veracode, so making sure that we deliver secure products to our customers. Our customers are basically anybody that builds, buys or deploys software. And so, we help them build programs around securing that software at scale, using a number of different technologies, including static analysis, dynamic analysis, software composition and so on.

So, we work with anybody that deals with software, which is pretty much everybody.

Ashley: [Laughter] Just about everybody these days with digital transformation, no doubt. Thank you very much for that background on yourself as well as Veracode.

Well, let’s start out with this: you’ve been doing this report for 10 years, and with your longevity at the company, you’ve been able to oversee and be part of it over that whole decade. What kinds of things have you learned since you started out? You know, from the number of applications you were scanning then to what’s happening today, and any trends that you’ve noticed along the way?

Eng: Yeah, it’s been a great process to do this every year, primarily because we’re in this unique position where we actually can see what’s going on out there in the industry. When we started doing this in volume one, we only had about 1,500 applications in the data set, and then fast forward to this year, we had 85,000 unique applications.

Ashley: Wow.

Eng: So, 50 times more than we had before. And I’m not aware of any other study that has quantitative information on software security that’s anywhere close to as big as this one.

So, in addition to the size of the data set getting bigger, we looked at fix times, and we found that those have roughly stayed the same, which is sort of a little bit depressing—but that being said, there’s a lot more software now than there was before. So, there’s a lot more to scan, there’s a lot more to secure, and so, you only have so much bandwidth to do this stuff.

So, we saw fix times stay roughly the same. We did see that, over that 10-year period, there are fewer apps that have no flaws at all. That’s more a factor of our capabilities than anything else; we can detect more than we could before. But we did also see that there are fewer apps these days with high severity flaws. So, customers are getting better at identifying and remediating high severity flaws.

We looked at a number of different angles, including compliance trends and things like that, but for the most part, I think that the fix times is kinda the interesting part, and we dug a lot more into that and explored some of the factors that lead into fix times.

Ashley: Well, if you just think about what’s changed over a decade—I mean, 10 years ago, we were thinking about SQL injection, cross site scripting. You know, those were the kinds of app security flaws a lot of the focus was on, at least. Things have changed drastically since then. [Laughter]

Eng: Yeah, we are still seeing the vast majority of the same flaw categories that we saw 10 years ago. And I don’t think anybody who does app sec on a daily basis would be too surprised by that.

Ashley: Mm-hmm.

Eng: These things are really easy to fix technically, but when there’s so much of it and it spans across such a huge application inventory, it does take a lot of effort to close these. So, we still see SQL injection at roughly the same rate that we saw it 10 years ago. Other categories, like cross site scripting, have actually gotten more prevalent.

And we’ve seen some go down. We’ve seen, like, buffer overflows and numeric errors reduce in prevalence. And part of that, I think, is due to the change in programming languages as well, right? A lot of that is stuff that’s prevalent in native code like C++, and we’re seeing fewer applications being written in those languages today. So, I think that’s contributing to the prevalence differences that we’re seeing.

Ashley: I wonder, too, if service oriented architectures, microservices, things that promote reuse of code maybe help some of those things. Maybe they won’t—we’ll see. [Laughter]

Eng: Yeah, it certainly could. I mean, any time there’s reuse, you know, you inherit the functionality, but you also inherit the risk, and that’s sort of the trend we’ve seen as people start using a lot more open source than before. Again, you get the functionality, but you also get the risk, and so you introduce new vulnerabilities to your applications that way.

So, certainly, with code reuse, like you mentioned, you could get that effect as well.

Ashley: Mm-hmm, potentially even infrastructure as code. There were some really great things that came out of the report. I’d love to have you talk about some of the applications incurring debt, how many folks are driving that down—you know, what kind of effects that had.

Eng: Yeah, we took a look at security debt for the first time in this report. And, you know, most people have a concept of technical debt—just, you know, things that get old and crusty in your software over time, maybe architecturally, or things that you meant to go back and fix but you never did, and security flaws kinda have the same tendency to build up over time.

If you think about it like financial debt, right, if you charge something on your credit card and then you only pay the minimum amount every month, you’re gonna be paying a lot, for a long time. And when you add that up, it’s gonna be a lot more than it would’ve just been to pay off your balance right when you accrued it, right?

And so, security debt, when we look at security debt from that angle, we look at, are applications kind of accumulating new security debt over time, or are they driving it down? And when we look at the number of flaws that are kinda left unfixed in an application, we kinda view that as the security debt, right?

And, you know, there’s only a certain amount of capacity, right? It’s very difficult for an application that, let’s say, has been accumulating security issues over many years to just say, like, “hey, we’re gonna pay that all down today,” right? You’d have to dedicate a lot of engineers to doing that, you’d have to really focus all your efforts to do that. And so, it’s not typically very practical to do that all at once, but it is important to kinda be measuring whether or not you’re fixing more than you find or finding more than you fix, because that’s kinda directional, right?

And so, we found that nearly half of the applications are finding more than they fix. And so, essentially, they’re accruing that debt, so they’re never gonna climb out from underneath that unless they start to reverse course there, right? Put more effort into fixing issues so that they, over time, reduce that debt and get to a point where—okay, now there’s nothing kinda outstanding, and you can then make a rule that says, you know, “we’re not gonna allow this application to be promoted into production or whatever if we accrue new findings.”
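To make that “fix more than you find” direction concrete, here is a minimal Python sketch of the kind of check Chris describes. It assumes you can export, per scan period, how many findings were opened and how many were closed for an application; the data structure and the sample numbers are hypothetical.

```python
# Minimal sketch: is an application accruing security debt or paying it down?
# Assumes you can export per-period counts of findings opened and closed;
# the ScanPeriod structure and the sample numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class ScanPeriod:
    found: int  # new findings opened in this period
    fixed: int  # findings closed in this period

def security_debt_trend(periods: list[ScanPeriod]) -> list[int]:
    """Outstanding (unfixed) findings after each period."""
    debt, trend = 0, []
    for p in periods:
        debt = max(debt + p.found - p.fixed, 0)
        trend.append(debt)
    return trend

history = [ScanPeriod(found=12, fixed=3), ScanPeriod(found=8, fixed=5), ScanPeriod(found=6, fixed=11)]
trend = security_debt_trend(history)
print(trend)  # [9, 12, 7]
print("accruing debt" if trend[-1] > trend[0] else "paying it down")
```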

Ashley: Mm-hmm.

Eng: But that’s rare, right? So, barely anybody is there yet. And so, we looked at how different DevOps tendencies or methodologies, practices affect security debt. So, I think those are some of the findings that I’d like to talk about a little bit.

Ashley: Sure, let’s go to that, because I’m very curious about, as we shift left, right, we’re trying to get security in earlier—

Eng: Exactly.

Ashley: [Cross talk] earlier, all of those things, hopefully you fix it before it ever goes into a QA function or automated testing.

Eng: Right, there’s all those studies to kind of show that it costs a lot less, right, when you fix it earlier. And that makes sense—if you’re fixing a flaw as a developer is writing the code, if you can sit there and tell them what they did wrong before they even check that code into the repository, it’s gonna be a lot cheaper than if you find it in a penetration test, let’s say, after the code has been deployed and then you’ve gotta go figure out the root cause and then you’ve gotta find a developer that understands that code and get it in their backlog. So, that’s kind of an accepted understanding that it costs more, the later you do it.

Ashley: That’s after you get done pointing fingers on whose code is it in or is it in the network or the servers or the—you know?

Eng: Exactly, exactly—then it’s kinda tracing it down. And so, we wanted to really say, like, does DevOps make a difference in how quickly we can fix things and how much security debt we accrue?

And so, sitting from where we are, remember that customers submit their applications to us, and DevOps is not just automation; it’s also culture and process and a lot of things that kind of feed into whether you’re doing DevOps or not. But from our vantage point, we can’t see culture, we can’t see process, we can’t see automation. We can’t see to what extent automation has been incorporated into an application’s testing cycle, for example.

We use scan frequency as a proxy for whether an organization is using DevOps. So, by that, I mean, how many scans do they run per year on an application? And so, you have a lot of applications that are scanning literally once per year, right—36% once a year.

Ashley: Wow.

Eng: And if you think about how quickly software is changing and how many features are getting added every week or every month, once a year is—that’s not very good. On the other end of the spectrum, you have about 0.3% of apps that are scanning basically every day or more. So, you know that that’s probably being done by automation. You don’t have a person sitting there submitting the application, hopefully.

And so, if you look at—there’s some charts in the report that kind of show what that distribution is, but essentially, if you scan once per month compared to if you scan every day, your median time to remediation is significantly faster. So, 19 days median time to remediation for a flaw if you’re scanning daily, versus 68 days if you’re scanning monthly or less.

And so, we then kind of broke it down into even more buckets, right? So, we looked at what one to three scans per year looks like, four to six, seven to 12 and so on. And it’s kinda hard to describe this just via audio, [Laughter] but if you pull out the report, you’ll see all these kind of iceberg charts. You’ll see pink and blue, and the pink represents the debt. It’s kind of below the line, and it shows the number of findings per app that are outstanding.

And what you’ll see is that, for the apps that are scanned more frequently, less debt accumulates. Essentially, the teams were able to get after that debt faster, and while it doesn’t go away completely—again, because this is aggregating all of the apps we have—it accumulates less. And so, we see a direct correlation between that scan frequency and both the fix time and the security debt accumulated.
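For readers who want to reproduce that kind of comparison on their own data, here is a rough sketch of the bucketing idea: group closed findings by how often their application is scanned per year, then compare the median time to remediation per bucket. The data layout, bucket boundaries and toy numbers are hypothetical; the 19-day and 68-day figures above come from Veracode’s much larger data set.

```python
# Rough sketch of a scan-frequency comparison over hypothetical data.
from statistics import median

# (scans_per_year, days_to_remediate) for individual closed findings -- toy data
findings = [
    (365, 12), (365, 25), (300, 20),  # apps scanned roughly daily
    (12, 70), (12, 55),               # apps scanned roughly monthly
    (3, 120), (1, 200),               # apps scanned a few times a year or less
]

def bucket(scans_per_year: int) -> str:
    if scans_per_year >= 260:
        return "daily"
    if scans_per_year >= 12:
        return "monthly"
    return "yearly-ish"

by_bucket: dict[str, list[int]] = {}
for scans, days in findings:
    by_bucket.setdefault(bucket(scans), []).append(days)

for name, days in by_bucket.items():
    print(f"{name}: median {median(days)} days to remediation")
```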

Ashley: It certainly makes sense automating that. That’s gonna show up in your numbers around the 19 days to fix, scanning once a day. Is there a way to tell if it’s happening? Do you see scanning happening more frequently than on a daily basis? Is that helpful, or are there other things that help—again, help this shift left?

Eng: You know, I think once you get to a daily basis, you’re probably—I think there’s gonna be some diminishing returns after that. I mean, imagine you did a full scan every time somebody checked something in. You’d just be getting a lot of information—you wouldn’t be able to act on that quicker than, you know, I think the span of a day or a few hours. So, scanning every few minutes really wouldn’t buy you anything.

Ashley: Mm-hmm.

Eng: So, another thing that we looked at that we thought was interesting was scan cadence. So, not so much how frequently you’re scanning, but how regularly you’re scanning. And so, you can imagine—and again, there’s this great diagram, it’s my favorite diagram in the report, actually, and it’s just a bunch of dots on a chart. But what it does is map out the scans so that every dot represents a scan.

And so, if an application is scanning on a very steady basis, you would see evenly spaced dots across the course of the year, whereas if you were scanning in kind of a bursty fashion—so, basically, a lot of activity followed by no activity, then you would see a clustering of dots followed by just a bunch of white space.

And so, we actually calculated that cadence for every single app in the data set, and then we grouped them into buckets, again, so of the ones that were either scanning on a steady basis or a bursty basis or something in the middle which we called irregular—and irregular is basically just a bunch of mini-bursts, right?
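There are many ways to turn that dots-on-a-timeline picture into a number. One plausible approximation, not necessarily the formula Veracode used, is the coefficient of variation of the gaps between scan dates: small variation suggests a steady cadence, large variation suggests bursts. The thresholds in this sketch are made up for illustration.

```python
# Plausible (not Veracode's published) way to classify scan cadence from scan dates.
from datetime import date
from statistics import mean, pstdev

def cadence(scan_dates: list[date]) -> str:
    """Classify a set of scan dates as steady, irregular or bursty."""
    dates = sorted(scan_dates)
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    if len(gaps) < 2:
        return "too few scans to tell"
    cv = pstdev(gaps) / mean(gaps)  # coefficient of variation of inter-scan gaps
    if cv < 0.5:
        return "steady"
    if cv < 1.5:
        return "irregular"
    return "bursty"

steady_year = [date(2019, month, 1) for month in range(1, 13)]  # one scan per month
bursty_year = [date(2019, 1, day) for day in range(1, 7)] + [date(2019, 12, day) for day in range(1, 7)]
print(cadence(steady_year), cadence(bursty_year))  # steady bursty
```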

Ashley: Mm-hmm.

Eng: So, what we wanted to answer was, how does that scan cadence affect security debt? Are you less likely to accumulate debt if you’re scanning steadily or in a bursty fashion, alright? Which one’s gonna be more effective at wiping that out?

Because you could see it going either way, right? You could set it, you could just—people could get used to seeing results and then they get to the point where they ignore them, maybe. Whereas bursty, you’re like, “okay, well, I’m paying a lot of attention to this. We’re gonna focus all our efforts on this” and then, you know, drive it down.

So, there was actually a real question there. We didn’t know what we were going to see. But when we did break it out, we found that when you do the bursty scanning, with all that white space and then just a flurry of activity every so often, the security debt that accumulates is just massive. You see this huge increase in the amount of pink, which represents the findings that are not addressed, whereas with the steady and, to some extent, even the irregular scanning, you see the security debt grow a little bit, but then start to decrease. And so, the curve is actually going in the right direction towards the end of the time frame that we’re able to chart out. And so, that was another good finding for us.

So, what we can do is take those conclusions and advise our customers and, really, anybody that’s building software that, if you want to reduce security debt, the data suggests that you should scan frequently and that you should also scan on a steady basis. You shouldn’t ever let up, right? It has to become a habit—and that makes sense, right? Anything that we make a habit of tends to just become part of the way that we do things. And so, it was nice to have data showing that what we thought would be true actually does turn out to be true.

Ashley: It’s kinda like, brush your teeth daily, right, not the day before you go to the dentist. [Laughter]

Eng: Yeah, exactly—that’s not gonna do you much good.

Ashley: [Cross talk] [Laughter] You know, what also occurs to me is, with this bursty style, there’s really no predictability around how long a security flaw is gonna exist in your code, because you may find it and fix it now, but if it’s another three months or whatever before you scan again, who knows when it gets fixed? It could live out there for a long time.

Eng: Right. We actually did find—it’s funny that you mention that—that the highest probability for a flaw to get fixed was in the first month or so. And so, the longer you wait, the less likely a given flaw is to get fixed.

And so, essentially, developers are prioritizing kind of, like, last in, first out. They’re more likely to prioritize something to get fixed if it’s kind of fresh. And that’s not what we wanna see, right? We wanna see developers fix things that are more important. We wanna see them fix more severe items or items that are in applications that are more critical to the business or items that are more exploitable than others.

But when we measured all of those and we looked for patterns that would suggest that they are prioritizing in a way that’s sensible to a security practitioner, we found that recency, like, how recently it was found, was really the highest correlation in terms of whether something was going to get fixed.
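As a hypothetical illustration of the gap Chris is describing, the sketch below contrasts a recency-first (“last in, first out”) ordering of a findings backlog with a risk-based ordering that weighs severity, business criticality and exploitability. The field names and sample findings are invented for the example.

```python
# Hypothetical illustration: recency-first vs. risk-based triage of a findings backlog.
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    name: str
    severity: int         # 1 (low) .. 5 (very high)
    app_criticality: int  # 1 .. 5, how important the app is to the business
    exploitable: bool
    discovered: date

backlog = [
    Finding("XSS in search page", 3, 2, True, date(2019, 11, 1)),
    Finding("SQL injection in login", 5, 5, True, date(2018, 4, 15)),
    Finding("Verbose error message", 2, 1, False, date(2019, 11, 20)),
]

# "Last in, first out" -- roughly what the data says developers tend to do:
by_recency = sorted(backlog, key=lambda f: f.discovered, reverse=True)

# Risk-based ordering a security practitioner would rather see:
by_risk = sorted(backlog, key=lambda f: (f.severity, f.app_criticality, f.exploitable), reverse=True)

print(by_recency[0].name)  # Verbose error message
print(by_risk[0].name)     # SQL injection in login
```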

Ashley: Mm-hmm.

Eng: And so, we have to do a better job of prioritization, not just understanding what a security person would do, but getting a developer to adopt those same priorities.

Ashley: You’re obviously not measuring the human psyche element of this, but I have to believe that once you’ve built up such a mountain of that debt, at some point it gets too hard to grok, understand, fathom, and there’s probably even an abandonment rate that’s just like, “too big—let’s just deal with what’s on our plate right now, and here are the scan results,” and jump on it.

Eng: It does, and it’s not uncommon to see customers adopt that type of strategy: the idea that, alright, well, I’m starting this program now, on day zero. I’m responsible for appsec now. And, you know, anything that happened before I got here? Well, that’s not my problem. Let’s just focus on not introducing new flaws—which is fine. Like, not introducing new flaws is great, but that doesn’t help you with anything in the past.

Ashley: Right.

Eng: And ultimately, it’s all risk to the application, right? You can get attacked any number of these ways. But we have seen—we’ve seen big customers try to take that approach and, you know, it’s not advisable. You have to chip away at that debt. Even though you can’t do it all at once, maybe you work in security sprints into your life cycle, you find a way to pay it down over time, and eventually, you get to where you wanna be. But you cannot just ignore it and say, “No new flaws going forward.” It’s not gonna get you where you want as far as the debt.

And it costs more to fix stuff later, too, right? The longer we wait—you know, imagine you’ve got a library to patch and that library is, you know, one year out of date or five years out of date. It’s gonna cost you a lot more in terms of effort to fix the one that’s five years out of date. Things will inevitably break, so the cost of fixing also increases over time.

Ashley: There are hundreds of findings, if not more, [Laughter] in this report. Not to pick out just one, but one stat in there stood out to me: C++ carries three to five times more unresolved flaws than .NET over the same period of time. So, this speaks to the point that languages make a big difference. Why is that so?

Eng: Yeah, you know, we did break it down by language and just kinda took a look at how much security debt accrued into each one. And it’s not really to say that—okay, well, if you’re using C++, you should rewrite that in .NET or some other language. That’s not really practical for most people.

Ashley: Sure, yeah.

Eng: But what it does show is that certain languages are more likely to accumulate debt over time. And it’s important not to so much look at the raw number of flaws, because some languages just are inherently more secure against certain classes of flaws, right? It’s a lot harder to shoot yourself in the foot in certain languages than others.

Ashley: Mm-hmm.

Eng: But in looking at the shape of the curve, like, does security debt tend to increase for certain languages? That’s kind of—that’s something to look at and at least be aware of, if you have certain parts of your application inventory written in, you know, PHP, for example, you should be aware that those apps are probably gonna be more likely to accumulate debt than, say, the .NET ones.

Ashley: Mm-hmm.

Eng: And so, it’s something to take into account as you’re doing your planning.

Ashley: Okay. Very good. Well, we could talk about this for maybe days, [Laughter] this is so much information, here.

Eng: [Laughter]

Ashley: So, we don’t leave everyone with a mountain of, “oh, gosh, there’s so much in here to learn and understand,” are there two or three takeaways if you’re a developer, lead developer, development manager, architect sitting out there listening to this, going “okay, so what do I need to know?” What are the couple of takeaways you would suggest?

Eng: Yeah. You know, I’ll kinda reiterate some of the things that we talked about, but essentially, security automation, especially if we look at scan frequency, is definitely lagging the adoption of DevOps in general, right? DevOps has kinda just taken off like a rocket ship, and with security automation, like I mentioned, only 5% of the apps were being scanned weekly or better. And so, there’s some catching up to do there.

The good news is that, when you actually do that frequent and steady testing, you will probably get to a point where you can start chipping away at security debt and eventually, over time, drive that down so that you can take a strategy of, you know, no new flaws and keep yourself in a clean place.

And the last one: I think there’s a conversation that needs to happen between security teams and developers in terms of prioritization. You know, I talked about how developers are not prioritizing in, really, a security-appropriate manner, where recency appears to kind of outweigh every other factor. And so, there’s, I think, some improvement that most teams could make there where, even with the same amount of bandwidth to fix flaws, they could spend their time fixing the things that are most important to fix as opposed to the ones that just appeared most recently.

So, I think those are some of the takeaways, and like you said, there’s a lot in the report. It’s a pretty interesting read, and I would encourage people to go grab a copy and read through it.

Ashley: And very well done, if I do say so myself—very well put together, there, Chris. Where can folks get the report?

Eng: It’s on our website, so Veracode.com, and it should be on the front page there, and—yep, there’s about a 50 page PDF behind it.

Ashley: Okay, perfect. We’ll include a link in the description for this episode, too, so.

Eng: Excellent.

Ashley: Well, thank you so much. I appreciate you being on, Chris.

Eng: Yeah, my pleasure.

Ashley: It’s been great to have you. Again, thanking my guest today, Chris Eng, who is chief research officer at Veracode, and of course, thanking you, our listeners, for joining us today. We know your time is valuable and it’s a great topic—security is important and having this information, I think, is very valuable to all of us.

This is Mitch Ashley, with DevOps.com, and you’ve listened to another DevOps Chat podcast. Be careful out there.
