Perforce Product Manager Stuart Foster and Evangelist Steve Howard join Mitch Ashley to discuss the importance of creating secure software from the beginning of the development process. We discuss shift left, SAST, source code scanning and other pursuits toward this goal.
The video is immediately below, followed by the transcript of the conversation. Enjoy!
Mitch Ashley: I’m pleased to be joined by Stuart Foster, who’s product manager with Perforce, and also Steve Howard, who is static code analysis evangelist, both with Perforce. Welcome, gentlemen.
Stuart Foster: Hey, Mitch, how are you doing?
Ashley: Very good. How about if we—let’s start off with just a little bit of an overview of Perforce. Steve, why don’t you do that and also tell us a little bit about yourself, and then we’ll toss it over to you, Stuart, to give your introduction.
Steve Howard: Right, yeah. Well, here at Perforce, really, our role as we see it is to help developers write better code more efficiently and more quickly. So, these days, we tend to talk about DevOps and the whole idea of developer operations and how they put that code together, how it’s constructed, how it’s managed, how we control the processes around it as well, of course. There’s more and more regulation, it seems, these days in software development, and we’re trying to make that whole cycle, from writing code to delivering it, easier and more comfortable for the developers that use our products.
So, that’s really where Perforce sits, and we think of that as really enterprise DevOps, and doing it at scale in large-scale organizations.
So, my role, really, is specific to the static code analysis side, or static code security testing side, SAST tools, if you will, trying to make sure that that piece of actually writing the code itself is done to the highest possible standard from both the security and the quality perspective while, at the same time, getting maximum productivity from our development teams.
Ashley: Wonderful. Stuart, introduce yourself?
Foster: Yeah, absolutely. I’m Stuart Foster. I’m the Product Manager for Perforce’s Static Analysis products, both Klocwork and Helix QAC. So, I work with Steve, he’s kinda my partner in crime, amongst a number of other sales engineers and developers, and we’re just out to produce a really solid static application security testing tool.
Ashley: Excellent. Well, one of the great things about talking with you both is, since you have products in market, you talk to a lot of customers, especially in an evangelist role, too, Steve. It really helps us kinda get an idea of where the state of security is in applications, what people are doing today. The whole idea of shift left, I think, has evolved a great deal. I think when I first heard it, it was “move left in the workflow.” Then it became, “Well, move up when the security people get involved.” And you’ve really taken it much farther than that, to really thinking about how we start writing secure code.
Maybe one of you could kinda give us an idea of where you think we are as a community on getting to that point of when the code’s written, it’s written securely.
Foster: Yeah, so, I mean, from my perspective, you know, you kind of look historically at people using the term shift left and, you know, from our perspective, that would’ve been, you know, doing some sort of code analysis on a server and then moving that left in terms of the software development life cycle into the developer’s IDEs.
What you see nowadays is that, you know, historically, a lot of those responsibilities would’ve been siloed. You’d have a security team, a QA team, various other teams that, in many ways, because they’re siloed and in different stages of the process, you’d run into kind of bottlenecks. What you’re seeing a lot more with the concept of DevOps, DevSecOps, a lot of development teams themselves are becoming owners completely of the code that they write. And what I mean by owners is that, you know, they are writing unit tests, they’re writing more functional tests, they are ensuring that it’s part of their responsibility that the code is secure.
So, that’s a shift left, but then you have to think how developers work—they’re working in IDEs, they have their own CI/CD pipelines, they may be using cloud services, various containers that are all getting spun up and broken down. So, it’s not just, to me, shift left any more, it’s analysis anywhere. It’s basically having something that can work anywhere in a workflow, right? And in this sense, you know, the Security teams still exist, the Quality Assurance teams still exist, but what it really is, is a culture of security collaboration that are coming together, right?
Steve, I think you may have a little bit of a perspective on this, too.
Ashley: Yeah, I’m interested in your from the field perspective, too.
Howard: [Laughter] Yeah, no, it’s—well, it’s exactly that. I think it’s a really good point that no, we’re not just talking about moving to the desktops, we are talking shifting left throughout the whole cycle. I think that is a key point. And it is really an opportunity using tools like ours in Static Analysis security testing, it’s about being able to enforce the requirements and the needs from that expert security team where there were bottlenecks previously, trying to put that into rules and regulations that we can then push down to the development tasks and processes that are to the left of that kinda final sign-off.
So, yeah, that—in that form, very much, I’d say it’s shift left, and it’s about that optimized process now where we can be doing that checking all of the time. So, it’s making sure we’re continuously compliant to those security requirements or quality requirements, as the case may be, yeah.
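The continuous checking Foster and Howard describe is typically wired into IDE plugins, pre-commit hooks and CI stages so the same rules run everywhere. As a rough illustration of the idea only (not how Klocwork or Helix QAC actually work; they perform far deeper, flow-sensitive analysis), here is a minimal, self-contained sketch of a local deny-list gate. The function name and the banned list are illustrative assumptions:

```python
import re

# Toy deny-list of C functions commonly flagged by SAST tools.
# Illustrative only; real analyzers do flow- and data-sensitive checks.
BANNED = {"gets", "strcpy", "sprintf"}

def find_banned_calls(source: str):
    """Return (line_number, function) pairs for calls to deny-listed functions."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for fn in BANNED:
            # \b ensures we match gets( but not strncpy( or widgets(
            if re.search(rf"\b{fn}\s*\(", line):
                hits.append((lineno, fn))
    return hits

code = 'char buf[8];\ngets(buf);\nstrncpy(buf, src, sizeof buf);\n'
print(find_banned_calls(code))  # -> [(2, 'gets')]
```

A gate like this would fail the commit (or CI stage) whenever the list is non-empty, which is the "continuously compliant" loop in miniature: the check runs on every change, not at a final sign-off.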
Ashley: You know, it’s interesting, we talk about the workflow and we think about the whole tool chain and pipeline, but if you really break it down into much more granular pieces, developers aren’t working on a linear piece of development, right? They’re in multiple places, jumping back and forth, solving bugs, adding features, developing new capability—whatever it might be. So, the ability to concentrate on any one aspect of an application, sometimes, is not even possible, right?
So, you need an environment where you’re aided in how you write and create secure code as well as think about writing secure code. You agree with that?
Howard: Yeah, absolutely. I think that’s the point. We’re trying to make sure that all of those pieces that they’re working on at different times and, you know, our colleagues in the Version Control System team know this very well, but obviously, there are tens or hundreds of branches running in parallel, as you’re sort of saying, and developers are jumping between, you know, fixing problems on the current release, maybe, and then carrying back on with their feature work for the next release as well.
And all this time, trying to manage what security requirements or quality requirements each one of those had would be a real nightmare without tooling and regulation around the whole process, which is a significant part of developer operations as a whole—the idea of automating a lot of that and building it into the system. So, I think it’s just enforcing that, making sure we’re adding all of the extra pieces that we can also include in those processes.
Foster: And also, if you think about those developers, too, they’re experts in their fields or experts on some of the functionality that they’re developing, but they’re not necessarily security experts as a security team might be, right? So, part of the trick, too, is to offer kinda help, knowledge, learning via the tool set so that, you know, the developers can keep the velocity up and develop to those standards more easily.
And I think Steve mentioned a little bit of a buzzword that kinda we like to think about is, it’s continuous compliance, right? It’s having this throughout the process and enforcing it and continuously monitoring, remediating, fixing, and reporting that become important for these processes to run really cleanly.
Ashley: You know, it’s interesting—kinda stepping back in time, I think about the myth that developers don’t care about writing secure code. Well, of course, they do. They don’t wanna write insecure, vulnerable code. But I think there was a period where we thought training was the answer: you know, let’s teach them how to avoid writing code with vulnerabilities in it.
But when you talk about a continuous governance environment, you’re kinda talking about a whole layer of things, right? It’s from the person doing the work to the environment that it’s being checked in and integrated and being tested into the workflow, and then really, the data, the information about that, that’s happening on a continuous, ongoing basis. And that’s really your data set for producing the compliance information, right? We don’t have to go ask you or check your code, it’s already been done, it’s done every step of the way all the time, and here’s the evidence of it, and here’s what we learned from it and here’s where we’re doing well and here’s maybe where we can improve.
That’s how I think of continuous compliance. Yes, it shows up in a report down the road, but it’s continuous, because we have all that data.
Howard: Yeah, I think that’s a really important point from the efficiency perspective as well. I mean, the security team or the development management that want to know when this product or project will ship obviously need to know where they are relative to that compliance requirement. And if we can keep it compliant from day one, we don’t ever have that risk of building up technical debt at the end of the process that we then have to solve, and I think that’s a really important evolution that we’ve seen over the last few years, that we have that knowledge to hand, immediately at our fingertips, as you say. So, that’s a really important piece.
And the other point you made there about the developer education, I think, again—yes, you can teach developers and I know myself, in the past, I’ve learned the right way to do things, wrong way to do things, but there’s nothing quite as good as actually showing the examples of when things are potentially risky within the code you are writing, because you really understand, at that point, what you’re writing and how this could then be a problem, and it is the classic on the job experience that you really want for a perfect education, almost.
Ashley: Let’s take—I’d love to hear your kinda definition, or how you describe static code analysis, SAST, to people. If you’re embracing that, what does that mean to a development team, to a software project team?
Howard: I think from my perspective, it’s the idea of having an automated code reviewer, almost, looking at the work you’re doing in a fairly pleasant and passive way. [Laughter] Not being critical in any way, shape, or form as well. So, from a cultural point of view, I think the developers perhaps accept it more when it’s from a machine rather than a colleague that perhaps is picking holes in the code they’re writing, which is another good point about having a computer do that initial check.
But yes, it’s like having that guard at the end of the process. And we often joked historically about the value of the tools, saying, “Well, say you’re advertising for a new development role and two developers are going for the job. One costs more (the price of the license), but they never make a mistake in the code; they check in clean, compliant code with every commit. The other has the same experience and writes good code, but they could make mistakes and could check in, potentially, security vulnerabilities. Which one would you pick as a manager?” That’s quite an interesting way of looking at it, but of course, you would always want that assurance that, essentially, it’s clean code from day one.
Foster: Yeah, and also, you know, in terms of the perspective, Steve talked a little bit about the developer perspective or the hiring perspective. From the business perspective, you have to think about the costs of reputation, lives, revenue, anything, of a product being in the field and discovering an issue. You know, the cost to fix a bug gets exponentially more expensive as you go further down the software life cycle, right? So, it’s cheapest to fix it then and there, right at the start. And, you know, the pain points the software is trying to fix are providing that knowledge, providing that quick remediation as developers are working, ensuring the velocity stays up, ensuring that the continuous compliance enforcement throughout the process means there’s not a heavy lift of work later in the process because, you know, you weren’t reaching compliance requirements.
You know, the ease of using the software however a developer wants to work, it’s all about velocity, you know? And in terms of the business, having that oversight and understanding, you know, what are our risks, and how soon are we gonna be able to ship this product? Because, you know, we know with many products, the faster to market you get, you know, you have that first-mover advantage, and that’s what’s needed in this fast-paced software market.
Ashley: Well, it’s very tough to recover from reputation damage and quality issues, or perceived quality issues, around your products, too.
You know, one of the ways that I explain it to folks outside of being a developer or within the security world is, it’s kinda like taking your spell check of a document and adding in the grammar checking—so, this is the higher order, more complex, specific things around grammar. In this case, it’s around security and helping you as you’re developing or creating code, right, make sure that you do it the best way, you know? Avoid introducing security issues.
I know that’s a gross oversimplification, so don’t worry, I’m not saying Perforce is like a grammar checker. [Laughter] But that is sort of a non-technical way of describing what we’re doing. I mean, we want it to be that natural, as part of the process.
Howard: Yeah, absolutely. And I think that’s very true. I think, you know, we’ve seen the evolution of static analysis tools very much in the same way we’ve seen the evolution of things like the Microsoft Word spell checker into a grammar checker and so on; the systems have similarly evolved to do more and more over time.
But on top of that, there’s also the opportunity, which we find quite interesting, to write your own checkers and rules that are specific to your application. So, we find in some places there have been security problems in a certain application for some special reason—not necessarily related to the constructs of the code, as most of the coding standards check, but something that’s specific to the project because of the way the project has designed the code and the way it interacts with external interfaces.
And it has been interesting to work with customers in some cases. I worked with a very large telecom customer on one occasion, and he said, “Well, you know, we have this translation module between sort of the application and the platform, and nothing should ever, you know, go direct between the two unless it’s going through this one piece in the middle.” And it’s kind of the golden rule they’ve taught to all the developers from day one, and nobody will ever break this rule.
So, you know, obviously, I was slightly challenged by this and said, “Well, it’d be nice to check the rule. Let’s put that enforcement in just to see if we can, you know, use a custom rule to find any cases and, of course, as you say, it should be clean.” And obviously, you know, a couple of hours later, once the analysis had run (it’s a very big system), we found a couple of cases—only a couple, but there were a couple of cases where exactly what they said should never, ever, ever happen for the security of the entire system had happened.
And so, we were able to then remediate those problems and keep that rule in for future development, and of course, it will always be enforced. So, that was a really interesting case where there was also that, where you have something specific, it’s not just generic grammar issues, if you will, or spelling issues, but something very specific to that document.
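Howard’s golden-rule story can be sketched as a toy architectural check. Everything here is a hypothetical illustration (the file names, the `platform_` prefix, the `check_layering` helper); real SAST tools express custom rules through their own checker APIs, not anything like this:

```python
import re
from typing import Dict, List

# Hypothetical layering rule, inspired by the telecom example above:
# only code in the translation module may call platform_* functions directly.
ALLOWED_CALLERS = {"translation.c"}
PLATFORM_CALL = re.compile(r"\bplatform_\w+\s*\(")

def check_layering(sources: Dict[str, str]) -> List[str]:
    """Return violation messages for files that bypass the translation layer."""
    violations = []
    for filename, text in sources.items():
        if filename in ALLOWED_CALLERS:
            continue  # the translation module itself is allowed through
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PLATFORM_CALL.search(line):
                violations.append(f"{filename}:{lineno}: direct platform call")
    return violations

sources = {
    "translation.c": "void tx(void) { platform_send(0); }",
    "app_ui.c": "void draw(void) { platform_send(1); }",  # bypasses the layer
}
print(check_layering(sources))  # -> ['app_ui.c:1: direct platform call']
```

The point of the anecdote survives even in this toy form: once the rule is written down as a check rather than tribal knowledge, it runs on every analysis and catches the "never happens" cases.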
Foster: Yeah, you can imagine some of these things coming out of requirements or an internal coding standard or things that a development team might be following, and being able to enforce these. Especially when you start looking at legacy code, giant monoliths, you know, that have been around for 5, 10, 15, 20 years, decades, you know, and you don’t have those original people who did the development, right?
So, you know, we sometimes think of our tool also as, you know, kind of like quality assurance on a product or a code base that’s legacy, where all the new people have no idea how to use it, but you need to expand on it, you need to develop it, because it’s valuable IP, and how do you manage developing new functionality and moving forward when, you know, it could be as brittle as a glass house, right?
So, you know, that’s the other benefit you get out of being able to enforce and write your own standards, I’ll call them, and rules, yeah.
Howard: Another classic case there, Mitch, is—and you’re well aware of this as well from the field is, this system was never intended to be connected to the Internet, you know? This was something that—
Ashley: [Laughter] You’re guaranteed it will be, then.
Howard: Yeah. [Laughter] It was an embedded system that was entirely supposed to run in an embedded environment with no connection to an outside world at all, and all of a sudden, you know, it (or something around it that connects to it) is now connecting to the outside world and there are open attack interfaces, and now we have to make sure that they’re not gonna compromise the original legacy code base.
Ashley: Mm-hmm. Yeah, there’s just sort of the rule of unintended uses always occur, right? The thing we don’t want to happen is usually what happens. [Laughter] So, yeah, the demo becomes the production system is another great example.
I’m really interested in your perspective on where you think we are. If I made an analogy to the network security world, the best time to be a network equipment or network security salesperson is the day of or the day after a compromise, and automatically, you’ve got budget and we need to solve this problem. I mean, I think we’re well past that in most cases, but where are we in the software world? Do you see this as just a natural thing: “Okay, what’s our SAST strategy, how are we gonna be doing security within the application, what tools should we use for that?” Or is it something where you have to mature to a certain point or have experienced some issues before you’re really gonna take it seriously?
Where are development—and I’m asking for a generalization I know, but what’s your sense of where people are today?
Foster: I’d say, you know, in general, it seems as though most people don’t take these kinds of things seriously until it bites them. I think, you know, there’s a kind of false sense of safety with some types of products, where you find out about a security vulnerability or intrusion from a piece of software that basically just tells you that, you know, the barn doors have been opened and all your horses are gone, right? But that’s too late in the process. I think we all need to understand that, right?
So, the point of dealing with it sooner, using static application security testing tools like Klocwork or QAC, is that, you know, you’re trying to build in the risk reduction, the known unknowns that you can at least deal with sooner, right? So that you find out later, in the worst case, that the doors were open but, you know, nothing happened; you still have your horses, because the stables didn’t open at the same time or whatever, right? Just to make a kind of analogy to it, there.
Howard: Yeah, I think in the—obviously, we do a lot of work in the embedded space, and in that area, having seen it over the last 10, 15 years, I think, you know, there was a time where we were educating people a lot more about what it was, you know, what it was that we could do with Static Analysis to actually find some of these problems. Whereas I think—and there’s a lot more awareness now of the risks in the security field and, obviously, what Static Analysis can do to help. So, most people are starting to think about it if they’ve not already got it built in.
So, I’d say it is becoming more of an expected insurance policy against those kinds of things within the tool environments. And obviously, the other kind of driving force there is regulation. So, we’re starting to see standards creeping into the market, like the new ISO/SAE 21434 standard for automotive, for example, where they’re actually going to require some level of looking at the security of the system and making sure that we’ve done some of those things correctly and, you know, probably that will include what we expect it to include: some level of static analysis, coding standards and so on and so forth.
So, that stuff is really becoming almost regulated into the process just like functional safety regulation has been in that process from the beginning.
Foster: Yeah, you’d imagine—
Ashley: Sorry, go ahead.
Foster: I was gonna say, just on that topic there about cybersecurity for automotive: you think about it, cars are effectively rolling entertainment systems, right? So, there’s a lot of new kinds of vectors for attack, right? An insecure Bluetooth module inside a car and its infotainment system that is also connected to the bus controller that’s driving all the data from every sensor on the car, that’s controlling your fly-by-wire driving and your fly-by-wire steering and all those things, right? It’s very important—very important.
Ashley: And, I believe, you know, both cars and trucks that have remote starters can be started by Bluetooth as one of the mechanisms. So, yeah, you potentially have the same attack vector; you need to be able to isolate that, and now you’re into the function of the engine versus the entertainment system. And there’s a lot of work going in, I know, to the automotive systems. They’re very complex, I mean, just as complex as many, you know, many business applications—maybe even more so because they’re integrated from so many different suppliers, and they are, of course, largely built on embedded systems.
It seems like embedded systems, though, are much more accepting of an update process, of keeping the software current and requiring some way of being able to update software in an embedded system or maybe making it an automatic process so it’s kind of touchless. Is that your experience as well?
Howard: Yeah, I think so. I think that is coming into many systems now. I mean, there are certain manufacturers driving that kind of process—automotive, for example, is a prime case. And it is interesting, because if you go back 10 years, that just was never even considered. You know, once you kind of flashed that system, it was staying as it was until you climbed up the mobile phone mast, or whatever it was that the embedded system was doing, and flashed it again with the latest version. So, yeah, over-the-air updates are creeping in, and that will lead us to a more continuous delivery type architecture for the delivery of the software as well.
So, yeah, it’s gonna be an interesting time. And as part of that, I guess now the focus has moved slightly from the cost of fixing an issue out in the wild to just, we don’t want to have the issue because of the potential impacts on having that issue even for a day, you know? [Laughter] The idea that a car might be possible to take control of from an external source, for example.
Foster: Yeah, and I think you’re seeing a lot more emphasis on the security around a lot of these embedded systems because a lot of them are so, you know, requirements driven, compliance driven. They have ISO requirements that are needed to be validated by external bodies to make sure that these things are safe to use, because you know, a lot of our infrastructure is built on the backs of these things, that they’re, in some ways, leading the charge of seriousness for these reasons, right?
Ashley: Mm-hmm. And it seems, also, that industries like, I mean, we were talking about the automotive industry, are putting together their own standards, I think in part out of fear: “We don’t want government standards to dictate to us; we’d rather the industry solve the problem” in a way that’s more acceptable and flexible to them. But do you see more regulation coming at a kinda geographical, country, government level? Is more of that happening? You talk about ISO standards, of course, being more industry.
Foster: Yeah, I’d say so [Cross talk]—
Ashley: You see it happening? Like, our privacy standards, now we’re talking about software security standards?
Howard: [Laughter] Well, yes, there is that as well, perhaps at the European level, for example, something we see on this side. But yeah, mostly, it’s industry standards, from what I can tell, that we’re experiencing and seeing come downstream at the moment.
Ashley: Mm-hmm. [Cross talk]
Foster: Yeah, I think it depends on the level of kind of sensitivity of the data. So, you do see requirements as they relate to finance, like PCI DSS and things like that. You see security in terms of—I’m drawing a blank here, but they typically come from when the data is so sensitive; you know, the U.S. government has their STIGs, the DISA STIG rules.
So, it tends to come down from the level of sensitivity. I wouldn’t be surprised if we do start seeing more requirements, perhaps, for public data, but right now, we have things like HIPAA compliance and things like that, as they relate to very, very sensitive data.
Ashley: Yeah, the defense industry is also a good one to start to drive standards, too, and I know they’re doing a lot in security.
So, just to change, shift focus here, we have a few minutes left. We were talking earlier about where people are in their maturity level of adopting SAST and writing more secure software using those kinds of tools. If you were someone leading a project, maybe an architect or a lead developer or somewhere in the management chain, who said, “You know what, we’re almost there, I just need to do a little bit more convincing, or I need something else to help my organization get there to see why we need to do this, and I’d like to do it before we have a major incident”—what’s your best suggestion for kinda doing that? I don’t know if it’s internally selling or demonstrating the value of it, but how do you see people successfully getting the organization to begin to support adopting it?
Howard: Hmm. I think that’s a tough question, just because there’s so many industries and so many different kind of topics of particular interest, it always helps to have experience of a well-publicized kind of issue [Laughter] that’s gone out into the—
Ashley: “We don’t wanna be like those people.”
Howard: Exactly, yeah. That always helps drive things a little bit when you’ve got something tangible to say, “We don’t want to end up there.” I think, again, the regulation piece is where I see a lot of that being driven. You know, the industry bodies are actually saying, “As a whole, the industry doesn’t want to end up there. That’s not somewhere we can afford to be, and so let’s try and make sure we get this right from day one,” and that trickles down into individual projects and so on.
Howard: Yeah, so, that’s really where we’re seeing it, predominantly, from the—you know, first of all, we don’t want a case where there is an incident, because, you know, if we can use that as a tangible case and say, “We don’t wanna be like those guys, because that didn’t end well for them.” So, that’s, obviously, always a good driving force. But then it’s regulation from the perspective of the industry bodies themselves getting together and saying, “We really can’t afford for this kind of thing to happen, and we want to make sure that, you know, across the industry, we start to follow a best practice that’s gonna do our very best to prevent that,” I guess, is a good way of describing it.
Ashley: It seems this is also an area where the security team and CISOs can be a real asset for making the case to the rest of the organization, because if you’re _____ on the software team, you have one set of justifications or definitions for why you need that. Another is, at the corporate level, you know, the CISO is trying to accomplish these goals with the organization. And it’s always easier to fit your objectives within a larger objective and get that supported and get funding—that kinda thing. So, that’s another suggestion I would throw out there, too, that also helps folks—and plus, it brings security and software teams closer together.
Howard: Yeah. [Cross talk] Sorry.
Foster: Yeah. I think we do have, you know, security evangelists who, as you just said, are championing it, right? And when we talk to those people and go over static analysis with them, with this security focus, it comes down to, you know, being able to help them understand their size of team, what kind of current development practices they have, what that means, and how you can deploy and roll out the tools without, you know, slowing down that velocity, because that’s always the goal. Don’t slow down the velocity, but add more value.
So, that’s what we try to help them with, help them understand. And also, you know, sometimes Steve is sitting with these people and we almost help them mature in their software development process: you know, move from a couple of guys who are just doing nightlies, with no version control, no quality gates, no automated pipelines, to setting them up so that they can scale their development groups and speed up their testing and development processes. And that all comes out of engaging with vendors like us.
Ashley: Great. Well, gentlemen, this was a lot of fun. I really enjoy talking security and creating secure applications and, you know, it’s something that I think everybody wants to improve at to different levels and degrees, and hopefully, before those bad things happen. It’s always better to do it before, but if you have to do it after, you do. Maybe sometimes that’s what it takes.
So, it’s been great talking with you. Again, this is Stuart Foster, who is Product Manager with Perforce—thank you, Stuart—and Steve Howard, Static Code Analysis Evangelist with Perforce as well. Take care, gentlemen.
Foster: Great, thank you.
Howard: Thanks, Mitch.