For developers, there’s nothing more satisfying than looking at a web app or service and being able to say “I made that.” Or at least part of it. Developing code that works and helps millions of people get done what they need to do earns you bragging rights.
But the opposite is also true: you don’t want to be the developer whose code inadvertently lets hackers take control of an online app or service, and there is no shortage of examples of exactly that. Hackers are more capable and sophisticated than ever, and no programmer wants to be the culprit behind one of the growing number of attacks that exploit poor coding practices.
Where and when do errors like these creep in? At any stage of development, in truth. Vulnerable code can form the basis of a main module in an app, even in a project’s earliest stages. Once implemented, bad code travels through every subsequent stage, leaving the finished app open to exploits. Hence the importance of shifting left: deploying security at as early a stage of code creation as possible.
What exactly does shifting left mean in the context of code development today? Security tools need to be integrated as early in the development process as possible, and at every stage of the software development life cycle. Programmers today work on code and commit their changes several times a day, so they need to be able to examine that code for security issues before committing it to the repository that other teams will base their work on. That is, of course, exactly what they already do when checking for functional defects, and the same discipline must be applied to excluding insecure code.
What’s the best approach to shifting left in the early stages of development? Developers need tools that supply actionable information, highlighting specific issues in specific lines of code. Rather like a “spell checker” for security risks, such a tool would point out the location and nature of an issue in a line of code, and even recommend how to mitigate the risk.
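As a concrete (and deliberately simplified) sketch of that idea, the snippet below is a toy security “spell checker” in Python that could be wired in as a git pre-commit hook. The rule patterns, messages, and suggested fixes are illustrative assumptions, not the rule set of any real scanner:

```python
#!/usr/bin/env python3
"""Toy "spell checker" for security risks, runnable as a git
pre-commit hook (e.g. saved as .git/hooks/pre-commit).
The rules below are hypothetical examples for illustration only."""
import re
import sys

# Hypothetical rules: pattern -> (what is wrong, how to mitigate it).
RULES = {
    r"\beval\(": ("eval() runs arbitrary code",
                  "parse the input with ast.literal_eval() instead"),
    r"\bpickle\.loads\(": ("unpickling untrusted data can execute code",
                           "use a safe format such as JSON"),
    r"verify\s*=\s*False": ("TLS certificate verification is disabled",
                            "remove verify=False or pin a CA bundle"),
}

def scan(source: str):
    """Return (line_number, message, recommendation) for each risky line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, (message, fix) in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message, fix))
    return findings

if __name__ == "__main__":
    # As a hook, scan the files passed on the command line and report
    # each issue with its exact location and a suggested mitigation.
    found = 0
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as fh:
            for lineno, message, fix in scan(fh.read()):
                print(f"{path}:{lineno}: {message} -- {fix}")
                found += 1
    if found:
        sys.exit(1)  # a non-zero exit status makes git abort the commit
```

Because each finding carries a line number and a recommendation, the developer sees the security feedback in the same pre-commit moment they would see a failing lint check.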
That said, tools that report risks at such a granular level come with caveats. They should be flexible enough not to over-report issues. One general problem with security tools is the possibility of false positives: security testing tools are notorious for generating very long lists of reported issues, many of which turn out to be false positives or issues that are not critical to fix. So, to be taken seriously and provide real value, a security testing tool needs to highlight the high-severity alerts you cannot ignore and separate true vulnerability signals from the noise of false positives.
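One way to picture that triage step is a small filter over a scanner’s raw output. The `Finding` shape, the severity names, and the suppression list below are assumptions made for illustration:

```python
"""Sketch of triaging raw scanner output: suppress findings a team has
reviewed and marked as false positives, drop anything below a severity
cutoff, and sort what's left so the alerts you cannot ignore come first.
All field names and rule names here are hypothetical."""
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    file: str
    line: int
    severity: str  # "high" | "medium" | "low"

SEVERITY_ORDER = {"high": 0, "medium": 1, "low": 2}

# Rules the team has already triaged as false positives in this codebase.
SUPPRESSED_RULES = {"hardcoded-password-in-test-fixture"}

def triage(findings, min_severity="medium"):
    """Keep unsuppressed findings at or above min_severity,
    highest severity first."""
    cutoff = SEVERITY_ORDER[min_severity]
    kept = [
        f for f in findings
        if f.rule not in SUPPRESSED_RULES
        and SEVERITY_ORDER[f.severity] <= cutoff
    ]
    return sorted(kept, key=lambda f: (SEVERITY_ORDER[f.severity], f.file, f.line))
```

The point of the sketch is the workflow, not the data model: a tool that lets teams record suppressions and severity cutoffs turns a wall of alerts into a short, actionable list.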
Another caveat is that additional tools need to be deployed further along in the development process. When a module is completed, it becomes a building block for other modules that will eventually make up the full program, so testing needs to determine whether security risks emerge when modules work together. In that light, security should not so much shift left as expand left, maintaining a presence alongside development as it moves forward. With this kind of strategy, programmers can be sure of those bragging rights, proudly pointing to what they built.
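As a parting sketch of what testing across module seams can look like, the example below exercises two hypothetical modules together; both functions and the injection payload are assumptions for illustration, not any particular project’s code:

```python
"""Illustrative integration-style security test: two stand-in modules,
one that accepts user input and one that renders it, are exercised
together to check that a script injection cannot pass through the
seam between them."""
import html

def store_comment(text: str) -> str:
    """Hypothetical input module: trims and stores a comment verbatim."""
    return text.strip()

def render_comment(stored: str) -> str:
    """Hypothetical output module: escapes the comment for HTML pages."""
    return f"<p>{html.escape(stored)}</p>"

def test_xss_does_not_survive_the_pipeline():
    # Each module may be safe in isolation; the test checks the combination.
    payload = "<script>alert('xss')</script>"
    rendered = render_comment(store_comment(payload))
    assert "<script>" not in rendered

if __name__ == "__main__":
    test_xss_does_not_survive_the_pipeline()
    print("integration security test passed")
```

Tests like this run later than a pre-commit scan, which is exactly the “expanding left” point: the security checks accompany the code as modules are combined, rather than stopping at the first commit.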