Software teams have long perceived security teams as an impediment to delivery, feeling that security imposes arbitrary, unreasonable policies and relies on poorly integrated tools beset by high false-positive rates. With the advent of DevOps, security is increasingly seen as an obstacle to rapid deployment cycles.
Security teams, for their part, believe developers do not care about security and will do their best to sidestep security controls and policies. Security teams are often frustrated by developer pushback when remediating vulnerabilities, particularly when they fail to appreciate the cascading effects of code and dependency changes made late in the life cycle.
With a foot in both camps and a decade of experience helping developers produce more secure code, I believe the key to addressing the disconnect between the two teams is to develop greater empathy for the developer.
Why Insecure Software Exists
Software development is a complex endeavor with many (often conflicting) requirements. Blaming software developers for insecure software is an overly simplistic and counterproductive approach, best summarized by a former colleague:
“I can tell you that virtually all developers I talk to care about application security. They care about the quality of the products they create, the integrity of the data their software handles, and the reputation of the organizations they work for.” — Jim Jastrzebski, Veracode
From my experience, developers don’t deliberately set out to create insecure software; instead, there are a number of reasons it happens.
Devs are overly optimistic: Developers are creative folks and natural problem-solvers. They can often visualize the perfect end result of their labor without necessarily considering the technical and security debt they may encounter along the way. Remember—things do go wrong.
Overconfidence: I often refer to this as the Dunning-Kruger effect for developers. Faced with complex platforms, languages and frameworks, developers tend to overestimate their ability to comprehend and master them. Complex systems take time to fully understand, and overconfidence can lead to unintended consequences that compromise security. Beware of insecure defaults and configurations.
Bad things only happen to other developers: Think of this as a kind of optimism bias among developers. During code reviews, I have been surprised to hear comments like, “… But that won’t happen to my code!” Despite vast evidence to the contrary, some developers think that bad things only happen to others and to others’ code. Adopt a defensive mindset: expect the worst and hope for the best.
Taking shortcuts: Developers are goal-oriented and want to produce working code on schedule. And schedules are tight. The inevitable consequence is that shortcuts are sometimes taken (a missing error handler or skipped input validation, for instance), leading to security vulnerabilities. Track and tackle your technical debt.
Technical debt and legacy code: Speaking of technical debt, the burndown of debt and the maintenance of legacy code are probably among the biggest issues facing developers. Software systems are complex, and minor changes can wreak havoc, especially in microservices-based applications and architectures. Remember that developers may have legitimate concerns about making changes that are not absolutely necessary.
This stuff is difficult: The most important step in building the empathy muscle is appreciating the difficulty of software development. Developers face multiple conflicting requirements, tight deadlines, and a variety of technologies they are expected to master. Mistakes will happen; the important thing is to learn from them and improve the process.
The last point is perfectly captured by this quote:
“90% of security problems are just complexity problems. It’s technical complexity, it’s network complexity, it’s organizational complexity.” — Adrian Ludwig, Atlassian
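The “taking shortcuts” point above is easiest to see in code. Here is a minimal sketch with a hypothetical report-serving function (the directory and function names are illustrative, not from any real codebase): skipping input validation saves a few lines today and becomes a path-traversal vulnerability tomorrow.

```python
from pathlib import Path

BASE = Path("/srv/reports")  # hypothetical directory of report files

# The shortcut: trusting a caller-supplied name with no validation.
def read_report_unsafe(name: str) -> bytes:
    # A name like "../../etc/passwd" happily walks out of BASE.
    return (BASE / name).read_bytes()

# The few extra lines the shortcut skipped: resolve the path and
# verify it still lives under BASE before touching the filesystem.
def read_report(name: str) -> bytes:
    target = (BASE / name).resolve()
    if BASE not in target.parents:
        raise ValueError(f"invalid report name: {name!r}")
    return target.read_bytes()
```

The fix costs two lines and one test case; retrofitting it after an incident costs considerably more.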
Care and Feeding of Developers
Once security engineers better understand the challenges facing developers, they can focus their messaging when communicating about security issues. Remember that an individual developer is unlikely to be personally responsible for every issue and should not be blamed or scolded; blame produces defensive behavior and rarely achieves a positive outcome. Focus on the shared security challenge at hand and on how a joint approach can solve the problem.
A frequently cited problem for developers is the relatively high false-positive rate of security tools, particularly code scanners. Security engineers should make sure the tools suit the technology stacks in use and fine-tune them as much as possible. Ideally, the security team should triage the initial report and remove obvious false positives before it reaches developers.
Additionally, certain legacy security tools do not integrate well into CI/CD pipelines and cannot easily be automated within the build process. If developers have to run a manual or asynchronous process to perform a security scan, they will become frustrated and may ignore the requirement altogether. Make sure security tools work well within developers’ environments and workflows.
Do not assume every developer has a deep understanding of the security issues identified—take the time to explain them thoroughly and/or document them internally. A great approach is to conduct shared learning sessions where developers and security teams work through a vulnerable application together to learn the concepts.
Another frustration developers cite is a lack of clarity around security requirements, or requirements that constantly change. Ensure security policies are clearly stated and available to developers, and explain why the policies exist in the first place—for example, to satisfy PCI DSS requirements.
Finally, make sure the combined teams are given time not just to learn but to absorb lessons and that success is recognized and rewarded.
“The sky is always falling—we never celebrate success.” — Chris Romeo, Security Journey
Empowering Developers
So, what does developer empathy look like for API developers?
First, API developers and security engineers can both benefit from the positive security model, which uses the OpenAPI definition as the contract for API behavior. Anything not specified by the contract is considered invalid—in other words, the contract specifies the allow list. This stands in stark contrast to the traditional approach to web security, which relies on blocking input that matches a block list. Not all bad input can be known in advance, so a block list produces false negatives, while overly aggressive blocking rejects valid input, producing the false positives familiar to anyone who operates a web application firewall (WAF).
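The contrast can be sketched with a toy validator. The “contract” below mimics a fragment of an OpenAPI-style schema for a hypothetical request body; the field names and rules are purely illustrative:

```python
# Allow-list validation sketch: anything not declared in the contract
# is rejected. CONTRACT mimics a fragment of an OpenAPI schema for a
# hypothetical /users request body.
CONTRACT = {
    "username": str,
    "email": str,
}

def validate_allow_list(payload: dict) -> list:
    """Return a list of violations; an empty list means the payload conforms."""
    errors = []
    for field, value in payload.items():
        if field not in CONTRACT:
            errors.append(f"unexpected field: {field}")
        elif not isinstance(value, CONTRACT[field]):
            errors.append(f"wrong type for field: {field}")
    return errors

# A block-list approach would instead scan for known-bad patterns and
# silently pass anything it fails to recognize.
print(validate_allow_list({"username": "alice", "is_admin": True}))
# -> ['unexpected field: is_admin']
```

The allow-list version never has to anticipate attacks: an injected `is_admin` flag is rejected simply because the contract never mentions it.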
The OpenAPI definition offers another advantage: the applicable policies can be expressed as code in the definition rather than in verbose, abstract policy documentation. The benefit of policy-as-code is uniform enforcement across the life cycle, starting the moment the developer begins coding the API.
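A sketch of the idea, with hypothetical constraints: because the policy lives in the definition as data (`maxLength` and `pattern` are standard schema keywords), the identical check can run in an IDE plugin, a CI job, and the API gateway.

```python
import re

# Hypothetical policy fragment, as it might appear inline in an
# OpenAPI definition. The rules are data, not prose.
POLICY = {
    "username": {"maxLength": 32, "pattern": r"^[a-z0-9_]+$"},
}

def check_policy(field: str, value: str) -> bool:
    """Enforce the declared constraints for a single field."""
    rules = POLICY[field]
    return (len(value) <= rules["maxLength"]
            and re.fullmatch(rules["pattern"], value) is not None)
```

Because a dictionary like `POLICY` could be generated directly from the OpenAPI definition, documentation and enforcement never drift apart.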
Finally, developers need security tools that work in their preferred environments (for example, first-class IDE and CI/CD plugins) and provide value in the form of remediation guidance or actionable findings that help developers reduce technical debt.
Security and development don’t have to be at odds; help developers to help themselves by building empathy and working with them rather than against them.