A survey of 110 security leaders finds all are investing in software supply chain security, with application security posture management (ASPM) and DevSecOps automation and orchestration topping the priority list, followed closely by software composition analysis (SCA) tools, application programming interface (API) security and dynamic application security testing (DAST) tools.
In addition, 30% of respondents expect to be piloting a software bill of materials (SBOM) initiative in the next 24 months, the survey finds.
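For readers unfamiliar with what an SBOM pilot involves in practice, the sketch below shows one small, illustrative step: reading a CycloneDX-format SBOM (JSON) and printing the components it declares. The file name and surrounding workflow are assumptions for illustration, not details drawn from the survey.

```python
# Minimal sketch of one step in a hypothetical SBOM pilot: load a
# CycloneDX-format SBOM (JSON) and print an inventory of the declared
# components. Field names follow the CycloneDX JSON schema; the file
# path and the pipeline around this step are assumed for illustration.
import json


def list_components(sbom_path: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every component in the SBOM."""
    with open(sbom_path, encoding="utf-8") as handle:
        sbom = json.load(handle)
    components = sbom.get("components", [])
    return [
        (c.get("name", "unknown"), c.get("version", "unknown"))
        for c in components
    ]


if __name__ == "__main__":
    # "sbom.cdx.json" is a placeholder for an SBOM produced earlier in
    # the build pipeline by any CycloneDX-capable generator.
    for name, version in list_components("sbom.cdx.json"):
        print(f"{name} {version}")
```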
However, funding for these initiatives is increasingly a shared responsibility, with only 21% of respondents reporting that security budgets are the sole source. In fact, half (50%) of respondents noted that application development teams now own responsibility for application security.
Overall, 25% of respondents said there is limited collaboration with application development teams, resulting in occasional friction, compared to 59% who reported good collaboration with room for improvement. Only 16% described a tight partnership based on shared goals.
Fernando Montenegro, vice president and practice lead for cybersecurity, said it’s clear there is now more collaboration between application development and cybersecurity teams, with more security tools being incorporated into DevOps workflows. However, improving software supply chain security is still very much a work in progress, he added.
The degree to which application developers share that optimism about the state of DevSecOps is less clear. Historically, many developers have resented requests to search for vulnerabilities in code that either never made it into an application running in a production environment or only surfaced in an application that isn’t accessible externally.
Additionally, no one is quite certain to what degree artificial intelligence (AI) is improving the quality of the code being generated versus making it worse. On the one hand, far too many application developers lack cybersecurity expertise, so much of the code they write by hand has been flawed, and AI could in theory do better. On the other hand, general-purpose AI models have been trained on examples of flawed code sourced indiscriminately from across the internet. Not surprisingly, large language models (LLMs) are generating code containing known vulnerabilities.
Hopefully, the next generation of LLMs will be trained on code that has been vetted for security flaws, which, coupled with more advanced tools that leverage AI to discover vulnerabilities as code is being written, should eventually improve the overall state of application security.
Of course, it’s also safe to assume cybercriminal syndicates and various nation-states are now using AI not only to identify vulnerabilities in code but also to generate the code needed to exploit those vulnerabilities. As a result, the number of cybersecurity incidents involving legacy code is likely to increase dramatically in the months ahead.
At the very least, DevOps teams should by now be reviewing what changes to existing DevOps workflows are needed to build and deploy more secure applications. After all, the easiest vulnerability to fix is the one that was never created. The challenge, of course, is determining where the funding for the tools and platforms that make that possible is actually going to come from.