An analysis of 121 organizations published this week by Black Duck Software finds a 67% increase in organizations performing software composition analysis (SCA) on code repositories and a 22% rise in the number of organizations creating software bills of materials.
However, the same report also finds that only 51.2% of organizations provide basic security training to their application development teams, marking the lowest rate observed to date.
Mike Lyman, associate principal security consultant for Black Duck, said it’s not clear why organizations have been cutting back on training, but some may be relying more on tools that provide developers with insights to help make code more secure as it is being written.
Regardless, the analysis makes it clear organizations are employing DevSecOps best practices more widely, he added.
Those best practices, however, will need to evolve in the age of artificial intelligence, said Lyman. As organizations rely more on AI coding tools and AI agents to create software, the amount of code that needs to be reviewed and tested is going to increase exponentially, he noted.
On the plus side, there has already been a roughly 30% increase in organizations engaging research groups to discover new potential attack methods. Additionally, the use of adversarial tests (abuse cases) has more than doubled in the last year, according to the report.
In theory, at least, guardrails that organizations should be putting in place alongside AI tools will ultimately reduce the number of vulnerabilities that might otherwise be generated. Many of the AI tools being used today were trained on samples of code of varying quality, which increases the potential for AI tools to inadvertently introduce a vulnerability. However, guardrails that make use of other AI models should be able to discover those issues before that code is added to a production environment.
In the meantime, enthusiasm for AI coding tools may need to be tempered until it becomes clear that developers are spending less time debugging code they don’t deeply understand because it was created by an AI model. Right now, unfortunately, too many application developers may be putting too much faith in the ability of AI tools to generate quality code, noted Lyman.
Conversely, many developers may be writing higher-quality code because they are using AI tools. The simple fact is that far too many developers have not received any formal cybersecurity training, so AI tools might help them write code that doesn’t include, for example, a simple SQL injection vulnerability, as illustrated in the sketch below.
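For illustration only, and not drawn from the report, the following minimal Python sketch (the users table, its column names and the use of sqlite3 are assumptions) contrasts a query built by string concatenation, which is open to SQL injection, with a parameterized query that treats user input strictly as data.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: the username is concatenated directly into the SQL text,
    # so input such as "x' OR '1'='1" changes the logic of the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query passes the value separately, so the
    # database driver treats it as data rather than as part of the SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```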
Ultimately, it’s more a question of when, rather than if, the quality of the code generated by these tools improves. The immediate challenge is defining a set of DevSecOps workflows for the AI era.
Hopefully, those AI tools and agents will be integrated within those workflows sooner rather than later. In the meantime, AI, from a DevSecOps perspective, might be too much of a good thing, depending on how well the code being generated is actually reviewed and tested.