A survey of 404 IT professionals conducted by Snyk found that the use of artificial intelligence (AI) to write code is creating a security paradox. On one hand, just over three-quarters of respondents (77%) said AI tools improve code security. At the same time, however, 59% are concerned that AI tools trained using general-purpose large language models (LLMs) will introduce security vulnerabilities that first appeared in the code used to train them.
The challenge with general-purpose LLMs is that they are trained on a mix of excellent, mediocre and flawed code. As a result, any suggestion they surface can reproduce the insecure patterns present in that training data.
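To make the risk concrete, consider a purely illustrative sketch (not drawn from the survey) of the kind of flawed pattern that is abundant in public code and can therefore resurface in an LLM's suggestions, shown alongside the safer idiom:

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # A pattern common in older public code: building SQL by string
    # concatenation. Input such as "x' OR '1'='1" rewrites the query's
    # logic, a classic SQL injection.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The parameterized form: the driver treats the input strictly as
    # data, so the same malicious string matches nothing instead of
    # matching everything.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

An assistant trained on both styles may suggest either one; nothing in the model inherently distinguishes the secure idiom from the vulnerable one.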
Randall Degges, head of developer relations for Snyk, said that as AI tools increase the velocity at which applications can be constructed, it’s probable that DevOps teams will discover more vulnerabilities in their code.
In general, 61% of respondents also noted that automation has increased the number of false positives, with 62% reporting that at least one out of every four vulnerability alerts they received from automation tools was a false positive. More than a third (35%) said false positives accounted for over half of their vulnerability alerts.
On the plus side, however, more organizations are paying closer attention to application security issues in the wake of several high-profile breaches of software supply chains and disclosures of vulnerabilities. A full 96% reported their organizations are addressing supply chain security problems on an ad hoc basis, yet only half have a formalized supply chain security strategy in place.
The survey also found that 87% of respondents work for organizations impacted by a software supply chain security issue in the last year, with 61% having implemented new tools and processes as a result.
A total of 62% of respondents work for organizations that have a software life cycle assurance process in place, but only 42% of organizations are using software bills of materials (SBOMs). In addition, only 40% have formal security ratings for the open source packages that developers employ, and 31% ignore risks that arise from indirect dependencies.
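For readers unfamiliar with the format, here is a minimal sketch of what an SBOM records, expressed as CycloneDX-style JSON built in Python. The single component listed is a placeholder; real SBOMs are generated by tooling from lockfiles or build metadata rather than written by hand:

```python
import json

# Minimal CycloneDX-style document. Real SBOMs enumerate every direct
# and indirect dependency, which is what makes the 31% who ignore
# indirect-dependency risk notable.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "lodash",  # placeholder component
            "version": "4.17.21",
            "purl": "pkg:npm/lodash@4.17.21",  # package URL identifier
        },
    ],
}

print(json.dumps(sbom, indent=2))
```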
However, only 40% of organizations have embedded security testing tools into their integrated development environments (IDEs), with 40% not using software composition analysis (SCA) or static application security testing (SAST) tools at all. In fact, even though 80% of respondents reported working for organizations that ship code daily, only 27% claim to continuously audit their code.
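As a rough illustration of what continuous auditing can look like in practice, the sketch below wires an SCA scan into a pipeline step by invoking the Snyk CLI from Python. The `snyk test --json` command is real, but the JSON fields parsed here are assumptions about its output for a single project and should be verified against the CLI's documentation:

```python
import json
import subprocess

# Run an SCA scan, assuming the Snyk CLI is installed and authenticated.
result = subprocess.run(
    ["snyk", "test", "--json"],
    capture_output=True,
    text=True,
)

# Assumed output shape: a top-level "vulnerabilities" array whose
# entries carry a "severity" field. Verify this for your project type.
report = json.loads(result.stdout)
vulns = report.get("vulnerabilities", [])

counts: dict[str, int] = {}
for v in vulns:
    severity = v.get("severity", "unknown")
    counts[severity] = counts.get(severity, 0) + 1

print(f"{len(vulns)} vulnerabilities found: {counts}")

# Fail the pipeline when anything was found. (The CLI itself also exits
# non-zero on findings, so the exit code alone can serve as the gate.)
raise SystemExit(1 if vulns else 0)
```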
As a result, many vulnerabilities remain unpatched, with Java (42%) and JavaScript (31%) topping the list of languages with the most ignored vulnerabilities.
Finally, the survey also noted that vulnerabilities involving open source code are now being remediated faster than proprietary code. Maintainers of open source software are now paying more attention to cybersecurity issues thanks in part to the tools and best practices provided by the Open Source Security Foundation (OpenSSF), said Degges.
There will, of course, come a day when AI is more widely used to analyze vulnerabilities in code. In effect, AI will be used to solve some of the security issues general-purpose LLMs create. In the meantime, however, DevSecOps teams would be well-advised to proceed with caution.