At the 2025 RSA Conference this week, ArmorCode made generally available Anya, an artificial intelligence (AI) agent added to its application security posture management (ASPM) platform that has been specifically trained to augment DevSecOps teams.
Karthik Swarnam, chief security and trust officer of ArmorCode, said Anya is based on a large language model (LLM) that has been trained to identify existing and emerging risks using data collected via more than 285 integrations with security tools, multiple sources of code and IT infrastructure platforms.
Anya then deduplicates and correlates that data to surface actionable insights in natural language, enabling DevSecOps teams to prioritize remediation steps by severity, fixability and team ownership, he added.
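ArmorCode has not published Anya's internals, but the triage workflow Swarnam describes — deduplicating overlapping findings, then routing and ranking them by severity, fixability and ownership — follows a familiar pattern. The minimal Python sketch below illustrates the general idea; the `Finding` fields and the `prioritize` function are hypothetical, not ArmorCode's actual schema:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Finding:
    cve_id: str        # vulnerability identifier
    asset: str         # affected service or repository
    severity: float    # e.g., CVSS base score, 0.0-10.0
    fixable: bool      # whether a patched version exists
    owner: str         # team responsible for the asset
    source_tool: str   # scanner that reported the finding

def prioritize(findings):
    # Deduplicate: multiple scanners often report the same
    # (CVE, asset) pair; keep one canonical record for each.
    unique = {}
    for f in findings:
        unique.setdefault((f.cve_id, f.asset), f)

    # Correlate by owning team so remediation work can be
    # routed to the people who can actually fix it.
    by_owner = defaultdict(list)
    for f in unique.values():
        by_owner[f.owner].append(f)

    # Rank each team's queue: fixable issues first, then by severity.
    for items in by_owner.values():
        items.sort(key=lambda f: (not f.fixable, -f.severity))
    return dict(by_owner)

if __name__ == "__main__":
    raw = [
        Finding("CVE-2024-0001", "payments-api", 9.8, True, "platform", "scanner-a"),
        Finding("CVE-2024-0001", "payments-api", 9.8, True, "platform", "scanner-b"),  # duplicate
        Finding("CVE-2024-0042", "web-frontend", 5.3, False, "web", "scanner-a"),
    ]
    for owner, queue in prioritize(raw).items():
        print(owner, [f.cve_id for f in queue])
```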
Additionally, Anya applies retrieval-augmented generation (RAG) techniques to every query and interaction, continuously refining results based on the data exposed to the LLM.
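RAG itself is a well-documented pattern, even though Anya's implementation is proprietary: each query retrieves the most relevant records from an indexed store and feeds them to the model as grounding context, rather than relying only on what the LLM memorized during training. A rough sketch follows, with `embed` and `llm_complete` standing in for whatever embedding model and LLM endpoint a given platform actually uses:

```python
import math

def cosine(a, b):
    # Similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, corpus, k=3):
    # corpus: list of (embedding, text) pairs indexed ahead of time.
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def answer(query, embed, llm_complete, corpus):
    # 1. Retrieve the records most relevant to the analyst's question.
    context = retrieve(embed(query), corpus)
    # 2. Ground the model's answer in the retrieved context.
    prompt = (
        "Answer using only the security findings below.\n\n"
        + "\n---\n".join(context)
        + f"\n\nQuestion: {query}"
    )
    return llm_complete(prompt)

if __name__ == "__main__":
    # Toy demo: 2-D "embeddings" stand in for a real embedding model.
    corpus = [
        ([1.0, 0.0], "CVE-2024-0001 affects payments-api; patch available."),
        ([0.0, 1.0], "web-frontend has an outdated TLS configuration."),
    ]
    fake_embed = lambda q: [1.0, 0.1]      # pretends the query is about payments
    fake_llm = lambda prompt: prompt[:120]  # echoes the grounded prompt
    print(answer("What is open against payments-api?", fake_embed, fake_llm, corpus))
```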
The overall goal is to provide an easily accessible software-as-a-service (SaaS) platform that dramatically reduces the number of false positives that existing application security tools and platforms currently generate, he noted.
It’s not clear how many organizations have adopted an ASPM platform yet, but a survey of 51 senior security leaders from the Purple Book Community (PBC), fielded by ArmorCode, finds more than three-quarters (76%) now view ASPM as their top investment focus for 2025.
A full 86% are already using or exploring generative AI tools in their security programs, and just under two-thirds (65%) said they believe AI technologies will reshape application security workflows.
Overall, software supply chain vulnerabilities were cited as the most significant enterprise application threat (84%), followed closely by open-source software risks and cloud misconfigurations (73% each).
Managing the sheer volume of vulnerabilities and false positives was identified as the biggest challenge in securing code (78%), followed by the speed of software development outpacing security priorities (71%) and a lack of visibility across application security tools (65%).
In response, 64% report they are growing their application security teams, with 84% noting the role of the AppSec leader is now more important than it was two to three years ago. In total, 92% report that insecure code has become a bigger concern.
There is, of course, no shortage of ASPM platforms at this point that make it simpler to address DevSecOps issues in a way that minimizes disruption to existing workflows. The one certain thing is that in the age of AI, the volume of code likely to contain known vulnerabilities is only going to increase exponentially. Most of the tools generating that code were trained on examples of code containing known vulnerabilities, which can randomly surface in any output generated by an LLM. Despite best intentions, far too much of that code is still likely to find its way into a production environment, said Swarnam.
Longer term, there may come a day when AI ensures code has been reviewed for security flaws long before it ever makes it into a production environment. In the meantime, however, DevSecOps teams might want to prepare for the worst while still hoping for the best.