Endor Labs today added a set of artificial intelligence (AI) agents to its platform, specifically trained to identify security defects in applications and suggest remediations.
Fresh off raising an additional $93 million in funding, Endor Labs founder and CEO Varun Badhwar said these AI agents go beyond simply identifying vulnerabilities in code: they are trained to assess the architecture of an application for security flaws and to suggest ways to improve security.
Those AI agents have been trained using both the data that Endor Labs collects from its code scanning tools and the software engineering expertise provided by its internal research and development teams, he added.
Specifically, Endor Labs has spent more than three years analyzing 4.5 million open source projects and AI models, mapping more than 150 risk factors, building call graphs to index billions of functions and libraries, and annotating the exact lines of code where known vulnerabilities exist.
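Endor Labs has not published the internals of that indexing pipeline, but the underlying idea behind a call graph is straightforward: record which functions call which others, so that a known-vulnerable line can be traced back to the code paths that reach it. The sketch below is a simplified, hypothetical illustration of that concept using Python's built-in ast module, not Endor Labs' actual tooling; the sample functions and the "vulnerable" annotation are invented for the example.

```python
import ast
from collections import defaultdict

def build_call_graph(source: str) -> dict[str, set[str]]:
    """Map each function defined in the source to the names of the functions it calls."""
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return dict(graph)

# Hypothetical snippet: a request handler indirectly reaches a function
# that we imagine has been annotated as containing a vulnerable line.
sample = """
def parse_input(data):
    return deserialize(data)   # pretend this line is annotated as vulnerable

def handle_request(req):
    return parse_input(req.body)
"""

print(build_call_graph(sample))
# {'parse_input': {'deserialize'}, 'handle_request': {'parse_input'}}
```

With a graph like this, a scanner can answer not just "is a vulnerable function present?" but "is it actually reachable from the application's entry points?", which is the kind of context Endor Labs says it has built at much larger scale.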
That level of context enabled Endor Labs to build multiple agents that function as application developers, architects and security engineers, working in concert with one another to analyze applications at their most fundamental architectural level, said Badhwar. Those agents are, for example, able to review every pull request (PR) for architectural changes that might impact application security, he added.
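Badhwar did not describe how those PR reviews are implemented. As a rough illustration only, the sketch below shows what an automated check for architecture-affecting changes in a pull request diff might look for; the scan_pull_request function and the risk patterns are hypothetical and not Endor Labs' agent logic.

```python
import re

# Hypothetical signals that a pull request changes application architecture
# in a way a security reviewer would want to inspect.
ARCHITECTURAL_SIGNALS = {
    r"^\+\+\+ b/requirements\.txt": "dependency manifest modified",
    r"^\+.*subprocess\.": "new process execution path",
    r"^\+.*(http://|https://|socket\.)": "new outbound network call",
    r"^\+.*\beval\(": "dynamic code execution introduced",
}

def scan_pull_request(diff_text: str) -> list[str]:
    """Flag added lines in a unified diff that match architectural risk signals."""
    findings = []
    for line in diff_text.splitlines():
        for pattern, reason in ARCHITECTURAL_SIGNALS.items():
            if re.search(pattern, line):
                findings.append(f"{reason}: {line.lstrip('+').strip()}")
    return findings

diff = """\
+import subprocess
+subprocess.run(["curl", "http://example.com"])
"""
for finding in scan_pull_request(diff):
    print(finding)
```

A production agent would of course reason over far richer signals than regular expressions, but the shape of the workflow is the same: intercept the diff, surface the changes that matter for security, and hand the developer a specific remediation.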
That approach also enables Endor Labs to set up guardrails that ensure the AI agents it has developed generate accurate outputs that application developers can trust, noted Badhwar. Additionally, Endor Labs is adding support for a Model Context Protocol (MCP) plugin for the AI coding tools developed by Cursor, alongside existing support for GitHub Copilot coding tools.
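MCP is an open protocol for exposing external tools to AI coding assistants such as Cursor. The details of Endor Labs' plugin are not public here, but the sketch below shows what a minimal MCP server exposing a code-scanning tool looks like using the official MCP Python SDK; the server name and the check_dependency tool are hypothetical stand-ins, not Endor Labs' actual interface.

```python
# Requires the official MCP Python SDK: pip install mcp
from mcp.server.fastmcp import FastMCP

# Hypothetical server name; a vendor's real plugin will differ.
mcp = FastMCP("security-scanner")

@mcp.tool()
def check_dependency(package: str, version: str) -> str:
    """Return a (stubbed) risk assessment for an open source dependency."""
    # A real implementation would query a vulnerability database here.
    return f"No known vulnerabilities recorded for {package}=={version} (stub)."

if __name__ == "__main__":
    # MCP clients such as Cursor launch the server and talk to it over stdio.
    mcp.run()
```

Once such a server is registered in the editor's MCP configuration, the coding assistant can call its tools directly inside the developer's workflow, which is the integration point Endor Labs is targeting.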
That level of integration is critical because, as application developers increasingly embrace AI tools to write code, the number of applications that DevSecOps teams will need to review before they are deployed is about to increase exponentially, said Badhwar. Much of that code, however, contains vulnerabilities simply because the AI tools creating it were trained on samples of code collected indiscriminately from across the Internet, much of which is inherently flawed, he added.
The only way DevSecOps teams will be able to keep pace with the rate of application development and deployment is to employ AI agents capable of identifying issues as code is being created, said Badhwar.
It’s not clear at what pace DevOps teams are embracing AI tools to write code that actually finds its way into a production environment, but a Futurum Research survey finds 41% of respondents expect generative AI tools and platforms will be used to generate, review and test code. The challenge, of course, is that the more code generated using those tools, the more likely it becomes that application security issues will arise.
Ideally, AI will increasingly be relied on to secure the code created by both machines and humans, before deployment and, when needed, after it. There is a clear opportunity to apply AI to reduce the massive amount of security debt that exists in legacy applications, noted Badhwar. In the meantime, however, DevSecOps teams would be well advised to review the quality of the code being generated by these tools before the inevitable security incident occurs.