At the 2025 RSA Conference today, Legit Security extended the reach of its application security posture management (ASPM) platform, which leverages artificial intelligence (AI) to identify vulnerabilities and other weaknesses, to now also suggest remediations for issues found in code.
The company has also extended its existing discovery capabilities to cover AI models that are part of the software supply chain, while simultaneously infusing AI into its risk assessment engine to provide severity rankings of potential threats.
Legit Security CTO Liav Caspi said the ASPM platform, in effect, assigns these and other tasks to AI agents that are orchestrated by an agent trained to manage a DevSecOps workflow. Longer term, Legit Security also plans to add support for the Model Context Protocol (MCP), an emerging de facto standard for integrating AI agents across multiple platforms.
The Legit ASPM platform is designed to provide DevSecOps teams with a complete view of the entire software development lifecycle, including assets, owners, security controls, vulnerabilities, and the impact those vulnerabilities have on developer productivity. It uses AI to correlate scans and run code analysis to reduce false positives, while also making it simpler to discover secrets that have been inadvertently embedded in code.
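To illustrate the kind of problem such secret discovery addresses, the following is a minimal, hypothetical sketch of pattern-based detection of credentials committed to source code. The patterns, names, and sample strings here are illustrative assumptions, not Legit Security's actual detection rules, which the company has not published in this form.

```python
import re

# Toy detection rules -- real scanners use far larger rule sets plus
# entropy analysis and validation against live services.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def find_secrets(source: str):
    """Return (line_number, pattern_name) pairs for suspected embedded secrets."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

# Fabricated sample input containing two fake credentials.
sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\napi_key = "s3cr3t-token-0123456789abcdef"\n'
print(find_secrets(sample))  # → [(1, 'aws_access_key'), (2, 'generic_api_key')]
```

A sketch like this also shows why false positives matter: naive patterns flag test fixtures and documentation examples, which is one reason AI-based correlation of scan results is pitched as an improvement over purely rule-based tools.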
That approach makes it possible, for example, to integrate those agents to suggest code changes at the time pull-request checks are made, noted Caspi. The overall goal is to reduce the amount of manual effort required to discover and remediate vulnerabilities, which today are rife across software supply chains, he added.
While a lot of progress has been made in terms of adopting DevSecOps best practices, most application development teams are not entirely sure how many artifacts with known vulnerabilities are being incorporated into software builds. Hopefully, those issues will be discovered before an application is deployed in a production environment, but the earlier they are discovered, the less costly they are to fix. The challenge is finding a way to accurately identify these issues in a way that application developers will embrace. All too often, application developers will turn off legacy code scanning tools simply because the number of alerts being generated becomes overwhelming.
At this point, it’s not so much a question of whether AI agents will be incorporated into DevSecOps workflows as it is to what degree. Many of the code review tasks that could be assigned to AI agents are not elements of the software development lifecycle that most humans especially enjoy performing. Just as importantly, every vulnerability remediated before an application is deployed is one less fix an application developer will be asked to make weeks, or sometimes even months, after they have moved on to another project.
Eventually, thanks to the rise of AI, there will come a day when shipping code with known vulnerabilities is widely considered unacceptable. The sooner DevSecOps teams start working toward that goal, the better off everyone affected by the current state of application security will be.