Under an early access program, Checkmarx today made available AI query builder and guided remediation tools that take advantage of OpenAI’s generative artificial intelligence (AI) technologies to make it simpler for developers to resolve application security issues.
AI Guided Remediation surfaces actionable remediation recommendations for vulnerability issues such as misconfigurations directly from within integrated development environments (IDEs).
Meanwhile, AI Query Builder makes it possible to use natural language to create queries, which serve as rules for scanning code, for both the Checkmarx static application security testing (SAST) and infrastructure-as-code (IaC) security tools. Those rules can be easily fine-tuned or modified, and queries for other use cases can easily be added.
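As a rough illustration of the underlying pattern (not Checkmarx’s actual implementation or API), a natural-language description can be passed to OpenAI’s chat completions API and the returned text treated as a draft scanning rule for a security administrator to review. The prompt, model name and output format below are assumptions made purely for the sketch.

```python
# Minimal sketch: turning a plain-English description of a code weakness into
# a draft static-analysis query via the OpenAI API. This is illustrative only;
# it is not Checkmarx's AI Query Builder, and the prompt/model are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You translate plain-English descriptions of code weaknesses into "
    "static-analysis queries. Return only the query text."
)

def build_query(description: str) -> str:
    """Ask the model to draft a scanning query from a natural-language description."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": description},
        ],
        temperature=0,  # keep output deterministic so drafted rules are repeatable
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # A security administrator would still review and fine-tune the draft
    # before adding it to a scan policy.
    print(build_query("Flag any string literal assigned to a variable named 'password'."))
```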
In addition to reducing the time it takes to create a query by 65%, that approach also dramatically reduces the number of false positive alerts that arise from rules created by a security administrator.
Checkmarx CEO Sandeep Johri said these additions to the Checkmarx One Application Security Platform are aimed at improving the application security experience for developers. Most developers don’t want to be inundated by alerts that lack any real context, nor do they want to be bothered with remediation details.
Developers are not likely to be especially interested in how AI helps them write more secure code from the start; what matters is that the faster a reliable fix is surfaced, the sooner they can return to writing code, noted Johri.
In the longer term, Checkmarx will add support for multiple large language models (LLMs) beyond those provided by OpenAI to deliver AI capabilities based on deeper security domain knowledge, said Johri.
However, despite these advances, vulnerability remediation will not become fully automated using AI any time soon, he added. Instead, it will become much simpler to identify code into which a vulnerability has been introduced, whether inadvertently or deliberately, said Johri.
In fact, generative AI tools such as GitHub Copilot can themselves introduce vulnerabilities into code. Because Copilot is a general-purpose AI platform, the recommendations it surfaces are based on a mix of clean and flawed code examples, Johri noted. There will also be instances where cybercriminals attempt to subvert an LLM that creates code by injecting malware-laden snippets into the samples used to train a generative AI model.
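To make that risk concrete, the sketch below shows a hypothetical example of the kind of insecure pattern an assistant trained on flawed samples might suggest, alongside the parameterized form that a scanning rule would steer developers toward. It is illustrative only and not drawn from actual Copilot output.

```python
# Hypothetical example of an injectable suggestion versus its remediated form.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is concatenated straight into the SQL,
    # so a crafted username can alter the query (SQL injection).
    cursor = conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    return cursor.fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Remediated pattern: the driver binds the parameter, so input cannot
    # change the structure of the SQL statement.
    cursor = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```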
On the plus side, however, generative AI tools should narrow the divide that currently exists between application developers and cybersecurity teams as more issues are discovered and remediated before applications are deployed in a production environment. The challenge has always been surfacing application security issues while developers are writing code rather than sending them a list of vulnerabilities to address weeks (sometimes even months) after they have moved on to another project.
Naturally, there is a lot of trepidation when it comes to all things generative AI, but one near certainty is that the benefits far outweigh the risks, especially when it comes to developing applications that are secure by default.