Secure Code Warrior has made available a set of security rules for application developers using artificial intelligence (AI) tools to generate code.
Company CTO Matias Madou said the AI Security Rules, made available on GitHub, are intended to encourage developers to review AI-generated code for security issues these tools may inadvertently introduce. Most AI tools were trained on samples of code collected indiscriminately from across the web; as such, they can reproduce vulnerabilities simply because the code they learned from contains those same flaws, noted Madou.
Additionally, the AI Security Rules encourage developers to establish guardrails that steer AI away from risky patterns and common security missteps, such as insecure authentication flows or failure to use parameterized queries.
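To make the parameterized-query point concrete, here is a minimal sketch, not taken from the Secure Code Warrior rules themselves, contrasting the risky pattern with the safer one using Python's standard sqlite3 module:

```python
import sqlite3

# Illustrative sketch: the kind of pattern a guardrail would steer
# an AI assistant toward when building database queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_input = "alice' OR '1'='1"  # hostile input a naive query would mishandle

# Risky pattern: interpolating user input directly into the SQL string.
# query = f"SELECT * FROM users WHERE name = '{user_input}'"

# Safer pattern: a parameterized query keeps the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection payload matches no user
```

The difference is that the placeholder binds the input as a value, so a payload like the one above can never be executed as part of the SQL statement itself.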
Secure Code Warrior has previously made available AI security training courses for application developers, which Madou said will continue to be expanded.
From a cybersecurity perspective, AI coding tools have both pluses and minuses. They are, for example, unlikely to introduce a SQL injection vulnerability; in fact, there may come a day when that particular vulnerability is eliminated entirely as AI coding tools become more widely used, noted Madou. However, these tools also introduce other potential security concerns, including hallucinations, in which generated code references packages or functions that do not exist.
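As a hypothetical illustration of how a team might catch a hallucinated dependency before it reaches a build, the sketch below checks whether a suggested module can actually be resolved; "fastjsonx" is an invented name standing in for an AI-suggested package, not a real, vetted library:

```python
import importlib.util

def module_is_available(name: str) -> bool:
    """Return True if the named module can actually be imported."""
    return importlib.util.find_spec(name) is not None

# "fastjsonx" is a hypothetical module name standing in for an
# AI-suggested dependency; it is not a real, vetted package.
for suggested in ("json", "fastjsonx"):
    status = "found" if module_is_available(suggested) else "NOT FOUND -- verify before use"
    print(f"{suggested}: {status}")
```

A check like this only confirms that a module exists in the current environment; it says nothing about whether the package is trustworthy, which still requires human review.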
It’s not clear to what degree developers are relying on AI tools to generate code, but a recent Futurum Group survey finds that 41% of respondents expect generative AI tools and platforms to be used to generate, review and test code.
Additionally, The Futurum Group finds organizations are exploring multiple paths to employing AI capabilities across the software development lifecycle (SDLC). Over the next 12 to 18 months, organizations plan to increase spending not only on AI code generation (83%) and agentic AI technologies (76%), but also on familiar existing tools that have been augmented with AI.
In effect, a battle is now underway between incumbent tool providers and startup rivals for the hearts and minds of software engineers.
Regardless of how code is written, an application developer remains accountable for its quality. The challenge is that, over time, there is a tendency to trust the output of an AI coding tool too much. Inevitably, some generated code will not only contain vulnerabilities but may also be flawed in ways that prevent an application from running at all. Just as challenging, AI coding tools may not be of much help when updating existing application environments, simply because they lack the context needed to generate code that will work, noted Madou.
On the other hand, many developers have only a rudimentary appreciation of security issues. In those instances, AI coding tools can significantly improve the quality of the code that would otherwise have been written.
There is, of course, no going back; AI coding tools are here to stay. The real issue is putting guardrails in place that minimize the vulnerabilities and other flaws that might compromise an application.