It is all the rage to talk about how significantly large language models (LLMs) and generative AI will change every niche and corner of IT. And change IT it will. But the hype cycle is at its highest right now, and I’d caution against getting too worked up about it. We’re in for a few interesting years, and the spaces I cover (DevOps and security—with a more application-centric perspective but with ops experience) are definitely going to benefit from the various AI bits being worked in.

All testing—including AppSec, but all testing—will benefit both in test generation and in results analysis/filtering. In fact, I’ll go ahead and say that the question in testing will no longer be, “How much time do we have to devote to it?” and will rapidly become, “How many system resources do we have to dedicate to it?” It will still be a long process—AI isn’t speeding up the cumulative time it takes to run thorough testing (be it functional or security). It will make runtime the bottleneck, though, because test generation will at least be LLM-assisted, and will eventually become completely generative AI-created, while at the same time results will get more and more filtering and analysis. A minor result here plus a minor result there will automatically be detected as an extended vulnerability.

There are a ton of other possibilities, but these two seem pretty obvious when I look across the testing technology space. More testing and better results analysis are on the horizon—without requiring more staff. Of course, you can do so much more, which would require the organization to add staff to follow up on results, but that’s nothing new; testing finds issues that are disruptive and take man-hours to fix—we all know of bits in the application portfolio that we should probably fix but that aren’t high enough priority to do so, for example. But fixing is different from finding. I have seen demos of security scanners that offer auto-fix options for developers.
Run the test during CI, create a ticket, and let the developer see the suggested code rework right in the IDE. It is early days for this, but if you look, you can find vendors doing it. And that’s really rather cool.
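The scan-ticket-suggested-fix flow described above can be sketched in a few lines. This is a minimal, hypothetical sketch—`scan_repo`, `Finding` and `create_ticket` are stand-ins invented for illustration, not any particular vendor’s API:

```python
# Sketch of the CI flow: run a scan, file a ticket per finding, and
# attach the suggested rework so it can surface in the developer's IDE.
# scan_repo, Finding, and create_ticket are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str           # e.g. "sql-injection"
    file: str
    line: int
    suggested_fix: str  # auto-generated replacement code

def scan_repo(path: str) -> list[Finding]:
    # Stand-in for a real scanner invocation; returns a canned finding here.
    return [Finding("sql-injection", "app/db.py", 42,
                    'cursor.execute("SELECT * FROM users WHERE id = %s", (uid,))')]

def create_ticket(finding: Finding) -> dict:
    # Stand-in for a tracker API call; only builds the ticket payload.
    return {
        "title": f"{finding.rule} in {finding.file}:{finding.line}",
        "body": "Suggested rework:\n" + finding.suggested_fix,
    }

def ci_security_gate(path: str) -> list[dict]:
    # One ticket per finding, each carrying its suggested fix.
    return [create_ticket(f) for f in scan_repo(path)]

if __name__ == "__main__":
    for ticket in ci_security_gate("."):
        print(ticket["title"])
```

The point is the shape of the pipeline step, not the specifics: scan output flows straight into tracked work items that already contain a candidate fix.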
But we’ve all heard the “this changes everything” storyline over and over. 4GLs, XML, Java and JavaScript were all going to eliminate developers. In fact, they all created more developers. AI has a better chance of actually removing parts of our workload than any of these technologies did, but we’re still in “watch and wait” mode. It seems obvious that grunt coding will fall to LLMs … And aside from keeping staff trained to manage source, that is a good thing. We use open source, modules and libraries to do much the same thing today. I see this as equivalent to entry-level writing jobs. We’ll have to transform from the current model, but the current entry-level development model wasn’t ideal anyway, so let’s make a better one.

Testing capabilities will be greatly enhanced, but testing of all kinds was so far behind new code generation at most orgs that the impact will be all positive as testing catches up. AI-based test generation today can generate thousands of permutations for a given piece of code and analyze the results, so that analysts need only look at the truly disconcerting or confusing ones. And the question will become, “How many servers do we want to dedicate to run this?” Which is different, but just means we have more tests running to make a more solid and/or secure product and environment.
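The generate-permutations-then-filter idea above is easy to picture in miniature. Everything here is invented for illustration—`parse_port` is a toy function under test, and the triage rule is a deliberately simple stand-in for the AI-driven analysis the article anticipates:

```python
# Sketch of machine-generated test permutations with automatic result
# filtering: enumerate many inputs, run the code under test, and surface
# only the suspicious outcomes for a human to review.
from itertools import product

def parse_port(value: str) -> int:
    # Toy function under test: parse a TCP port, returning -1 on bad input.
    try:
        port = int(value)
    except ValueError:
        return -1
    return port if 0 <= port <= 65535 else -1

def generate_cases() -> list[str]:
    # Mechanical permutation of input fragments -- the kind of grunt
    # work test generation is expected to take over.
    prefixes = ["", " ", "-", "0x"]
    bodies = ["0", "80", "65535", "65536", "abc", ""]
    return [p + b for p, b in product(prefixes, bodies)]

def triage(cases: list[str]) -> list[tuple[str, int]]:
    # Filter step: flag only results that look odd (a valid port parsed
    # from input with stray whitespace) instead of dumping every result.
    flagged = []
    for case in cases:
        result = parse_port(case)
        if result >= 0 and case != case.strip():
            flagged.append((case, result))
    return flagged

if __name__ == "__main__":
    cases = generate_cases()
    print(f"{len(cases)} cases generated, {len(triage(cases))} flagged")
```

Scale the fragment lists up by a few orders of magnitude and the question really does become how many servers you want to throw at it, while the analyst only ever sees the short flagged list.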
So, in summary, is change coming? Yes. And that is not significantly different from the last couple of decades, where change has become IT’s constant companion. For most of you, the way you do your job today is not the same as the way you did it even two years ago. ML/GAI/LLMs are just another iteration. The apps still have to be built, deployed and maintained. So until HAL comes along, keep kicking it. You’re keeping it together, and who knows what the future brings.