Caveat: I currently cover DevOps – including test – and security – including the *AST. This does give me a viewpoint that others may not have, and also potentially gives me blinders that others may not have. I am not generally a predictor but a reader of direction. Proclamations like this are rare for me and are shorter-term/more tactical than the grab bag of predictions usually seen this time of year.
I was thinking about this one this morning: the directions of travel of traditional testing and security testing are very similar at the moment. The vast complexity of the overall space and the rapid growth of automation are coming together to once again make all testing a topic worthy of consideration.
I loved test-driven development (TDD). I can’t speak for others, but RPC/APIs are just a cool way to implement centralized functionality, and TDD is almost mandatory for developing APIs and the clients that call them. In API development there is no UI to distract or impose separate requirements; there is a stated function and correct or incorrect behavior. To a computer scientist, this is predictable beauty. TDD made it more than predictable: it made it a trail through the woods. “We know it needs to do X, Y and Z, with the only side effects being A” can be captured in tests that prove that is what it does, and then extended to prove the “it doesn’t have other side effects” part. And you can do it all while developing, with no need for dev/test iterations. Like any IT solution, it isn’t a great fit for everything, but even if it isn’t called TDD, it is nearly impossible to develop API-based solutions without some level of TDD.
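To make that concrete, here is a minimal sketch of what “tests first, then code” looks like for an API call. Everything here is invented for illustration: the `bank_api` module, its `transfer` function and `InsufficientFunds` exception are the hypothetical API these tests would drive into existence, not a real library.

```python
# test_transfer.py -- written before the implementation, TDD style.
# The hypothetical API contract: transfer(ledger, src, dst, amount) moves
# funds between two accounts and has no other side effects.
import pytest

from bank_api import transfer, InsufficientFunds  # hypothetical module under development


def make_ledger():
    return {"alice": 100, "bob": 25}


def test_transfer_moves_funds():
    ledger = make_ledger()
    transfer(ledger, "alice", "bob", 40)
    assert ledger["alice"] == 60
    assert ledger["bob"] == 65


def test_transfer_rejects_overdraft():
    ledger = make_ledger()
    with pytest.raises(InsufficientFunds):
        transfer(ledger, "alice", "bob", 500)


def test_transfer_has_no_other_side_effects():
    ledger = make_ledger()
    transfer(ledger, "alice", "bob", 10)
    # Only the two accounts involved may change; no new keys, no deletions.
    assert set(ledger) == {"alice", "bob"}
```

The tests fail until `transfer` exists and behaves, which is the whole point: the “does X, Y and Z” and the “no other side effects” claims are both written down before the first line of implementation.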
The thing is, TDD is absolutely a developer’s tool. Developers and DevOps pretty much must use it to turn out usable APIs. The test team only has a use for a subset of those tests, because if the dev team did it right, the code already passes all the TDD tests; the test team’s role in that scenario is to look closely at what the developers might have missed. Integration, for example, is often beyond the control of an individual dev and/or API, because many different clients may be integrating with it. Test teams, in a true TDD shop, fill the gaps with meaningful tests that evaluate the overall application quality.
Security testing, at least static application security testing (SAST) and increasingly dynamic application security testing (DAST), is also becoming a core part of the development process. This is as it always should have been. It is far easier to fix a security flaw (or any flaw) at the design/early code stage than in a testing process after deployment. The quality of security testing inputs is also increasing, making it far more viable. And finally, the tools are being effectively integrated. As a developer who did a lot of security work because we often needed it and had no one else to do it, I will say I get a little rush from being told right in my IDE about the security risks in my code. I write, I see, I revise where necessary, and I turn out better code, all without having to pause for security or bang heads over policy.
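For a feel of what that in-editor feedback looks like, here is a contrived Python snippet with two classic findings that a SAST scanner such as Bandit or Semgrep typically flags, followed by the revision I’d make on the spot. The function names and schema are made up for illustration.

```python
import sqlite3
import subprocess


# Before: the kind of code a SAST scanner flags right in the editor.
def find_user_risky(conn: sqlite3.Connection, username: str):
    # Finding: SQL built by string formatting -> injection risk.
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()


def list_dir_risky(path: str):
    # Finding: shell=True with untrusted input -> command injection risk.
    return subprocess.run(f"ls {path}", shell=True, capture_output=True, text=True).stdout


# After: revised as soon as the warning shows up, long before any formal review.
def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query; the driver handles escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()


def list_dir(path: str):
    # Argument list, no shell involved.
    return subprocess.run(["ls", path], capture_output=True, text=True).stdout
```

Running something like `bandit -r .` in the pipeline catches the same patterns in anything that slips past the editor.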
To date, all of this testing is separate, but because it sits in roughly the same place in the development process, I suspect the testing will increasingly happen together and the markets may even start to merge a bit. A merger of security testing and functional/performance testing tools is still a bit of a stretch, but it is more possible than it was in the past. In most orgs, even though developers have an increasing stake in both sets of testing, the target purchasers are still two different groups, security versus test, with different priorities. That’s why I think the tools aren’t likely to merge any time soon.
While AI in most technology spaces is currently experiencing the same level of FUD that XML, Java, JavaScript or low-code did in the past, all super-hyped tools in tech do have a sweet spot. Security and testing are in that sweet spot for AI. In both test generation and results evaluation, AI does the job better, faster and with more coverage, while giving staff a concrete set of results to pore over to improve security posture and/or code quality. I cannot say loudly enough that in terms of enabling the steps in the SDLC that have suffered from a lack of resources and time to write, execute and evaluate tests, AI is a game-changer.
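As a sketch of what AI-assisted test generation can look like, the snippet below asks a model to draft pytest cases for a module and drops them into the suite for a human to review. The prompt wording, model name and file layout are assumptions, and the OpenAI client is just one stand-in for whatever assistant your toolchain exposes.

```python
# Sketch: AI-assisted test generation, with a human still reviewing the output.
# Assumes the openai package (>=1.0) and an OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative, not a recommendation.
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def draft_tests_for(source_file: str, out_dir: str = "tests/generated") -> Path:
    source = Path(source_file).read_text()
    prompt = (
        "Write pytest unit tests for the following Python module. "
        "Cover normal cases, edge cases and failure modes:\n\n" + source
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    out = Path(out_dir) / f"test_{Path(source_file).stem}_generated.py"
    out.parent.mkdir(parents=True, exist_ok=True)
    # The raw model output may need cleanup; generated tests go through
    # code review like any other change before they gate a build.
    out.write_text(resp.choices[0].message.content)
    return out
```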
So, all of these things together mean that 2024 is a good time to take a hard look at your testing environment(s) and architecture, both traditional testing and security testing. Now that AI has made advanced automation a fact, it is time to consider implementing the level of testing we always knew we should have but never had the resources for. Build it right into the build process and make it part of the organizational DNA, just like Agile is.
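“Built into the build” can be as simple as one gate step that runs functional tests and a SAST scan side by side and fails the build if either objects. Here is a sketch of that idea as a plain script rather than any particular CI product’s syntax; the tool choices (pytest and Bandit) and the src/ and tests/ paths are assumptions, so swap in whatever your pipeline already uses.

```python
#!/usr/bin/env python3
# quality_gate.py -- one build step that runs functional tests and a SAST scan
# together and fails the build if either finds problems.
# Assumes pytest and bandit are installed and code lives under src/ and tests/.
import subprocess
import sys

CHECKS = [
    ("unit/functional tests", ["pytest", "-q", "tests"]),
    ("static security scan", ["bandit", "-r", "src", "-q"]),
]


def main() -> int:
    failures = []
    for name, cmd in CHECKS:
        print(f"== {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failures.append(name)
    if failures:
        print("Build gate failed:", ", ".join(failures))
        return 1
    print("Build gate passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```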
If you could hire a specialist who would paw through your code and give you a list of the 25 things you need to fix first, you would do it. That is, essentially, what modern testing tools are. They can run more tests, evaluate more issues and side effects, and filter the results down to the “No, this is really important!” list without costing the team a ton of time.
And keep rocking it. We’re all sitting on bugs and vulnerabilities; I do hope we are way past denying that fact. Apps are big, old, complex monsters these days, sometimes built from generations of technologies cobbled together. Of course they have issues. Even yours. So let’s take steps to clean them up, just a little bit. And keep clipping along.