Mindless testing, whether executed manually or automatically, can cover only a very narrow scope of possible bugs: the ones you’ve already thought of. Exploratory testing, however, endeavors to think outside that scope. Thoughtful testers look at a problem from all angles, weighing factors such as feature usage, psychology, user stories and more. Their goal is to chart the uncharted territory of an application.
The faster our development cycles become, the more thoughtful our testing should be. It’s only through techniques such as exploratory testing that we’ll be able to get there.
Continuous testing is the process of executing automated tests to obtain rapid feedback on the business risks associated with a software release. Where does that leave exploratory testing? It’s not automated, but it’s certainly critical for determining whether a release candidate has an acceptable level of risk.
Test automation is perfect for repeatedly checking whether incremental application changes break your existing functionality. However, where test automation falls short is in helping you determine whether new functionality truly meets expectations. Does it address the business needs behind the user story? Does it do so in a way that’s easy to use, resource-efficient, reliable and consistent with the rest of your application?
Exploratory testing promotes the creative, critical testing required to answer these questions. Obviously, it doesn’t make sense to repeat the same exploratory tests continuously across and beyond a sprint, but exploratory testing can be a continuous part of each delivery cycle.
Here are a few ways teams embed exploratory testing throughout their process.
One approach is paired exploratory testing, the exploratory equivalent of peer code review. When a developer completes a user story, they sit down with a tester. The tester starts testing while providing a running commentary on what they are doing and why. Next, the developer takes control, explaining how they would test the software given their knowledge of the implementation details and challenges. The developer gains a user- and business-focused perspective of the functionality, and the tester learns about the inherent technical risks.
Another tactic is to have the developer and a tester separately test the same feature simultaneously, then discuss their findings at the end of the session. Often, this turns testing into a competition, where each participant tries to uncover the most or “best” issues in the allotted time.
It’s simply not possible to perform exploratory testing or full regression testing on every code commit; per-commit checks are what smoke testing is for. Instead, many teams run full regression testing and session-based exploratory testing in parallel a few times per week, whenever they’ve implemented new functionality that a user could feasibly exercise.
For optimal results, these sessions should be lightly planned and tightly timeboxed, include diverse perspectives and take the Six Thinking Hats approach seriously.
The best way to uncover user experience issues before users do is to get a broad array of feedback prior to release. One way is to host “blitz” exploratory testing sessions. When you’re wrapping up work on critical new functionality, invite people from a variety of backgrounds and teams to participate in a short, timeboxed session. Incentives can help drive participation, maximize results and make testing fun.
Using test automation to continuously check the integrity of existing functionality is certainly critical. However, if you’re not also making exploratory testing a continuous part of your process, how will you know if the new functionality meets expectations?
The goal of continuous testing is to understand whether a release candidate has an acceptable level of risk. Exploratory testing is perfectly suited for helping you answer that critical question.