Exploratory Testing: Expanding Testing Across the Delivery Cycle

Mindless testing, whether executed manually or automatically, can cover only a very narrow scope of possible bugs: the ones you’ve already thought of. Exploratory testing, however, endeavors to think outside that scope. Thoughtful testers look at a problem from all angles, weighing factors such as feature usage, psychology, user stories and more. Their goal is to chart the uncharted territory of an application.

The faster our development cycles become, the more thoughtful our testing should be. It’s only through techniques such as exploratory testing that we’ll be able to get there.

Continuous testing is the process of executing automated tests to obtain rapid feedback on the business risks associated with a software release. Where does that leave exploratory testing? It’s not automated, but it’s certainly critical for determining whether a release candidate has an acceptable level of risk.

Test automation is perfect for repeatedly checking whether incremental application changes break your existing functionality. However, where test automation falls short is in helping you determine whether new functionality truly meets expectations. Does it address the business needs behind the user story? Does it do so in a way that’s easy to use, resource-efficient, reliable and consistent with the rest of your application?

Exploratory testing promotes the creative, critical testing required to answer these questions. Obviously, it doesn’t make sense to repeat the same exploratory tests continuously across and beyond a sprint, but exploratory testing can be a continuous part of each delivery cycle.

Exploratory Testing in Action

Here are a few ways teams embed exploratory testing throughout their process.

Perform Ad Hoc Exploratory Testing as Each User Story is Implemented

This is the exploratory testing equivalent of peer code review. When a developer completes a user story, they sit down with a tester. The tester starts testing while providing a running commentary on what they are doing and why. Next, the developer takes control, explaining how they would test the software given their knowledge of the implementation details and challenges. The developer gains a user- and business-focused perspective of the functionality, and the tester learns about the inherent technical risks.

Another tactic is to have the developer and a tester separately test the same feature simultaneously, then discuss their findings at the end of the session. Often, this turns testing into a competition, where each participant tries to uncover the most or “best” issues in the allotted time.

Align Exploratory Testing Sessions with Full Regression Testing

It’s simply not possible to perform exploratory testing or full regression testing on every code commit. That’s what smoke testing is for. Instead, many teams run full regression testing and session-based exploratory testing in parallel a few times per week, whenever they’ve implemented new functionality that a user could feasibly exercise.
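The split described above can be sketched in code. This is a minimal, hypothetical illustration (the suite names and trigger values are assumptions, not from any particular CI tool) of how a pipeline might choose between a fast smoke suite on every commit and the full regression suite on the scheduled runs that accompany exploratory sessions:

```python
# Hypothetical sketch: route pipeline triggers to test suites.
# "smoke" runs on every commit for rapid feedback; "regression"
# runs only a few times per week alongside exploratory sessions.

SUITES = {
    "smoke": [
        "test_login",
        "test_checkout_happy_path",
    ],  # minutes to run
    "regression": [
        "test_login",
        "test_checkout_happy_path",
        "test_refunds",
        "test_locale_formatting",
        "test_concurrent_sessions",
    ],  # much longer to run
}


def select_suite(trigger: str) -> list:
    """Pick the test suite for a pipeline trigger.

    'commit'    -> smoke tests only, on every push
    'scheduled' -> full regression, a few times per week
    """
    return SUITES["regression"] if trigger == "scheduled" else SUITES["smoke"]
```

In practice the same idea is usually expressed with test tags or markers rather than explicit lists, but the routing logic is the same: the trigger, not the tester, decides how much checking runs.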

For optimal results, these sessions should be lightly planned and tightly timeboxed, include diverse perspectives and take the Six Thinking Hats method seriously.

Host Blitz Exploratory Sessions for Critical Functionality

The best way to uncover user experience issues before users do is to get a broad array of feedback prior to release. One way is to host “blitz” exploratory testing sessions. When you’re wrapping up work on critical new functionality, invite people from a variety of backgrounds and teams to participate in a short timeboxed session. Incentives can help drive participation, maximize results and make testing fun.

Using test automation to continuously check the integrity of existing functionality is certainly critical. However, if you’re not also making exploratory testing a continuous part of your process, how will you know if the new functionality meets expectations?

The goal of continuous testing is to understand whether a release candidate has an acceptable level of risk. Exploratory testing is perfectly suited for helping you answer that critical question.

Ingo Philipp

Ingo Philipp is on the product management team at Tricentis. In this role his responsibilities range from product development and product marketing to test management, test conception, test design and test automation. His experience with software testing spans agile as well as classical testing methodologies in sectors including financial services, consumer goods, commercial services, healthcare, materials, telecommunications and energy.
