
4 Steps To Achieve a 66% Reduction in Test Run Time

Here are ways organizations can reduce their test run times to improve speed and efficiency

Consult any recent DevOps survey and the same theme emerges: Testing remains one of the biggest roadblocks impeding organizations’ collective efforts to deliver better software, faster. Unlike days past, however, when agile development was still in its relative infancy and investments in testing lagged other areas of the delivery pipeline, organizations today are increasingly aware of the critical role continuous testing plays in delivering quality software at speed, and are aggressively investing in the requisite testing tools and technologies.

No, the problem today is not lack of awareness or tooling. The problem, quite simply, is that testing is difficult, and only gets more difficult as organizations look to drive automation at scale. Nowhere is this challenge more evident than in organizations’ ongoing struggles with test quality. A recent study showed only 24.37% of organizations pass at least 90% of their desktop tests and only 25.45% pass at least 90% of their mobile tests. Remember, the purpose of agile development is to accelerate application delivery and shorten the release cycle. Organizations can do neither if they constantly have to follow up manually to determine the source of repeated test failures.

Doing It Right

Organizations must find ways to improve test quality and speed. A major reason so many tests fail is that they take far too long to run: the longer a test runs, the more likely it is to end in failure. According to the aforementioned study, tests that complete in two minutes or less are twice as likely to pass as those that last longer than two minutes.

With that in mind, let’s take a look at four key steps testers and developers can implement to reduce test run times.

Script Atomic Tests

There’s no more important best practice for creating a better, faster automation experience than running atomic tests. These tests focus on just one piece of application functionality and are much faster and easier to execute than longer tests that assess multiple pieces of functionality.

Let’s examine what this looks like in practice. Suppose an online retailer needs to validate that users can log in, view their gift card balance, add items to their cart, proceed to checkout and successfully process a transaction. Even an experienced tester will often make the mistake of scripting a single test to validate those five functions. The far better approach, especially if the aim is to reduce run time and improve test quality, is to write five separate tests, each specifically focused on one piece of functionality. Running individually but in parallel, these five “atomic” tests will execute in far less time than one longer test.
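To make this concrete, here is a minimal sketch of the atomic approach in Java with Selenium and JUnit 5. The application URL, element locators and the seedLoggedInSession() helper are hypothetical placeholders; the point is that each test validates exactly one piece of functionality and reaches the state it needs without replaying earlier steps through the UI.

```java
import org.junit.jupiter.api.*;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class AtomicCheckoutTests {

    private WebDriver driver;

    @BeforeEach
    void setUp() {
        driver = new ChromeDriver();
    }

    @AfterEach
    void tearDown() {
        driver.quit();
    }

    @Test
    void userCanLogIn() {
        // Validates login and nothing else.
        driver.get("https://shop.example.com/login");
        driver.findElement(By.id("username")).sendKeys("test-user");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("login-button")).click();
        Assertions.assertTrue(driver.findElement(By.id("account-menu")).isDisplayed());
    }

    @Test
    void userCanSeeGiftCardBalance() {
        // Reaches the required state directly instead of repeating the login UI flow.
        seedLoggedInSession(driver);
        driver.get("https://shop.example.com/account/gift-cards");
        Assertions.assertFalse(driver.findElement(By.id("gift-card-balance")).getText().isEmpty());
    }

    @Test
    void userCanAddItemToCart() {
        seedLoggedInSession(driver);
        driver.get("https://shop.example.com/products/12345");
        driver.findElement(By.id("add-to-cart")).click();
        Assertions.assertEquals("1", driver.findElement(By.id("cart-count")).getText());
    }

    // Hypothetical helper: injects an authenticated session cookie so each
    // test can start at the state it needs without walking the whole UI flow.
    private void seedLoggedInSession(WebDriver driver) {
        driver.get("https://shop.example.com");
        driver.manage().addCookie(new org.openqa.selenium.Cookie("session", "seeded-test-session"));
    }
}
```

The remaining checkout and payment scenarios would follow the same pattern: one test, one piece of functionality.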

Run Tests in Parallel

Of course, if those five atomic tests are executed sequentially, one after the other, they’ll take even longer to complete than the single end-to-end test, negating any potential runtime benefit. That’s why running tests in parallel is an equally critical component of any strategy to reduce test run times. Consider the hypothetical example of a test suite with 100 atomic tests, each of which takes two minutes to execute. Run those 100 tests sequentially and they’ll take more than three hours to complete. Run them in parallel, and all 100 will complete in just two minutes.

Remember, too, not to get thrown by the volume of tests in a given suite. It’s tempting to think that a suite with 10 long tests that each take 5 minutes to complete will execute faster than one with 100 atomic tests that each take 1 minute to execute. But even if executed in parallel, the suite with the 10 long tests will still take 5 minutes to run—five times longer than the suite with 100 atomic tests.
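As a sketch of how this might be configured, the JUnit 5 settings below enable parallel execution of test methods. The parallelism value of 10 is an arbitrary illustration and should match however many browser sessions your grid or cloud service can actually run at once; because each atomic test in the earlier sketch creates its own WebDriver instance, the methods are safe to run concurrently.

```properties
# junit-platform.properties (placed on the test classpath)
# Run test methods in parallel; each atomic test creates its own WebDriver
# instance in @BeforeEach, so concurrent execution is safe.
junit.jupiter.execution.parallel.enabled = true
junit.jupiter.execution.parallel.mode.default = concurrent
# Size the pool to match the number of concurrent browser sessions available
# (the value 10 here is purely illustrative).
junit.jupiter.execution.parallel.config.strategy = fixed
junit.jupiter.execution.parallel.config.fixed.parallelism = 10
```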

Reduce the Number of Selenium Commands

Having too many Selenium commands in your test script flies in the face of atomic testing and is one of the most common underlying causes of long and unstable tests. Every single command takes time to execute and represents a new opportunity for something to go wrong. Minimize the number of commands required to execute a test case and run time will shrink accordingly.
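As an illustration, the hypothetical test below reaches the page under test with a single navigation command instead of clicking through a chain of menus, so the remaining commands are spent only on the behavior being validated. The URL and locators are placeholders.

```java
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class GiftCardBalanceTest {

    @Test
    void balanceIsVisibleWithMinimalCommands() {
        WebDriver driver = new ChromeDriver();
        try {
            // Before (hypothetical flow): five commands just to reach the page --
            // open the home page, click the account menu, click settings, click
            // the payments tab, click the gift-cards link -- each one a chance
            // to slow the test down or make it fail.
            // After: navigate straight to the page under test with one command.
            driver.get("https://shop.example.com/account/gift-cards");
            Assertions.assertFalse(
                    driver.findElement(By.id("gift-card-balance")).getText().isEmpty());
        } finally {
            driver.quit();
        }
    }
}
```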

Use Explicit Waits

Another effective way to reduce test run time is to use explicit waits rather than implicit waits. An implicit wait sets one global timeout that applies to every element lookup in a test script, so whenever an element cannot be found, the driver keeps polling until that full timeout elapses before moving on. An explicit wait, on the other hand, is tied to a specific condition and lets the next step in the script execute the moment that condition is met. Though more complicated to implement, using only explicit waits can have a significant positive impact on test run time.
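Here is a minimal sketch of the difference, again with a hypothetical URL and locator: the commented-out implicit wait would apply a blanket timeout to every element lookup, while the explicit WebDriverWait proceeds the instant the expected condition is satisfied.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class ExplicitWaitExample {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Avoid this: a global implicit wait applies to every element lookup
            // in the script and obscures how long each step really needs.
            // driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(10));

            driver.get("https://shop.example.com/account/gift-cards");

            // Explicit wait: continue as soon as the element is visible, and
            // fail fast after 10 seconds if it never appears.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            WebElement balance = wait.until(
                    ExpectedConditions.visibilityOfElementLocated(By.id("gift-card-balance")));
            System.out.println("Balance: " + balance.getText());
        } finally {
            driver.quit();
        }
    }
}
```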

Putting it Together

Consider, for example, an unoptimized test suite with 311 commands run against a mobile device. That suite took 1,265 seconds (or more than 21 minutes) to execute. If delivering better software faster is the aim, that’s not going to work.

Applying the aforementioned tactics to the same test suite executed against the same device, however, drove the total number of commands down to just 127 and enabled the suite to execute in just 430 seconds (or just over 7 minutes), a 66% improvement in test run time.

Though the prospect of reducing test run time may be daunting, teams that focus on a limited set of key tactics and best practices will find that the end goal of shorter, more stable tests is indeed attainable.

Nikolay Advolodkin

Nikolay Advolodkin is a senior solutions architect at Sauce Labs and CEO and test automation instructor at UltimateQA.com, a training site covering the gamut of testing topics and technologies.
