In my prior blog, Continuous Testing – The Quest for Quality at Speed, I described five tenets and some of the practices for continuous testing to help explain what continuous testing is. In my consulting work, I find it necessary to use 15 categories of practices to assess an organization's continuous testing capabilities. Given the large number of practices, I am publishing the complete list in three parts.
This article is part two, which covers the following categories of continuous testing practices:
• Test automation
• Test tools
• Test infrastructure management
• Test scripts
• Test results
You can read part one here.
Test Automation
• Test automation requirements are defined.
• New automated tests are created for each new feature in compliance with the test strategies for the product or service.
• Automated test cases are reviewed and tested to ensure they satisfy the purpose of the test.
• Automated test cases are compliant with test case standards for the product or service.
• Automated test cases are designed to run independent of each other.
• All testing tools in a pipeline are integrated into a common framework that enables orchestration of test resources and automation of test tasks.
• A test framework supports automation of everything needed for testing including test cases, test tools, applications, test infrastructures, and test data.
• A test framework can be invoked, controlled and released through an application programming interface (API) so that it can be integrated into a DevOps continuous delivery toolchain.
• Features provided by test frameworks include capabilities to orchestrate the test infrastructure, select tests, control the execution of tests, monitor tests in progress and report test results, logs and verdict data.
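The practice of designing automated test cases to run independently of each other can be sketched with a minimal example. The test names and the cart helper below are hypothetical; the point is that each test builds its own state, so the tests can run in any order, alone, or in parallel:

```python
import unittest

class CheckoutTests(unittest.TestCase):
    """Each test creates its own fixture state, so no test depends on
    another test's side effects or execution order."""

    def _new_cart(self):
        # Hypothetical helper: every test gets a fresh, isolated cart.
        return {"items": [], "total": 0}

    def test_add_item(self):
        cart = self._new_cart()
        cart["items"].append("book")
        cart["total"] += 12
        self.assertEqual(cart["total"], 12)

    def test_empty_cart_total_is_zero(self):
        # Fresh state; deliberately does not reuse test_add_item's cart.
        cart = self._new_cart()
        self.assertEqual(cart["total"], 0)
```

Because neither test reads state written by the other, a framework is free to select, reorder, or parallelize them, which is what enables the test selection and orchestration features described above.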
Test Tools
• Selection criteria are defined for test tools. The criteria include requirements for coverage, automation features, support, compatibility with the automation framework and compatibility with test progress reporting systems.
• Test tool software versions are maintained in a source version management system.
• A common test automation framework is defined for the entire organization to ensure all test tools can integrate and be orchestrated with common test automation practices.
• Test automation tools are available for testing all features of an application that can be automated.
• Manual testing tools are available for testing features that must be tested manually. These tools can report results to a common test management reporting system.
• Test authoring tools support input error checking and features that encourage script development according to best practices. Examples of such features include the ability to create and use a repository of known good test utilities for control and analysis of the system under test.
• DevOps-ready test tools are employed that easily integrate into a test framework and toolchain and elastically scale on demand, vertically or horizontally, to match the varying capacity and workload demands of tests as software changes move through the continuous delivery pipeline.
• DevOps-ready tools can be orchestrated, scaled, invoked, controlled and monitored from an API.
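A DevOps-ready tool that can be invoked, controlled, monitored and released from an API might expose a surface like the sketch below. All class, method and field names here are illustrative assumptions, not the API of any particular framework; a real implementation would run tests asynchronously against real infrastructure:

```python
from dataclasses import dataclass, field

@dataclass
class TestRun:
    run_id: str
    status: str = "pending"            # pending -> running -> finished
    verdicts: dict = field(default_factory=dict)

class TestFrameworkAPI:
    """Hypothetical API surface a continuous delivery toolchain could use
    to invoke, monitor and release test runs programmatically."""

    def __init__(self):
        self._runs = {}

    def invoke(self, run_id, tests):
        # Stand up test infrastructure and start the selected tests.
        run = TestRun(run_id, status="running")
        # Canned verdicts stand in for real asynchronous execution.
        run.verdicts = {t: "Pass" for t in tests}
        run.status = "finished"
        self._runs[run_id] = run
        return run_id

    def status(self, run_id):
        return self._runs[run_id].status

    def results(self, run_id):
        return self._runs[run_id].verdicts

    def release(self, run_id):
        # Tear down (release) the infrastructure held by this run.
        del self._runs[run_id]
```

A pipeline stage would call `invoke` when a change enters the stage, poll `status`, collect `results` for the reporting system, and call `release` so the elastic infrastructure can be reclaimed.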
Test Infrastructure Management
• Standards for describing resources and topologies for test configurations are documented.
• Techniques to accelerate test execution, such as vertical and horizontal scaling of test campaigns, are used.
• Test resources and topology files are maintained in a software version management system.
• Orchestration and automation tests are conducted to verify that the software supporting the orchestration and automation layers satisfies its assessment criteria.
• The test infrastructure used for each stage in the pipeline is a replica of production configuration variations, as much as is practical.
• Infrastructures for testing are elastic, with the ability to stand up and release (i.e. orchestrate) infrastructure configurations as needed for use with specific tests. Typical techniques that are employed to orchestrate test environments include dynamic infrastructure configuration tools, infrastructure-as-code (IaC) strategies, cloud services, testing-as-a-service (TaaS) providers and use of tools to stand up and release applications and test resources packaged in immutable containers.
• Database migration, replication and restore tools also need to be tested to ensure they can function quickly enough to keep up with the speed of the DevOps continuous deployment pipeline cadence.
• Health checks on the test environment are used to reduce the chance of false negative test verdicts that could be caused by test environment failures.
Test Scripts
• Test scripts are maintained in a software version management system.
• Guidelines for test script and test suite practices are documented.
• Test utilities, tests and test suites follow guidelines for documentation, test results reporting, reuse and maintenance practices.
• Test script peer reviews are conducted to encourage collaboration and to ensure test script best practices are used.
• Each test script has a unique identification.
• The test type of the script is identified (for example, sanity test, build test, smoke test, security, acceptance test, conformance test, functional test, unit test, patch test, bug fix test, integration test, regression test, performance test, system test, other).
• Evidence, including an execution log, demonstrates that each test verdict is correct.
• Tests determine at least one verdict, which may be ‘Pass,’ ‘Fail’ or ‘Inconclusive.’
• Verdict and assertion priorities are recorded or reported.
• Tests are data-driven. Data inputs control execution of the test.
• Each test script is tagged with a codified risk level.
• Failure verdicts reported by the script include a tag for risk priority.
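Several of the script practices above (unique identification, a declared test type, a Pass/Fail/Inconclusive verdict, and a risk tag on failures) can be combined into one illustrative result record. The field names below are assumptions for the sketch, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    INCONCLUSIVE = "Inconclusive"

@dataclass(frozen=True)
class TestResult:
    """What each script reports to the results system (illustrative)."""
    test_id: str       # unique identification of the script
    test_type: str     # e.g. "smoke", "regression", "security"
    verdict: Verdict   # at least one verdict per test
    risk: str = "low"  # codified risk level, e.g. low/medium/high

def format_report_line(result: TestResult) -> str:
    line = f"{result.test_id} [{result.test_type}] -> {result.verdict.value}"
    if result.verdict is Verdict.FAIL:
        line += f" (risk={result.risk})"   # failure verdicts carry a risk tag
    return line
```

Structuring verdicts this way lets a common reporting system sort failures by risk priority rather than treating all failures equally.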
Test Results
• Techniques are used to ensure that test results analysis keeps pace with accelerated testing. Examples include test design techniques, analysis tools built into test frameworks, test results dashboards and test results analyzers that run in parallel with the tests.
• Quality assurance goals include specific thresholds for test coverage prior to release such as run-to-plan (RTP) and pass-to-plan (PTP) minimum thresholds. RTP is a measure of how many tests of the total planned tests have completed. PTP is a measure of how many of the completed tests pass. For example, 95% RTP and 90% PTP.
• Quality goals include a measurement and threshold for reliability. For example, S-curves for pass rates can indicate reliability when the pass rate stabilizes for a period. Example: 95% pass rate for three consecutive test cycles.
• Time to recover a failed service is routinely tested and used to calculate mean-time-to-recover (MTTR).
• Monitoring and management tests are conducted to verify that the monitoring and management capabilities of the pipeline satisfy their assessment criteria.
• Technical and business metrics are designed to monitor the success of DevOps testing for technical continuous improvement and to justify business cases needed to implement those improvements.
• Input and output metrics and acceptance criteria exist for each stage in the value stream.
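The RTP and PTP thresholds described above reduce to simple arithmetic, sketched here as a release-gate check. The function name and threshold defaults (mirroring the 95% RTP / 90% PTP example) are illustrative:

```python
def release_gate(planned, completed, passed,
                 rtp_threshold=95.0, ptp_threshold=90.0):
    """Run-to-plan (RTP) = completed tests as a share of planned tests;
    pass-to-plan (PTP) = passing tests as a share of completed tests.
    Returns (rtp, ptp, gate_ok)."""
    rtp = 100.0 * completed / planned
    ptp = 100.0 * passed / completed if completed else 0.0
    return rtp, ptp, (rtp >= rtp_threshold and ptp >= ptp_threshold)
```

For example, 192 of 200 planned tests completed with 180 passing gives 96% RTP and 93.75% PTP, which clears a 95%/90% gate; the same run with only 160 passes (83.3% PTP) would block the release.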
What This Means
Continuous testing (CT) practices are critical to success with DevOps. Many organizations underestimate both the importance and the practices needed to engineer continuous testing. This article is the second in a three-part series that provides a comprehensive list of practices that can be used to understand and assess an organization’s continuous testing capabilities. To learn more about my blueprint for continuous testing, refer to my book Engineering DevOps.