Fast continuous testing is counterproductive if the analysis of test results does not keep up with the speed of testing. In my prior blog, Continuous Testing – Accelerated!, I discussed the importance of and practices for speeding up continuous testing. Unless test analysis is as fast as the tests themselves, all that will happen is an accumulating debt of unanalyzed test results, resulting in, at best, no net time savings and, at worst, valuable results going overlooked and confusion that actually slows down the overall CI/CT cycle. Typically many development team members are waiting on results, so every minute of analysis time saved on each CI/CT cycle is multiplied across the whole team. Aggregated over multiple code branches, a few minutes saved per test result can add up to man-years of time saved over a year!
So what can be done to ensure continuous test results analysis keeps pace with accelerated continuous testing? Below are some suggestions, in checklist format, that have proven useful in high-performance DevOps deployments.
Determine clear responsibilities and SLAs for test results analysis: When the organization sets specific responsibilities and goals for results analysis, the entire DevOps process is accelerated.
• Assign first responders and second-line escalation team members with definitive workflows.
• Define a strategy for responding to test failures, such as pre-determined actions to take when failure rates exceed set thresholds for each category of test. Example actions include “continue with CI”, “revert changes”, or “stop and fix faults”, depending on failure priority (a minimal sketch follows this list).
• Train the team on best-practice test design, and peer-review the scripts to ensure they define clear verdicts.
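To make the threshold strategy concrete, here is a minimal Python sketch of a failure-response policy. The category names, threshold values, and action labels are illustrative assumptions, not a prescribed standard:

```python
# Hypothetical thresholds and actions; tune per organization and per test category.
FAILURE_POLICY = {
    "smoke":       {"threshold": 0.00, "action": "stop_and_fix"},    # any smoke failure halts CI
    "regression":  {"threshold": 0.05, "action": "revert_changes"},  # >5% failures reverts the change
    "performance": {"threshold": 0.10, "action": "continue_ci"},     # tolerated, but flagged for review
}

def decide_action(category: str, failed: int, total: int) -> str:
    """Return the pre-determined CI action for a category's failure rate."""
    policy = FAILURE_POLICY.get(category, {"threshold": 0.0, "action": "stop_and_fix"})
    rate = failed / total if total else 0.0
    return policy["action"] if rate > policy["threshold"] else "continue_ci"

print(decide_action("regression", failed=12, total=100))  # -> "revert_changes"
```

Codifying the policy this way lets the CI pipeline act on failures automatically instead of waiting for a human decision on every run.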
Design test scripts for efficient test results analysis: When the test scripts are designed for efficient results analysis, the process is accelerated. Here are some examples of good practices (illustrated in the sketch after this list):
• Each script must report a clear “Pass”, “Fail”, or “Inconclusive” verdict, together with a description of the verdict.
• Each verdict should be reported together with pre-defined tags that can be used to categorize the verdicts.
• Assign a priority code to each verdict. The highest priority verdicts should be analyzed first.
• If a script produces multiple verdicts, the highest-priority verdicts should be reported first where possible.
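The following is a minimal sketch of what such a verdict structure might look like in Python; the field names and priority scale are assumptions for illustration, not a standard schema:

```python
from dataclasses import dataclass, field

# A hypothetical verdict record combining outcome, description, tags, and priority.
@dataclass
class Verdict:
    outcome: str          # "Pass", "Fail", or "Inconclusive"
    description: str      # human-readable explanation of the verdict
    tags: list = field(default_factory=list)  # pre-defined categorization tags
    priority: int = 3     # 1 = highest; analyzed first

def report(verdicts):
    """Report the highest-priority verdicts first."""
    for v in sorted(verdicts, key=lambda v: v.priority):
        print(f"[P{v.priority}] {v.outcome}: {v.description} tags={v.tags}")

report([
    Verdict("Pass", "Login succeeded in 1.2 s", ["auth", "build-1234"], priority=3),
    Verdict("Fail", "Checkout returned HTTP 500", ["payments", "build-1234"], priority=1),
])
```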
Choose test tools that are designed for fast test results analysis: Look for the following features when evaluating test tools. Not only will this save you headaches in the long term, it will give you the flexibility to design tests that integrate with other DevOps tools (see the sketch after this list).
• RESTful APIs that do not require wait states between commands and responses.
• Support for a framework that enforces test results standards, ensuring each test, when completed, reports a clear verdict (“Pass”, “Fail”, or “Inconclusive”), a description of the verdict, event logs from both the test-system and system-under-test points of view, and tags that identify the build and test campaign plus functional attributes that can be used for cause analysis.
• All timers can be synchronized to a common clock across the entire test environment, and verdicts record timestamps at a precision appropriate for the fastest events relevant to the system under test.
• Caching features that enable results data to be processed for one test while the tool proceeds to execute other tests.
• Results data mapping and processing utilities integrated within the tool, so that results do not have to be parsed with slow custom scripting.
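As a sketch of the kind of structured, tag-rich result such a tool would accept, here is a hypothetical POST to a results API in Python. The endpoint, field names, and log references are assumptions for illustration:

```python
import json
import time
import urllib.request

# Hypothetical endpoint; a real test tool will define its own results API.
RESULTS_API = "http://results-server.example/api/v1/results"

payload = {
    "verdict": "Fail",
    "description": "Throughput below 9.5 Gbps floor",
    "timestamp_ns": time.time_ns(),  # high-precision stamp from the (assumed) common clock
    "tags": {"build": "1234", "campaign": "nightly", "feature": "routing"},
    "logs": {"test_system": "tx.log", "sut": "sut.log"},  # both points of view
}

# POST the structured result in a single request/response exchange,
# with no polling loop or wait state needed to record it.
req = urllib.request.Request(
    RESULTS_API,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req)
```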
Choose and configure test analysis tools to process results as they become available: In a best-practices, large-scale DevOps environment, test cases run in parallel across many machines simultaneously, and multiple code branches and versions may be under test at the same time. Powerful test result analysis tools collect and aggregate results data from all of the test machines, filter the results in real time according to functional tags, analyze the aggregated data to determine probable causes, and report the analyzed results, together with probable-cause information, in priority order to the most relevant first responders as a real-time, cumulative “results-so-far” view.
• Choose powerful servers for results aggregation and processing, with ample processor speed, I/O, and large, fast memory and storage. Consider separate servers for aggregation and analysis to allow result collection and processing threads to run in parallel without interfering with each other.
• Configure the system with fast high priority channels to collect all results data to a common results repository.
• The results repository should be large enough to hold the results of many past test campaigns, such as a month of results data, because best-practices analysis tools mine past data to determine failure trends.
• Results are collected across all test agents, then processed and reported in real time as verdicts arrive.
• Results are reported in categories by build, tag, priority, and probable cause, giving highest priority in the reports to code areas or features that exceed pre-set failure thresholds.
• Probable causes are determined automatically from statistical trend analysis that weights the most recent build results most heavily and also uses static code defect-density information to point to the code modules most likely to have caused the failures (sketched below).
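Here is a minimal sketch of recency-weighted probable-cause scoring in Python. The exponential decay factor and the way defect density is blended in are illustrative assumptions; real analysis tools will use more sophisticated models:

```python
# Score each code module as a failure suspect, weighting recent builds heaviest.
def module_suspect_scores(history, defect_density, decay=0.5):
    """history: list of {module: failure_count} dicts, oldest build first.
    defect_density: static defects-per-KLOC (or similar) per module."""
    scores = {}
    weight = 1.0
    for build in reversed(history):  # most recent build gets weight 1.0
        for module, failures in build.items():
            scores[module] = scores.get(module, 0.0) + weight * failures
        weight *= decay              # older builds count exponentially less
    # Blend in static defect density to bias toward historically fault-prone modules
    return {m: s * (1.0 + defect_density.get(m, 0.0)) for m, s in scores.items()}

history = [{"parser": 1}, {"parser": 2, "routing": 1}, {"routing": 4}]
print(module_suspect_scores(history, {"routing": 0.8, "parser": 0.1}))
# "routing" scores highest: it failed most in the newest build and is defect-dense
```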
Configure the workflow for optimal test results analysis: When optimized, test analysis can significantly reduce DevOps cycle interruptions. The optimal workflow uses just-in-time concepts such as the following (see the sketch after this list):
• Only send results to the appropriate first responders when failure thresholds are exceeded.
• Automatically escalate failures to the appropriate second-line escalation team members when the SLA time for first responders expires.
• Analyze the highest priority verdicts first.
• Prune or refactor tests that have a low failure history.
• Define threshold levels that will trigger an automatic revert of a change, but make sure the team gets all of the results and probable causes for analysis so they can fix the problem offline before re-applying the corrected change.
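To illustrate the just-in-time routing and escalation above, here is a minimal Python sketch. The routing table, SLA window, and failure record fields are hypothetical:

```python
import time

# Hypothetical routing table and SLA; names and timings are illustrative.
FIRST_RESPONDERS = {"payments": "alice", "routing": "bob"}
ESCALATION_TEAM = "second-line-oncall"
FIRST_RESPONSE_SLA_S = 30 * 60  # 30-minute SLA before automatic escalation

def notify(recipient, failure):
    print(f"notify {recipient}: {failure['tag']} failure rate {failure['rate']:.0%}")

def route_failure(failure, threshold=0.05):
    """Send only threshold-exceeding failures to the right first responder,
    then escalate automatically if the SLA expires unacknowledged."""
    if failure["rate"] <= threshold:
        return  # below threshold: no interruption of the DevOps cycle
    responder = FIRST_RESPONDERS.get(failure["tag"], ESCALATION_TEAM)
    notify(responder, failure)
    deadline = failure["reported_at"] + FIRST_RESPONSE_SLA_S
    if time.time() > deadline and not failure.get("acknowledged"):
        notify(ESCALATION_TEAM, failure)  # SLA expired: escalate to second line

route_failure({"tag": "payments", "rate": 0.12,
               "reported_at": time.time() - 3600, "acknowledged": False})
```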
The above is a partial list of suggestions for continuous results analysis that have proven to yield good results alongside accelerated continuous testing and DevOps.
At Spirent we think testing has a bright future in DevOps; you can read more about our views on the Spirent blog.
What do you think of these suggestions and do you have others that should be mentioned?