As continuous delivery and DevOps speed the cadence and volume of development, quality assurance professionals and the groups they work in must find ways to transform their roles. For businesses to truly get value out of the applications they’re delivering through DevOps automation and iteration, the process has to produce not just speed but also quality. If it doesn’t, the costs to the business may be higher than people think.
“Software is transforming from a business enabler to a business differentiator,” says Wayne Ariola, chief strategy officer for Parasoft. “The costs and risks associated with failure have shifted dramatically over the years.”
Ariola recently presented cost figures at the STAREAST conference to show how dramatic the costs of quality failures can be. Over the past few years he’s made it a pet project to follow headline news of notable software failures at public companies and track them against stock prices for a quick back-of-the-napkin equity analysis. According to his analysis, the typical net decline in a company’s value associated with the announcement of a failure is $2.3 billion. These events range from American Airlines’ announcement that a glitch in the iPad software its pilots use in the cockpit grounded a number of flights this spring, to Chrysler’s admission that software incompatibilities between the electronic control unit and the battery pack control module in Fiat electric cars could suddenly shut the cars down. Ariola says that in 2015, the average net stock decline for publicly traded companies that made headline news for software failures was 4.18 percent.
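The back-of-the-napkin arithmetic behind those figures is simple to reproduce. The sketch below uses a hypothetical market cap, chosen only for illustration, to show how a percentage decline translates into a dollar figure; only the 4.18 percent average comes from Ariola’s numbers:

```python
# Back-of-the-napkin equity analysis: translate a post-failure stock
# decline into lost market value. The market cap is a made-up example;
# only the 4.18 percent average comes from Ariola's figures.
market_cap_before = 55e9   # hypothetical pre-announcement market cap, in dollars
avg_decline = 0.0418       # 2015 average net decline Ariola cites

value_lost = market_cap_before * avg_decline
print(f"Value lost: ${value_lost / 1e9:.2f} billion")  # -> ~$2.30 billion
```

At roughly that scale of company, a 4.18 percent decline lands close to the $2.3 billion typical loss he cites.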
“When you’re about to release something, and it’s about to go out the door, do you hang out with the developers and say, ‘Hey, if this fails, our stock price is going to decline by 4.18 percent’?” he says. “These are conversations we’ve got to start having [as QA professionals].”
Market valuation is only one indicator of the costs. There’s also the harder-to-track issue of customer churn, which will only grow as the software-driven world evolves and newer generations’ expectations of software quality rise. Whereas Boomer and Gen X consumers simply got used to things like the blue screen of death and troubleshooting wonky software, Millennials are far less desensitized to quality issues. What’s more, they have more alternatives than ever, so when problems come up, they’re more likely to ‘nuke’ a product or service and choose a competitor, say by switching from one app to another on their phone, rather than bother with the failed software.
“Their tolerance for faulty software is much, much lower than ours is, and it’s going to change the way people consume software,” Ariola says. “We need to be aware of that as a team of testers, because we need to drive the agenda much differently than we are today.”
One key element of that is recasting testing from a time-boxed event into a set of practices that “infiltrate” development and deployment processes and “escalate the process maturity.”
“You’ve got to go from manual to continuous tests. You’ve got to get beyond this idea of just pure automation, and strategically look at this application so you understand the risks associated with it today, and the concept’s going to change,” Ariola says. “So with this speed comes a totally different paradigm.”
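What continuous tests might look like in practice is a check that runs on every commit rather than at the end of a cycle. The sketch below is one illustrative way to wire such a step into a pipeline; the tools, paths, and ordering are assumptions, not anything Ariola prescribed:

```python
# Illustrative continuous-test step: run the checks on every commit and
# fail the pipeline at the first problem, instead of saving testing for
# a time-boxed phase at the end. Tools and paths are assumptions.
import subprocess
import sys

def run_checks() -> int:
    """Run each check in order; stop at the first failure."""
    checks = [
        ["pytest", "--maxfail=1", "tests/"],  # functional regression suite
        ["python", "-m", "mypy", "src/"],     # static analysis as an extra gate
    ]
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_checks())
```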
In the past, the question that gated releases was whether testing was done or not.
“I’m going to propose that the real question is does the release candidate have an acceptable level of risk?” he says. “I have to understand the business expectations associated with the risk of that application, and if it’s in an acceptable boundary, we’re going for it, we’re releasing it, and hopefully we can roll back really fast if we need to.”
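To make that concrete, a release gate built on this idea might aggregate a few quality signals into a risk score and compare it against a boundary the business has agreed to. The following is a minimal sketch; the signal names, weights, and threshold are all hypothetical, chosen only to show the shape of such a gate:

```python
# Hypothetical release gate: aggregate quality signals into a weighted
# risk score and compare it against a business-defined threshold. Every
# signal name, weight, and the 0.25 threshold is an illustrative guess.
RISK_THRESHOLD = 0.25  # acceptable risk boundary agreed on with the business

def release_risk(signals: dict, weights: dict) -> float:
    """Weighted average of normalized risk signals, each in 0.0-1.0."""
    total = sum(weights.values())
    return sum(signals[name] * w for name, w in weights.items()) / total

signals = {
    "failed_test_ratio": 0.02,      # share of regression tests failing
    "untested_change_ratio": 0.30,  # share of changed code without coverage
    "open_defect_severity": 0.10,   # normalized severity of known open defects
}
weights = {"failed_test_ratio": 0.5, "untested_change_ratio": 0.3, "open_defect_severity": 0.2}

score = release_risk(signals, weights)
if score <= RISK_THRESHOLD:
    print(f"Risk {score:.2f} is acceptable: release, and keep rollback ready")
else:
    print(f"Risk {score:.2f} exceeds the boundary: hold the candidate")
```

The point is not any particular formula but that the gate asks whether the risk is acceptable, not whether a test phase has finished.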