It’s an accepted norm in software engineering that quality assurance (QA) engineers and developers will be perpetually at odds. We are so used to this dynamic that some organizations have decided it’s a good thing. They believe some animosity is beneficial, because QA has to hold developers accountable for good quality, and developers need to pressure QA to not impede time to market with too many tests.
I disagree. I think infighting between development and QA is a sign of a larger, more endemic problem: They can’t align on who is accountable for quality at what stage, and more importantly, what is mission critical.
Developer Versus QA Incentives
Developers are incentivized to ship as much code as possible, as fast as possible. This is because the KPIs they care about most are temporal in nature (time-to-market, feature velocity, sprint velocity, etc.).
However, when it comes to QA engineers, their incentives are somewhat counterintuitive. You might think the most important goal for QA engineers would be to ensure quality, which is at odds with time-to-market and velocity, because more quality assurance consumes more time and resources. QA engineers will tell you (because they believe it) that this is their incentive and the source of the conflict. What nobody is able to admit is that a QA team’s incentive is actually not to ensure quality but to ensure they are not blamed. They get rewarded by having fewer bugs or other issues that can be blamed on them.
The reason minimizing blame is the number-one priority for QA engineers is that in the QA realm, there is a general acceptance that bugs are always going to make it to production, no matter what. This is something we expect because a 100% guaranteed bug-free product would take years to ship rather than weeks, and would therefore be economically unviable. Since they know there will be problems to deal with no matter what they do, they want to show that they did everything in their power to prevent those problems. Naturally, they want to write as many tests as possible to minimize the risk of bugs that they should have caught. But since it’s impossible to write an infinite number of tests, they have to prioritize what to test for.
A QA team is given no data by which to prioritize what to test, so this prioritization is essentially a guessing game. It may be an educated guessing game based on experience and expertise, but it’s still predicting what users are most likely to do on an application without objective data as to what they really care about and how they really will use the application. QA teams may add a good amount of structure to their brainstorming and guessing so that they feel like it’s a rigorous process, but without hard data, it never will be.
The fact that there is not an objective, data-driven method by which testing priorities are set means two things. First, it means potential quibbling between QA engineers and developers over what should and shouldn’t have an end-to-end (E2E) test built around it. Second, there are likely to be disagreements over test suite size and what that will do to run time. Since the QA engineers’ incentive is to not get blamed when things go wrong, they naturally want to write as many tests as possible. The more tests you write, the higher the chances of bugs being detected by one of them. But as we discussed, developers are incentivized to maximize speed, and want the product out the door as fast as possible. The more tests that have to be run, the longer it takes the product to get to market.
Without data, your QA strategy devolves into an argument between developers and QA engineers. Different organizations resolve this argument in different ways. In my experience, I see some teams that don’t have any E2E tests at all. These organizations have much more of a speed-centric, DevOps-first mindset. At the other end of the spectrum, I see companies that have over a thousand E2E tests, and it takes two days to run a regression cycle. The former, where developers have won the argument, accepts a substantial amount of risk when they deploy. The latter, where QA engineers have triumphed, is unwilling to sacrifice the feeling of security that comes with thousands of tests to achieve fast deployments.
Testing Is Too Subjective
Too often, organizations burn out and break down due to constant infighting between silos, sometimes between QA and development. This is a common management problem that consultants (including myself) often work on. Yet this battle between QA and developers continues to play out over and over again in software organizations, with seemingly no end in sight. But I don’t believe it’s destined to go on forever. I envision a future in which QA and development live in harmony.
So, how do we end the eternal battle between development and QA? The way to resolve this conflict is to realign both sides’ incentives and departmental goals. And the way to do that is by basing decisions on objectivity instead of opinion.
When arguments over different opinions on testing strategy arise, those opinions are subjective; they are informed by individuals’ own personal incentives and perspectives. An objective way of defining a target or scope eliminates the space for argument. Obviously, two departments working together toward a shared target are going to operate better than two departments that have competing incentives. This alignment requires an objective standard by which we decide what tests are important—what we should be testing and what we shouldn’t.
A lot of people reading this might be thinking, “Well, we have an objective standard. We have a process that determines our testing priorities.” But let me ask you this: If your QA engineers were swapped out with a group of equally smart and capable professionals, would they come up with the exact same testing strategy and set of tests to prioritize in the testing suite? The answer, of course, is no. It wouldn’t be exactly the same. And if it’s possible for a different person to look at the same application and come to a different conclusion, then the process is by definition not objective.
Bringing Objectivity to Testing
When prioritizing what should be tested, you need to know what will be most important to your user. So instead of having people guess what’s important to users, why not simply let your users tell you?
Machine learning has unlocked the ability to bring objectivity to testing. By analyzing real user behavior on your app, you can prioritize and orient your test suite to what your users actually care about. You will objectively know what to test based on actual user behavior, you’ll catch more bugs that actually matter to your users, and you’ll catch them much sooner, with fewer tests than if you’d come up with them yourself. The quality of your applications will greatly improve without having to create unnecessary tests that clog up JIRA boards and deployment pipelines. Both sides win.
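In spirit, prioritizing by real usage can be as simple as counting how often users actually perform each journey through the app and spending the test budget on the most common ones. Here is a minimal sketch of that idea; the session data, screen names, and `prioritize_flows` function are all invented for illustration, not any real product’s API:

```python
from collections import Counter

def prioritize_flows(sessions, budget):
    """Rank user flows by how often real users perform them,
    then keep only as many as the E2E test budget allows."""
    counts = Counter(tuple(s) for s in sessions)
    ranked = [flow for flow, _ in counts.most_common()]
    return ranked[:budget]

# Hypothetical session logs: each entry is the sequence of
# screens one user visited in one session.
sessions = [
    ["home", "search", "product", "checkout"],
    ["home", "search", "product", "checkout"],
    ["home", "account", "settings"],
    ["home", "search", "product"],
    ["home", "search", "product", "checkout"],
]

# With a budget of 2 E2E tests, cover the two most common real journeys.
print(prioritize_flows(sessions, budget=2))
```

Real behavioral-analytics pipelines cluster and weight flows far more cleverly than a raw counter, but the principle is the same: the users, not a brainstorming session, decide what gets a test.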
A New Day for Feature Testing
This also enables a seismic shift in feature testing. With new features, we don’t yet understand what’s important to users because we don’t know how they’re going to use them. So how do we test for it? In the near future, machine learning and autonomous testing technologies are going to be able to solve this problem simply by leveraging experience.
We label many things “new,” but truly new things rarely enter the world. This is especially true of user interfaces (UIs). UIs tend to resemble other UIs: they need to be intuitive to brand-new users and familiar enough to existing customers that no relearning is required. Machine learning and autonomous testing technologies can easily compare new features to feature sets that have been live in other applications before. When the datasets are similar, such as with UIs, login pages, checkout pages, etc., you can reasonably predict from past experience how people are going to use them, and leverage that data to create new tests. The new tests won’t be perfect, but they’ll be a lot more accurate than QA engineers taking their best guesses.
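To make the comparison idea concrete, here is a toy sketch of matching a new feature against a catalog of known feature shapes and borrowing the tests of the closest match. Everything here — the element names, the catalog, and the Jaccard-similarity matching — is a simplified assumption for illustration, not how any actual autonomous-testing product works:

```python
def jaccard(a, b):
    """Overlap between two sets of UI elements, from 0.0 to 1.0."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical catalog: UI element sets of features seen in other
# apps, each paired with the test flows that mattered for them.
known_features = {
    "login": ({"email_field", "password_field", "submit_button"},
              ["happy_path_login", "wrong_password"]),
    "checkout": ({"cart_summary", "card_field", "pay_button"},
                 ["purchase", "declined_card"]),
}

def suggest_tests(new_elements):
    """Borrow the test flows of the most similar known feature."""
    best = max(known_features,
               key=lambda name: jaccard(new_elements, known_features[name][0]))
    return known_features[best][1]

# A brand-new signup form shares most of its elements with a login form,
# so it inherits login-style tests as a starting point.
print(suggest_tests({"email_field", "password_field",
                     "confirm_field", "submit_button"}))
```

A production system would compare richer signals than element names — layout, interaction sequences, aggregated usage from thousands of similar apps — but the mechanism is the same: seed tests for the genuinely new from the nearly identical old.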
One Team, One Mission
You can imagine this future where your users are constantly informing a machine as to what should be tested in regression, and the amassed experience of thousands of other applications that have similar feature sets is available for analysis. This mass collective experience, which humans couldn’t possibly create, is going to be used to decide what your future test suites should look like. The battle between developers and QA engineers will be over, because the testing strategy will be aligned and there won’t be any argument over what should and shouldn’t be tested.
Now, you’ve got a team that has an objective, data-driven balance of speed and testing, with much greater confidence that development can move forward quickly while ensuring quality. Both sides can coexist peacefully and finally work as one team with one mission.