“Come to think of it, no one speaks of automated programming or manual programming. There is programming, and there is lots of other stuff done by tools. Once a tool is created to do that stuff, it is never called programming again.”
– James Bach and Michael Bolton, “Testing and Checking Refined”
That’s one of those points that tends to silence a room; there’s really no way to argue against it. But since similar frustrations came up time and time again at the 2015 STAREAST conference, the point clearly deserves some further discussion.
The issue lies in the surprisingly stark differences between the phrases “automated testing” and “test (or testing) automation.” And if you don’t know the difference:
- You’re not alone.
- You’re likely not a software tester.
- You’re discrediting and undervaluing the testing craft.
The good news is that learning the difference is easy—and testers will forgive you for not knowing it until now! They’re very nice people.
Here’s what today’s brightest and hardest-working testers want you to know: “Test automation applies tools to testing.” Tools do not automate testing, and they certainly won’t ever replace it. There’s simply no such thing as automated testing.
And I’m part of the problem. I wrote a column just last year titled “Automated testing’s ROI is right under your nose.” I’m somewhat embarrassed that I told the world something that doesn’t even exist provides an immediate ROI, though at least no one went out and bought a completely fictitious tool on my suggestion.
Judging by the boisterous cheers as numerous STAREAST speakers pointed out the absurdity of believing that testing can be automated, I predict testers will soon make a concerted effort to erase this belief.
As companies focus more on collaborative, cross-functional, and quality-minded efforts like agile, test-driven development, and DevOps, testers have an incredible opportunity to clear up any misconceptions about their immense importance to their business.
Respected tester, author, and public speaker Dorothy Graham recently pointed out a number of these misconceptions in her session, “Blunders in Test Automation,” and the very first one on the list was “thinking that testing tools actually do testing.” Sounding almost exasperated at having to point this out, she chuckled, threw her hands up, and said, “They just run stuff, whatever they’ve been programmed to execute, including bad tests.” She almost got a standing ovation.
When you get that kind of reaction from a room of a thousand software testers from around the world, we’re beyond semantics and into a real problem. This is a talented, valuable, and understandably fed-up group of people being unfairly measured, on cost and productivity, against software that provides a completely different service.
So how did it get this bad? Entire conferences are dotted with calls to action for testers to clearly differentiate themselves from tools, even though the differentiation is so easy to see.
One group you may be surprised I’m largely letting off the hook is the tool manufacturers and their marketers. Sure, there are companies that make some claims that testers may wince at, but it’s really hard to find anyone claiming their tool beats, is smarter than, or should replace any human. There’s no reason for them to make these claims; they’re not true, and they certainly won’t help sell the product.
What about journalists and writers like me who cover the software development and testing industries? After witnessing testers’ frustration at feeling replaced by non-human non-equivalents, you make a mental note to scratch that phrase from your style guide. I hope other writers do the same.
How about product owners and others closer to the business side of an organization? Are they contributing to an under-appreciation of the human aspect of software testing? That’s…a little tricky.
Scott Barber has spent more than fifteen years as a performance tester, but recently took on (and left) the role of a product owner. While I’m more than familiar with, and a real fan of, Scott’s “the truth hurts” speaking style, when he revealed some truths he had recently learned, the room grew incredibly uneasy. Many got their first introduction to Scott when he opened his session with:
“It’s not all about testing, you know. Testing is not a product. You are not a product…. Products are the source of profit. And as a product owner, I paid very little attention to testers and testing.”
It may sting, but it’s the truth. Scott went on to point out that “there’s no need for certain job titles to know or care how things are built or tested. Nobody will ever ask them that.” However, by testers working to gain a much better understanding of the product and the risks that each release candidate poses, there is a massive opportunity to make the value of their work blindingly obvious and positively revenue-impactful.
Scott’s pro tip for testers, to ask a product owner, “Can I do some work with dev and prod so we can get some early observation going on to catch some of these bugs early and head off some of the damage that could come?” drew both a hushed “Wowwww” and a “Yeah, I’m using that” from behind me.
This is where the differentiation between testers and their tools should become clear to everyone. Human empathy, ingenuity, and proactivity cannot be scripted, automated, or baked into any tool, and the best testers care enough about their product and its customers to prove it.
About the Author: Noel Wurst
Noel Wurst is the managing editor at Skytap, where he is responsible for setting and executing Skytap’s content marketing strategy. That includes overseeing the creation and distribution of high-quality, targeted content, and managing the team and external resources associated with those efforts. By leveraging deep online marketing, project management, and editorial experience, Noel plays a crucial role in helping generate and nurture leads with compelling content.