With DevOps comes implementation of CI/CD, automation and testing. Is this sufficient to achieve “shift left testing”? Or does testing need to shift further left? Do we need new or additional tests, such as acceptance test-driven development (ATDD) or behavior-driven development (BDD)?
In this episode of DevOps Unbound, we are joined by Christine Fisher of NAIC, Leandro Melendez (aka Señor Performo) of Qualitest, Ken Pugh of Ken Pugh Inc., and Mitch Ashley of Accelerated Strategies Group to explore the state of testing in DevOps and CI/CD pipelines, and where testing can evolve to keep pace with software development and delivery.
The video is immediately below, followed by the transcript of the conversation. Enjoy!
Alan Shimel: Hey, everyone. Thanks for joining us on another edition of DevOps Unbound. We have a really great DevOps Unbound topic to explore today, and that is that testing is not a monolith. A lot of people, especially people who aren’t familiar with testing, just say testing—QA, testing—and we think of, oh, it’s a test. And we know there’s different kinds of tests, but do we really know all the different kinds of tests that need to be done, as we’re developing and deploying and managing and monitoring software applications and so forth? I don’t think so.
I think, really, we could cater this show to those of you out there who do know, but I think today, we want to talk a little bit to those who don’t know as well and let’s try to get you educated on what some of the various aspects of testing are that you may not be familiar with. They’re important, nevertheless, and how they influence what you see and the apps you use and what we may see going forward.
I’ve got a great panel to introduce you to today. I’m gonna allow them each to introduce themselves. Let me first start off with Leandro Melendez—I can’t say it with that Spanish roll of the tongue that you can, Leandro, I apologize. And Leandro is known in the industry by another name—I don’t wanna steal your thunder. Leandro, introduce yourself.
Leandro Melendez: Thank you very much, and don’t worry, you made a great pronunciation of my name, way better than what I’m used to, so that was pretty good.
Melendez: Yeah, repeating, I’m Leandro Melendez, a performance test manager from Qualitest Group, and as well, I’m known on the internets as Señor Performo, as you all said, Performo with strong R’s.
And well, we provide several services—my specialty is performance testing, integrating it right now with full continuous integration, DevOps, and agile methodologies. But as well—and you mentioned it very well earlier—it starts to permeate into other areas of testing when you start to do it holistically and at a large scale, thinking about all of the steps in the SDLC. So, we will talk a little bit more about it.
The last thing about me, as I said, Señor Performo, on the internets, I have a YouTube channel, I try to help educate people on especially performance testing, but anything that concerns testing and methodologies, agile, DevOps, and all that, I’m very happy to help provide information as well as hear whoever gives me a chance to speak and share the knowledge, I’m very happy to.
Shimel: We’re honored to have you here, Leandro, thank you. Next, we’re gonna introduce Ken Pugh. Ken is—well, Ken, you introduce yourself. Go ahead, I’m sorry.
Ken Pugh: That’s okay. Hi, I’m Ken Pugh. I’ve been in the programming software development field for about two-fifths of a century, and in agile for about a fifth of a century, and doing everything from gathering requirements to final testing.
I’ve got a book out called “Lean-Agile Acceptance Test-Driven Development: Better Software Through Collaboration,” and I’ve been teaching, training, and emphasizing acceptance test-driven development-slash-behavior driven development for the past 15 years.
Shimel: Fantastic. And for our fraction-challenged friends in the audience, a fifth of a century is 20 years, so that’ll give you a little idea.
Next up, we have Christine Fisher. Christine, welcome, introduce yourself, please.
Christine Fisher: Thank you. My name is Christine Fisher, I manage a team of BAs, QAs, and UX at the National Association of Insurance Commissioners. My team and I have been on a journey with behavior-driven development over the last two and a half years, and as we get ready to talk about testing here, my big secret is, I’ve actually never been a tester, I just have managed some. But with our jump into DevOps, I felt very strongly about advocating for what our role should be and how the work should change to keep up with the other changes that were happening with DevOps, which helped lead us to BDD.
Shimel: Fantastic. Pre-COVID times, we’d say, “But I did stay in a Holiday Inn Express last night,” but we can’t even say that any more.
Last but not least is my co-host and anchor on our show, Mitchell Ashley. Mitchell, why don’t you—quick introduction?
Mitch Ashley: I don’t know what fractions I’ve worked, but I do know that I used that joke last night and my family had never heard it, so—so much for the Holiday Inn Express joke. [Laughter]
But, no, I’m an old software person, started in software, and have done lots of things around both cloud as well as, I’ve also been on the vendor side, so network security and SaaS services. So, testing people are some of the most unique and, in my world, beloved people, because those are the folks I really look to to say, “How are we doing? Where are we?” And I think even more so in the DevOps time, so I’m excited to explore this with you all. So, thanks for joining us.
Shimel: Fantastic. So, first of all, let’s assume that our whole audience aren’t test experts, they’re more DevOps generalists, there’s a good smattering of cybersecurity folks, cloud native, infrastructure, ops, and we’re throwing out some terms—ATDD, acceptance test-driven; BDD, behavior-driven—that they may not be familiar with.
So, why don’t we do a quick kinda definition? And we’re gonna ask the non-tester, if you will, Christine, if you wouldn’t mind leading off with what are these references we’ve made to these types of testing?
Fisher: Sure. So, when we’re talking about behavior-driven development, we’re talking about understanding the behavior of the end user and understanding what the workflows are, why they are trying to accomplish that task, and working on testing, understanding the requirements, testing those requirements in that way, not just writing a requirement, developing a requirement, testing a requirement without any true understanding of what the person who’s going to use the software on the end does.
And when I talk about not being a tester, more on the functional side with requirements as a BA, things like that, that was always the interesting part to me. And then sometimes, we would test something and it would be tested correctly, but we still missed the mark. And so, having the whole team understand what the end user wants to do, putting it in the common language, that’s really important for behavior-driven development.
Shimel: Agreed, agreed. Ken, do you want to add anything?
Pugh: Yeah. So, one of the big things in behavior-driven development or acceptance test driven development is having the triad, and Christine sort of made a reference to it. The customer, developer, and tester get together prior to implementation, create the scenarios, the scenarios become the tests, and the developers are given those tests to run against their code, and those tests are the agreed upon, shared understanding of how the system should work.
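In ATDD/BDD tooling such as Cucumber or behave, those triad-authored scenarios are written in Gherkin (Given/When/Then) and bound to executable step code. As a rough sketch of the idea, using an invented loyalty-discount rule that is not from this conversation:

```python
# A triad-authored scenario expressed Given/When/Then style.
# The domain (a loyalty discount) is a made-up illustration; real
# BDD tools (Cucumber, behave, SpecFlow) bind Gherkin text to step
# functions that look much like these.

def apply_discount(order_total: float, loyalty_years: int) -> float:
    """Hypothetical rule the triad agreed on: 10% off after 2+ years."""
    return order_total * 0.9 if loyalty_years >= 2 else order_total

def test_loyal_customer_gets_discount():
    # Given a customer with three years of loyalty
    loyalty_years = 3
    # When they place a $100.00 order
    total = apply_discount(100.00, loyalty_years)
    # Then they pay $90.00
    assert total == 90.00

def test_new_customer_pays_full_price():
    # Given a brand-new customer
    # When they place a $100.00 order
    # Then they pay the full $100.00
    assert apply_discount(100.00, 0) == 100.00
```

Because the scenario text and the test are the same artifact, the developer receives the agreed, shared understanding in a form they can run before writing a line of production code.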
Shimel: Got it. Señor Performo—is that correct, yeah? Sí?
Melendez: Very well said, yeah. [Laughter]
Shimel: Señor Performo, how does acceptance test-driven, behavior-driven development figure into your performance testing kind of regimen, if you will?
Melendez: Well, as I mentioned earlier, these types of concepts permeate all the testing practices, if I could say, and as it was very well said here, I like to call it the three amigos—you put them together and, even before you start rolling anything, drafting anything, you get the requirements together: what you are going to be testing and what has to be marked as passed before you—or a developer—can mark a task as completed. And I might have said it wrong, “a developer,” because in general, you’re supposed to be a team, and everyone is responsible for each element.
And this comes together with performance metrics. In general, when most people think about performance, their mind goes right away to load tests, where I have to put hundreds of users on to slam a system. But if you see it from the beginning, when you generate requirements like—I don’t know, I’ll give an example—that a text field has to allow only a certain number of characters, or be this color, rounded—little definitions like that—they should have performance metrics included inside them as well. So that, when you are generating any piece of software or functionality, it’s measured there and then.
I have to say it’s an issue that I found often in the old waterfall days. I would arrive to do a load test after six months of development, start to automate some things, and go—hey, everybody, this is visibly slow. I’m just a single guy. I don’t need to load test it. Why did no one catch this before?
And those are the most general problems. I think, as well, in functionality, some things come from even unit tests, low-level tiers, where you could catch many of those bugs and performance SLIs that are not met, and that’s how it integrates a lot with performance. I always preach that you need to start to think about it as early as possible. And as Christine mentioned, getting the people together—what are the goals, what is the user gonna require, and what are the tasks? Because not everything should respond super fast and not everything should or can have a fast response. I mean, we all wish that a—I don’t know, a monthly billing period could be closed in three seconds, but that’s a lot to ask for the response time from a click.
So, we need to filter that out depending on what the user is needing, willing to endure, to call it in a way, and put it together, and this permeates as well. It’s some sort of functionality—yeah, I need this to respond in this period of time for me to be comfortable using it, for it to be useful, functional.
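One way to make those per-item performance requirements executable from day one is to assert a response-time budget inside the same unit-level test that checks the functionality. In this sketch, the `find_policy` lookup and the 50 ms budget are both invented for illustration:

```python
import time

# Hypothetical function under test: a single-user lookup.
def find_policy(policies: dict, policy_id: str):
    return policies.get(policy_id)

def test_lookup_meets_functional_and_performance_requirement():
    policies = {f"P{i}": {"id": f"P{i}"} for i in range(10_000)}

    start = time.perf_counter()
    result = find_policy(policies, "P9999")
    elapsed = time.perf_counter() - start

    # Functional requirement: the right record comes back.
    assert result == {"id": "P9999"}
    # Performance requirement the three amigos agreed on up front:
    # a single-user lookup must finish well under 50 ms.
    assert elapsed < 0.050, f"lookup took {elapsed * 1000:.1f} ms"
```

A check like this is no substitute for a real load test, but it catches the “visibly slow for one user” problems at the unit tier, long before six months of development have piled up.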
Shimel: Comments, thoughts from the team?
Pugh: So, as you were mentioning that, I don’t know if you’re familiar with healthcare.gov.
Melendez: [Laughter] Oh, yeah.
Pugh: I think they should’ve had a few performance tests done well before they released it. It seems like—I don’t know, that it was like, maybe the day before, they went, “I guess we’re gonna have a lot of people on this.” [Laughter]
Melendez: Yeah, and to paint and explain what I was saying: with a single element that doesn’t perform well, that doesn’t work well, even from the initial tier, they should have created these requirements, saying, I don’t know, “When I log in and enroll, and healthcare.gov starts the process when I click this, it should respond this fast; it should not do, I don’t know, this many full table scans or connections to the database”—things where you can go way, way low level in the IT tier, and others are just like, “Yeah, the user should not wait more than five seconds on that click”—a single user, one person.
I am almost sure that most of the problems they had were that they didn’t start to tune that, and they allowed the developers to check in all that awesome code, to call it euphemistically. And when you start to scale up problematic things is when everything derails. And yeah, healthcare.gov, aside from not doing good load testing, had multiple issues at the single-piece level, and that’s where it is—push it left. Start to think at a unit level, single-item requirements. When you have the three amigos together talking about these things, take everything into account and don’t let your little Post-its be marked as done and moved to the next area of your board without covering some of those basic requirements, which most people don’t think about or include. Or—and this is a little BDD-y or TDD-y—whenever the developer is working on it, they should have in mind all those requirements: functional, performance, security.
And that’s where I say, all this permeates to all the testing areas and other metrics when other areas of testing are doing their tests—manual, functional, automated, even security—my goal is to start to gather performance metrics so that everything happens together and we don’t have to wait until the end for big load tests to figure out that healthcare.gov will have what happened, sadly. [Laughter]
Pugh: So actually [Cross talk]—no, so, actually, you were talking about something that I’ve, in fact, done a few blogs on, and it’s been around for a while: the testing matrix. Since we’re talking about testing and it’s not just about functionality, the matrix describes all the different sorts of tests, from the functionality tests on one side to the quality tests on the other, including performance, security, usability, and so forth. ATDD and BDD tend to concentrate on the functionality, because we’re gonna be talking with the customers—that’s what you want.
But you ought to have all those other tests—the performance tests, as you were mentioning—already established ahead of time: 100,000 users, how fast should it be, and so forth. Oh, and in fact, just the usability you were talking about—how long should it be after I click that I get to the next page? Those things could be established ahead of time, and if a developer runs up against one and isn’t meeting it, he knows to fix it. That’s, we don’t have to—
Melendez: And I would say here, not only established, but worked on from day one. Because as you mentioned, even some usability, security—those are to be checked when everything is assembled, and there are many techniques that we can start to embrace, starting out with automations at different tiers.
As I said, I’m a big proponent or supporter of—yes, have unit testing automated everywhere, have the coverage according to the pyramid. Don’t focus so much on manual, front end, or after the fact, leaving everything for the second Friday of the sprint, when everyone wants to leave and it’s, “Here, tester, there you go—start your testing.” That’s where everyone has to start to work together and push it left.
Since day one, start thinking about performance, security, functionals. And even some have told me, “Yeah, but there are things, as you said, like usability, that cannot be done until the end.” Well, you can start creating mocks, you can start figuring out how things should work and start working on them. And, again, the mindset of keeping testing phased, waterfall-style—we need to start to work everything on par and get it rolling, yeah, I totally agree.
Fisher: I’d like to speak to that point a little bit with DevOps and with the testing. We’ve been talking about a team and we’ve been talking about getting people involved, and what I was seeing in my organization when we first started talking about DevOps is, everybody is an engineer. We can all do these jobs, things like that, right? And jobs might change, there’s room for everybody, but what you’re doing might change a little bit. But that can be really scary to hear, and especially if you are a manual tester who really doesn’t have an inclination or desire to learn automation—and that’s okay. Like, that’s a different conversation, but I am always a proponent of manual testers.
So, one of the reasons that really pushed me toward that is hearing that conversation, seeing people on my team being told that their job might change, but we don’t know how, you’re an engineer, you can do what everybody else on this team does, but that’s not true. Not everybody has those skills.
And so, making sure that the team is involved, and when you can have these conversations, BDD, all of these things that we’re talking about now, right, are getting the testers involved in the conversation earlier, and that makes them part of the team. And we have taken our BAs, we’ve taken manual QA, we’ve taken our automation QA—we get them all involved in that conversation. We make sure the developers know where they are in the process, we make sure the developers are adding to the conversation and part of that process.
And it was really important to me in that DevOps environment to get out in front of that and help define that role when somebody is saying, “We don’t”—you know, again, your role might change, but we’re not quite sure how—I wanted to help my team define that and put us in a good position so we weren’t just sitting there to have, God forbid, a developer come down a few years later and tell us, “No, you do it like this.” We wanted to put ourselves into that position and having those conversations, I think, is such an important part of that and changes some of that dialogue a little bit about we can all do the job. We can’t, but we all have important jobs to do to make it work.
Ashley: You know, Christine, I think about just an analogy—robotics for surgery. I still want that doctor in there guiding what’s happening, right? We don’t need any bad things happening. [Laughter] Well, same with software. You know, some of the worst folks at testing software are developers, right? You’re just too close to it, even if you’re writing the automated tests.
To your point, and I think several of you have made this, is, it’s an opportunity for us to (a) collaborate much more closely together rather than testing to be the thing right before documentation at the very, very, very end of the release, right, which is what it used to be. But also, what do we automate? It doesn’t help to automate the wrong tests—let’s get the right things in there. And where is the user in, the customer in performance, in functionality, in speed?
By the way, one of the things that kind of baffled some development teams is when I would say, “Speed is quality.” They’re like, “What do you mean, speed is quality?” I’m like—well, if Netflix takes three seconds to start your video, okay, that’s good. If it’s pausing for three seconds every five minutes, that’s bad, right?
So, speed makes a difference in terms of the perception of the experience. So, it is really kind of a harmonious effort where we have to bring all of that together.
Melendez: And now that you mention it, I’m gonna play a little bit of devil’s advocate, both for and against manual testing at the same time. Some forums that I participate in here in Mexico talk about that a lot, like—hey, too much automation. Are we manual testers gonna be out of a job, will the machines replace us, and all those paranoid ways of speaking.
And I generally explain to them with analogies—I like, a lot, to use analogies. In New York City, before there were any automobiles, cars, there were lots of horses, and horses produce a lot of stuff that was dropped on the streets, and there was a job needed for that—to pick up after the horses. And I’m sorry if this is an analogy of manual to what the horse produces, but the point that I’m trying to make is that once cars became available, that job decreased considerably. But we still have places where there are people who work with the horses, maintain them, brush them, and do all those tasks which, as many cars as we may have, we won’t ever have a horse-polishing machine or anything like that. There are things that, truly, a human being is the only one who can do really well—manual testers are very good for those tasks.
I think, on the other hand, there are too many things that people try to do with manual testing. I have seen in many organizations that there are tasks and tests that could be pushed left, that could be automated programmatically, leaving whatever truly requires a human—where the manual testers are going to be the rock stars and the only ones who can do it—to them.
So, I think—yes, I would suggest that most manual testers start to pick up a little bit of technical and development skills, yadda yadda. I think in the modern days and the upcoming times, everyone should know some sort of development; that should be in schools. And we’re moving into a technological world at a pace where whoever doesn’t pick it up might be at a disadvantage. But, on the other hand, the skills of a good manual tester—when you have that clinical eye to say, “Hey, I see the error,” or you don’t even have to touch the application to say, “Most probably, it accepts values in that field that it shouldn’t.” And most of the time they are right, and you need that input.
So, on both sides, they won’t be replaced, but they should get used to using the tools as they are. Technology is not here to replace them but to empower them, and they should be able to use it, I think.
Fisher: Skills can be—excuse me. Skills can be enhanced, absolutely, and I agree with what you say. Nobody wants another meeting on their schedule, but on my team, we have added what we call a test strategy meeting and we get the BA and the manual and the automation QAs in the room and we talk about each of our stories, and should we automate it, what are we looking for manually? And they took that further than when we first started, and they came up with a scoring guide to say, you know, what is our level of effort with this, what’s the repeatability for the user?
Those types of things, and they’ve realized that those conversations are so important and you’re exactly right—the manual testers aren’t doing the same job, but now that we’re able to really give them the time to pull those skills out, the critical thinking and what you were mentioning, you know, looking at this field, we are getting so much more quality from our tests, from both sides, and it’s really been a fantastic change.
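A scoring guide like the one Christine’s team built can be as small as a weighted formula. The factors below (level of effort, repeatability, impact) echo the ones she names, but the weights, scale, and threshold are invented for illustration:

```python
def should_automate(level_of_effort: int, repeatability: int, impact: int) -> bool:
    """Each factor scored 1 (low) to 5 (high).

    High repeatability and impact push toward automating the test; a
    large automation effort pushes toward keeping it manual/exploratory.
    The weights and the threshold here are illustrative assumptions,
    not the NAIC team's actual guide.
    """
    score = 2 * repeatability + 2 * impact - level_of_effort
    return score >= 10

# A login smoke test: run constantly, high impact, cheap to automate.
assert should_automate(level_of_effort=1, repeatability=5, impact=5)

# A one-off migration check: rarely repeated, costly to script.
assert not should_automate(level_of_effort=5, repeatability=1, impact=3)
```

Even a crude formula like this makes the test strategy meeting concrete: the team argues about three numbers per story instead of debating “should we automate it?” in the abstract.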
Melendez: Yeah, that’s great—optimizing and, in a way, figuring out where you get the best bang for your buck, where to use those efforts, how to use them, prioritizing. And even some things like—hey, this is not a big impact if we get a defect here, we won’t even pay attention to it manually, whoever made it; or some others where a problem would be critical, and it has to be checked manually with two pairs of capable human eyes to identify it—or we can let the machine do it.
I mean, there’s a balancing act in testing, and I think we should avoid falling into the cognitive fallacy called the man with a hammer—someone learns how to use a hammer well and will see nails everywhere and try to hammer them. There are multiple other tools that we can use when it’s needed, when it’s more optimal, and I think it’s, as you say, prioritize, distribute, and make the best effort with each one of them.
Pugh: So, just getting into that, one of the things in the matrix, and one thing I always suggest, is exploratory testing. You take your manual testers and turn them into exploratory testers—finding those things that the functionality tests are not gonna cover.
I always give an example, for those who don’t know exploratory testing, of—you’re on the web, you’re ordering an item. You press the submit button and the screen comes up and says, “Do not hit the back button. Do not refresh this page.” And the exploratory tester goes, “Yeah, back, forward, back, forward, refresh, refresh—maybe I’ll get it for free. I don’t know!”
Pugh: There’s no way of telling. [Laughter]
Shimel: We live for those moments—we get it.
Ashley: Yeah, don’t make any buttons red, they’ll be pressed, for sure.
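The back-button/refresh probe Ken describes is, underneath, a check for idempotent submit handling. As an illustrative sketch (the `OrderService` and its idempotency-key scheme are invented here, not something from the panel), a replayed submit should return the same order rather than create a duplicate:

```python
import uuid

class OrderService:
    """Toy order service. The idempotency-key design is one common
    defense against the double-submit problem the exploratory tester
    is hunting for."""

    def __init__(self):
        self.orders = {}  # idempotency_key -> order record

    def submit(self, idempotency_key: str, item: str) -> str:
        # A replayed submit (back button, refresh, double click) reuses
        # the same key, so it returns the existing order instead of
        # creating a duplicate.
        if idempotency_key not in self.orders:
            self.orders[idempotency_key] = {"id": str(uuid.uuid4()), "item": item}
        return self.orders[idempotency_key]["id"]

service = OrderService()
key = str(uuid.uuid4())  # generated once when the order form renders

first = service.submit(key, "book")
replayed = service.submit(key, "book")  # "back, forward, refresh, refresh"

assert first == replayed          # same order, not a free duplicate
assert len(service.orders) == 1
```

The “do not hit the back button” warning is an admission that no such guard exists; the exploratory tester’s job is to find that out before the customer does.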
Shimel: Right, yeah. But let me kinda take our conversation in a little bit of a different direction here. You know, we’re 25, 30 minutes in. We haven’t spoken a lot about DevOps, which is interesting, right? Especially in light of our original conversation off camera.
You know, when we talk about behavior-driven testing and ATDD and performance testing, these testing disciplines all existed before DevOps, right? Ken, you’ve been doing them for a fifth of a century or more, right?
That certainly predates what we call DevOps. And I was here when I started DevOps.com back in 2013, ’14. There were a lot of testers who were running around like, you know, Henny Penny—“The sky is falling”—that they were gonna be out of a job, that they were gonna be obsolete overnight. And, of course, that hasn’t happened.
Christine, you said that DevOps was kind of a driving factor in why you took the baton here, why you took this position, why you feel it’s important. I’d like to bring our conversation back a little bit to DevOps. How has DevOps changed BDD, ATDD, right? Automation? Absolutely, right, but automation is not necessarily DevOps, right? We automated. We’ve mentioned shift left a little, you know, pushing it further—not pushing it further onto the developers’ shoulders, because, as, Ken, you said, we don’t want to meet the new boss, same as the old boss, right? We don’t need that.
But how can we be more efficient, better at testing? Is it just a matter of automation, or is there something else there? Christine, I’m gonna ask you first, because it’s something that I think is near and dear to your mission.
Fisher: Sure. And I think for me, really, it is the conversations, how long has QA sat outside of the development organization, right? I mean, we can bring them in if we talk about agile and scrum and things like that—even, again, before DevOps, we can have a tester embedded in that team, but we still have lots of organizations that don’t do that.
Realizing that quality, not testing, that quality is the responsibility of the entire team, and that everybody needs to contribute to that conversation, we talk about breaking down the silos and everything with DevOps and we’re all in this together, and that includes testing. And for me, that’s been the biggest part. We can talk about BDD or whatever we want to talk about, however we want to test something. The most important part to me is that the testers are at the table and that they’re part of that conversation.
Shimel: And I think that’s inherently part of the DevOps cultural kinda mantra is that everyone should have a seat at the table. Ken?
Pugh: So, one of the things, getting back to the DevOps, is the fact that every time you have a test failure after something has been actually developed and put into the pipeline, it means you’ve gotta loop back. It means that you are slowing down the flow of business value through that pipeline.
So, the reason we want to create all these tests first is so that we pass them, and when it goes into deployment or staging, wherever you put it, that you don’t get a test failure. You don’t have to triage everything. And so, that’s why creating all of the tests first that the developers can be running while they’re developing their code or in their development environment is so important.
Now, I’d just like to add one more little phrase that I always use or talk about: every requirement should have a test, right? Pretty straightforward. How else do you know whether the requirement is actually being met? But every test is also a requirement. If you cannot ship a system with a failing test, then that test is a requirement the system must meet.
One of the things that developers always complain about is not having enough requirements, or unclear requirements. The tests that you create prior to implementation—BDD, ATDD, or however you get ’em—are the requirements. And now the developers have the tests they can run against those requirements, make them pass, and then the flow through the pipeline doesn’t stop. You don’t have a blockage because of a failure later on.
Shimel: Fair. Comments on that?
Melendez: I think catching those errors, defining them, and being a bit more synergistic, to call it that in a way, is crucial. You very well mentioned that all the testing practices, areas, and disciplines that exist come from waterfall and even before.
But I think a shift was forced by DevOps and the agile methodology—continuous releases forced us to think in a different way. And I don’t know if it was the chicken or the egg, where we started to see that we needed to throw down these silos, as well as the communication channels and capabilities to observe what was happening. Because as a tester, 15 years ago, I remember, you would just receive something finished, you had no idea, and somehow you had some peculiarly written requirements that you had to write tests for and try to figure out, but everything was already packed up, backed up, ready to ship. And you were just trying to stop, like, a meteorite that was about to hit, and you were like—I don’t even have control. I detect something, I report it before a production release, and I have this bug that I detected. I’m sending it somewhere; probably the developers were even gone.
All those situations that we experienced before in any testing area—now that everything is happening continuously, fast, we have feedback. We don’t own only the development side, the testing side, the operations side—we own the whole loop, we’re involved, and hopefully we have communication with each one of those areas. And I would say not just communication—the lines that divided us are blurry now. We are just multi-hat, T-shaped people who can jump in: “Hey, now this is a development deal in Ops. I understand what is happening, we have logs. We have different release methodologies.”
I think DevOps and the openness that is happening—as well as, from my performance perspective, the tools I have now that allow me to see into the code, to monitor it, telemetry, all these fun things that we can work with—have allowed us to evolve and to integrate. As it was very well said, from clear requirements, clearly knowing what is needed: if we detect something wherever—not only before releasing to production, but now it’s in Ops, there’s a bug that was not caught—we catch it right away, we roll back, or we quickly release a fix. Those things were forced—or I don’t know if forced; again, I don’t know if it’s a chicken-and-egg situation, what came first. Now that we have more visibility and capabilities, we start to release more often and connect development and operations.
But all this synergy has to happen and is needed for us to truly embrace DevOps, have quality in it, and keep releasing often with quality.
Ashley: Alan, I have a question I’d like to pose to the team about DevOps. We don’t do DevOps just for DevOps’ sake, right, there’s usually some drivers behind it. And when you think about the conditions we’re in today—economic conditions, changing markets, rebuilding our supply chains, contactless services, there’s a great example—in days and a few weeks, businesses had to respond. Point being, one of the things businesses are looking for from their technology, from their software, is agility—the ability to respond quickly, experiment, go after opportunity, play defense, whatever it might be.
From a testing standpoint, how do you think that’s changed how you respond and how you do the work that you do, knowing that there are situations you have to react fast, and how has that changed how you work? Any thoughts on that?
Melendez: Well—sorry, go on. I have played too much today. [Laughter]
Pugh: Well, let me just give an example of a current situation. There is one place that I was consulting with—their manual testing effort took 5,000 hours to release a product. You’re not gonna be very agile if it takes you 5,000 hours, that’s 50 testers working over a month, and I don’t even know what they did, if they found a defect, whether they restarted again.
So, the whole concept in being agile is, you need, as we talked about, all the automation around everything, because if I’m gonna make a little change to something, I wanna make sure I haven’t broken anything else. So, I need the automation that matches exactly what the requirements are so that I can then run it through the pipeline. If I don’t have the automation, I have to pause for three days for manual testing—I’m not gonna get things out very quickly.
Fisher: And to build onto that point, with BDD you can see so clearly where you have a failure in your test. When I joined my current organization, they had just really started with automation, and there wasn’t a lot of direction there—which is fine, that is not any kind of criticism. But those tests got so complicated that only the people who were writing the tests knew what was happening in them and where those failures were, and troubleshooting a failure could take just as long as troubleshooting the code.
So, taking something like BDD, we build on that agility. We wanna make it open to the team: the manual tester can find out where that failure was, a developer can find out where that failure was, and then we can very easily test the test to find out if it was a problem with the test or a problem in the code, and then send it to the appropriate place to get it addressed. I think that piece has been a huge improvement for what’s been happening.
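[Editor’s note: To illustrate the point Fisher makes about BDD-style tests pinpointing failures, here is a minimal, framework-free Python sketch of the Given/When/Then idea. In practice teams would use a BDD tool such as Cucumber or behave; the `Account` class, step names, and `run_scenario` helper below are purely hypothetical, chosen to show how a failure report names a plain-language step rather than an opaque assertion deep in test code.]

```python
class Account:
    """A toy domain object for the scenario below (illustrative only)."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def run_scenario(name, steps):
    """Run (description, callable) steps in order; report the first failing step
    by its plain-language description, so anyone on the team can read the result."""
    for description, step in steps:
        try:
            step()
        except Exception:
            return f"{name}: FAILED at step: {description}"
    return f"{name}: passed"


account = Account()

def given_account_with_100():
    account.balance = 100

def when_30_is_withdrawn():
    account.withdraw(30)

def then_balance_is_70():
    assert account.balance == 70

result = run_scenario("Withdrawing from an account", [
    ("Given an account with a balance of 100", given_account_with_100),
    ("When 30 is withdrawn", when_30_is_withdrawn),
    ("Then the balance is 70", then_balance_is_70),
])
print(result)  # a passing run prints "Withdrawing from an account: passed"
```

[If the withdrawal logic regressed, the report would read “FAILED at step: Then the balance is 70”, which a manual tester or developer can act on without digging into the test internals.]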
Melendez: And I want to add, as well, from what you mentioned, what’s changed from the old waterfall days to how we are working now with the new trends and everything. I think the first problem that I personally experienced, and I fought against this agile stuff when it started to happen, was, “Why don’t you allow me to still do automated, big-ass load tests before every release? How am I going to be able to keep up with this if I have constant releases every two weeks?”
And here, I think the biggest thing is a change of mindset, because many of us—and I say this about myself—were indoctrinated in the waterfall days and silo-based release cycles. And I cannot even call them cycles; it was just the big bang release, and that was it and you’re out. You used to think that way: you had to do big cycles of automation that took hundreds and hundreds of hours and lots of people to execute. And now you need to think about how you can tackle this now that it’s supposed to happen so often.
But a big change is that it’s not so massive anymore. We are not doing a whole full release that needs multiple regressions here and there and everywhere to make sure. Now we can just say, “Hey, this sprint, I’m just gonna release this tiny little feature that we can quickly test. If something breaks, something happens, we are sure that our team, ourselves included, can switch direction, detect where the issue is, and implement the fix. And probably by tomorrow, if we are super efficient, super quick, and doing things differently, we can recover from it.”
As you mentioned earlier, when there are tons of tests that you need to execute and you detect an error, it’s there. There’s a bug, but we want to release every week, every two sprints. Sometimes even the modern mindset can be, “Well, let it go to production. It’s gonna pester us for a day or two, but we know we can quickly apply a fix and release it. Let it pass. It will cause a tiny bit of trouble, and you can even change course, change direction if you need to—like, hey, that’s a feature we don’t need, we can take it off.”
So, all that mindset around how you approach testing has to change. It’s very different, and I work with lots of customers where I jump in and I’m like, “Why are you still doing these things that are a decade, two decades old, when all this DevOps, continuous, agile stuff is already happening? You’re not even trying to do agile; you have this big bang release, huge pieces of functionality that you want to push, and that’s not gonna help you. You need to be paying attention constantly and make sure that you can recover. Your testing needs to be aimed at enabling those tiny functionality tests everywhere, relevant to what you are doing.” It’s a complete switch of mind from what testing was before to what it is today.
Shimel: Nice. Go ahead, Mitchell, I’m sorry.
Ashley: Just that idea of being able to release software quickly, right, instead of every 6 or 12 months—that’s a mind-boggling shift, right there. Sorry, Alan, you were talking—
Shimel: No, no, I’m actually saying I think we’re about out of time. I was gonna ask us to wrap up—I was gonna wrap up, but you know what, Mitchell, I gave you the last word, that’s great.
Hey, guys, as I said earlier on, the time does go really quick. We’re out of time for this episode. I’d love to have you all back, though, and maybe we can continue this conversation. You know, I find it fascinating as someone who doesn’t come from the testing world, right, that testing in its infinite variety has been so profoundly impacted by DevOps and agile and these technologies. But rather than being left behind, it’s been accelerated, right? Through automation and speed and, quite frankly, more focus, in many ways, DevOps has been one of the best things that’s ever happened to testing, right?
And that’s the thought I want to leave you with out there on DevOps Unbound. This is Alan Shimel, we’ll see you on our next show. We have a great roundtable coming up, by the way. Check DevOpsUnbound.com for all the latest. Thanks, everyone. Thank you to our panel. Mitchell, as always, thank you. Have a great day.
Fisher: Thank you.