Mark, Torsten and Alan discuss the findings of the “Disrupting the Economics of Software Testing Through AI” report. The report identifies critical factors that hinder software engineering and DevOps teams, including the escalating cost of quality control and the growing complexity that comes with increasing release velocity and the proliferation of smart devices, operating systems, and programming languages. The video and a transcript of the conversation are below.
Announcer: This is Digital Anarchist.
Interviewer: Hey, everyone. Welcome to another segment here on Techstrong TV. I am really happy to introduce our guests for this next segment. First of all, he’s a returning guest with us, from our friends at Applitools: it’s Mark Lambert. Hey, Mark. How are you, man?
Interviewee 1: Fantastic. Thank you, Alan. Great for you to have me on. Yeah, great to talk to you.
Interviewer: Absolutely. And joining Mark and me today is Torsten Volk. Torsten is – I just drew a blank – EMA, right? Enterprise Management Associates?
Interviewee 2: Yes, exactly.
Interviewer: Yep. And we were talking off camera. Torsten is actually a friend of our friend Andi Mann, who was his predecessor at EMA 11 years ago.
Interviewee 2: Yeah.
Interviewer: And, Torsten, welcome to Techstrong TV. It’s great to have you on.
Interviewee 2: Thank you very much. It’s awesome to be here.
Interviewer: Thank you. So, guys, look, Applitools is a leader in visual testing. Beyond Mark, we’ve had many Applitools people on over the almost two years now of doing Techstrong TV, so I think our audience is familiar. But just in case people aren’t, Mark, how about a quick Applitools backgrounder?
Interviewee 1: Sure, yeah. So Applitools is known for inventing Visual AI, which helps software development teams accelerate their releases by mimicking the human eye and brain – how we as humans analyze computer screens – and highlighting differences and issues within your UI. So you can not only do visual testing but also fundamentally change and modernize your functional test automation. It’s really the glue that plugs right into your existing practices – Selenium, Cypress; we support over 50 SDKs. There’s a free account as well, so you can get in there and get going, kind of test the waters, so to speak. What we’re seeing is that it’s helping leading brands accelerate their digital experiences, and it ties directly into the research Torsten has been doing around AI and how it can help organizations disrupt the economics of software testing.
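For readers who want a concrete picture of what “plugging into your existing practices” can look like, here is a minimal sketch of adding a visual checkpoint to an ordinary Selenium test with the JavaScript Eyes SDK. The package, method names and the example URL are illustrative and may differ between SDK versions; treat this as a rough outline under those assumptions rather than official Applitools documentation.

```ts
import { Builder } from "selenium-webdriver";
import { Eyes, Target } from "@applitools/eyes-selenium";

(async () => {
  // Ordinary Selenium setup: nothing here is Applitools-specific.
  const driver = await new Builder().forBrowser("chrome").build();

  // The Eyes object wraps the visual checks; the API key comes from a free account.
  const eyes = new Eyes();
  eyes.setApiKey(process.env.APPLITOOLS_API_KEY ?? "<your-api-key>");

  try {
    // Open a test: app name, test name, and the viewport size to render at.
    await eyes.open(driver, "Demo App", "Home page renders correctly", {
      width: 1024,
      height: 768,
    });

    await driver.get("https://example.com"); // illustrative URL
    await eyes.check("Home page", Target.window().fully()); // full-page visual checkpoint

    await eyes.close(); // reports a difference if the page no longer matches the baseline
  } finally {
    await eyes.abort(); // clean up if the test bailed out before close()
    await driver.quit();
  }
})();
```

The point of the sketch is that the existing test framework stays in place; the visual check is an extra call layered onto a test you already have.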
Interviewer: Love it. And, Torsten, look, I’ve known EMA from before 2010 even, but a lot of our audience are technical people who may not necessarily follow the analyst beat. Why don’t you give them a little EMA background?
Interviewee 2: My boss always describes it as the industry analyst firm for enterprise-grade solutions, and in the beginning – I’ve been here for a while – I was always wondering a little bit, you know, what does that really mean? What it means at the end of the day is that we have all of these emerging technologies coming up, especially in the areas of artificial intelligence, cloud-native applications and intelligent automation, and what we look at is how to leverage those capabilities within an enterprise setting. So once the dust has settled and we’ve all gorged ourselves, you know, on the leading edge of all of those tools, the question is how do I expand the footprint of those tools across the enterprise so that the enterprise as a whole is more competitive in the end. So basically, projecting all of those cool new products and technologies into the constraints of the enterprise – that is really what EMA is all about. And my specific practice, as you can see in the background here in one of my research graphics, is focused on machine learning, deep learning and reinforcement learning, and how those capabilities can enhance the enterprise.
Interviewer: I get it. Great use of the background, Torsten. Thank you. All right. So, Mark, we’ve got all that out of the way. Let’s jump into what we really want to talk on today, and I’m gonna let you kick it off if it’s okay.
Interviewee 1: Yeah, I’d love to. Thank you. So yeah, there’s a recent report from EMA, and Torsten is the author – I’ll throw the ball to him very quickly, obviously – really talking about how artificial intelligence and machine learning are revolutionizing the software testing space. It’s really important right now because there’s so much buzz around AI and machine learning and where it comes in. You want to understand: where does the rubber meet the road? What is impactful today? What’s impactful for the future? And what are the drivers that are getting us there? So, Torsten, I’ll turn it over to you – maybe talk about some of the capabilities you cover in the report and the challenges you’ve seen come into play.
Interviewee 2: We started that report by looking at the key pain points, like we start almost all of our reports. Right? What is not sustainable anymore in a specific discipline – in this case it’s test engineering. And we have pressure factors. One of them is that the business needs us to release software a lot more often, and in a really granular manner, where somebody asks for a feature capability and the software development team is expected to release that thing very quickly, ideally in an ongoing manner. That is one part, and the second part is that the end users have become much more diverse than they were even in the recent past. We have multiple charts where we show the escalation of the number of devices with different screens, different processors, different network connections, different interfaces for your fingers and for voice control. All of those things make testing much, much more difficult, and one of my favorite parts is always the little war stories that you get told. Right?
So in the beginning, when we look at all the material that we have, we talk to a lot of developers and test engineers and DevOps guys, and their war stories always started with, “Oh, yeah, regression testing, Torsten. That is really not doable anymore. This is kind of a dreamland that you are living in as an analyst if you think that we can regression test all of those releases. It’s just too much, and it would take forever to be able to do that.” So if we can’t regression test, we are running a significant risk, Mark, that something goes awry. Right? We are always sure that in theory things should work, but in real life there are combinations of technologies, of code functions, that then don’t work. And that’s where we start with artificial intelligence and this whole pair of eyes, Mark, that you are describing, right, where you say, “Oh, yeah, my test engineers don’t have to be that pair of eyes that checks through dozens of boxes and just says yes, yes, yes, yes, yes everywhere, and then also overlooks some things.”
But that pair of eyes is something more. It’s something that has a certain degree of – when I say understanding, I think I’m gonna embarrass myself, but it has a certain degree of ability to comprehend – maybe that’s the same word – but to pick up things of a similar character and to categorize and cluster things in a manner that makes sense for human beings to then review, Mark. Right? And that is –
Interviewee 1: Yeah.
Interviewee 2: – where a tool like Applitools is very interesting, and also unique in the marketplace, right, where it’s a pair of eyes that you guys are building.
Interviewee 1: Yeah, and I think that speaks to – I mean, your research touches on several different areas where AI comes in, and visual inspection, which is obviously where Applitools fits, is really the most impactful of those techniques today because it does so many different things. First of all, obviously, there’s the streamlining of the manual review effort. Right? I’ve got this explosion of devices – how do I check them all? Well, because of the high level of accuracy of Applitools Visual AI – it’s been trained on over a billion images and has four nines of accuracy – as a human I don’t have to review tens or hundreds of screenshots. I’m just focused on a very small subset. But that’s just the first part of it, because then you layer on top of that this explosion of release velocity, increasing device combinations, as well as just the complexity of the application and so many things out of your control. You need help reviewing the volumes of data and really saying, okay, all of these things look kind of the same.
And then bringing them together. I talk about basically sorting your visual review into different stacks of paper, so for example I might have 76 screens that have a visual defect, but I can break those down into two categories of 40 and 36. Those types of techniques go beyond the basic pixel-diffing approach of a traditional visual testing technology, and really the ability to categorize that you talk about, Torsten, is the key foundation that all of these things get built on top of. So it’s a really powerful technique, and as I said earlier it plugs right into what exists, so you don’t have to rip and replace. That’s one of the issues with a lot of AI technology today: okay, it’s great, but you have to build it all new. The thing that is really powerful with Applitools is we plug right into your existing frameworks, open source as well as third-party commercial tools, and then we elevate the process with the dashboard to really enable cross-team collaboration. So it’s not just your testers, test automation engineers or developers that are engaged. Your domain experts, even other parts of the team like UI/UX design, for example, get to be involved in that collaboration process.
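To make the contrast concrete, this is roughly what the basic pixel-diffing approach Mark refers to looks like, sketched here with the open source pixelmatch library as an illustration (file names and threshold are arbitrary example values). It flags every changed pixel but has no notion of grouping 76 differing screens into two categories of similar changes.

```ts
import * as fs from "fs";
import { PNG } from "pngjs";
import pixelmatch from "pixelmatch";

// Naive per-pixel comparison of a baseline screenshot against a new one.
const baseline = PNG.sync.read(fs.readFileSync("baseline.png"));
const current = PNG.sync.read(fs.readFileSync("current.png"));
const { width, height } = baseline;
const diff = new PNG({ width, height });

// Returns the raw count of pixels that differ beyond the color threshold;
// anti-aliasing, dynamic content and small layout shifts all show up as "failures".
const changedPixels = pixelmatch(
  baseline.data,
  current.data,
  diff.data,
  width,
  height,
  { threshold: 0.1 }
);

fs.writeFileSync("diff.png", PNG.sync.write(diff));
console.log(`${changedPixels} pixels differ`);
```

The AI-based approach described in the conversation layers categorization on top of this kind of raw comparison, so a reviewer sees a handful of clustered differences rather than a per-pixel pass/fail verdict for every screen.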
Interviewee 2: Yes, and we often see that communication is key, and it’s not happening in testing, where we have these more and more decentralized product teams doing their own thing with their own toolkits and pipelines, but not really benefiting from what other teams are finding. And so, coming back to that horrible checklist approach, right, where you check those boxes or you don’t. It’s a very monotonous affair as a human being to review this, but once you’ve reviewed it, it usually comes back to you again and again, and then it comes back and hits other people in the organization, because there is no communication and there’s no unified platform to retain that knowledge. Right? And with reinforcement learning, you can teach the system what it needs to come back to a human with, and what it shouldn’t. So if one team makes decisions, all the other teams benefit from that, and it becomes less and less of this toil type of task where, as a human, you also tend to make mistakes and it’s just not scalable. Right? So that whole scalability aspect is an interesting one that deep learning and reinforcement learning bring us, Mark.
Interviewee 1: Yeah, you hit on the key things. Right? You’re scaling to meet the current demands of release velocity and shorter release cycles. But the other thing I always think about with AI and machine learning and how they can help us from a software quality perspective: it’s an assistive technology, so you’re reducing or removing mundane and repetitive tasks, which in themselves are error prone. Humans are great at creative work. We’re terrible at boring, monotonous things because we miss stuff. Whereas leveraging AI, and especially Applitools Visual AI, highlights those things. So in that same example I talked about, 76 screens, you’re gonna miss stuff if you’re looking at them all and you’re like, oh, it’s the same, it’s the same, it’s the same. But the Visual AI will identify the key differences between the 76 images to really focus and improve the review effort.
Interviewee 2: Yeah. And it gets even more interesting when you look a little bit into the future, Mark. I know that you can’t share roadmap items, but that whole idea of a Visual AI can go even beyond just user interface testing, right, where you can do sanity checks in certain areas, where you say, “Oh, yeah, no, that number can never be a billion because of what I know about the data in the background.” Right? There are a lot of things that we continuously see coming through when you release, and you don’t want to be the one on the test team to hold back a release in the middle of the night.
It’s funny: when I was young and still had a ton of hair, that was what happened, and it’s still happening today just as much, right, where people think, “Oh, no. I shouldn’t be the one who says that. I think it’s all fine. Let’s sign off on that.” Right? So the release data kind of gets mushier and mushier in those areas, and then you get the situation where you say, “Oh, yeah, I had somebody look over it,” and yet something horrible has happened – some compliance violation or some crazy numbers in the shopping cart that can never add up. It can even be simple things: oh, this costs a billion dollars, or it’s euros instead of dollars but we are in a certain country. Those are things that you can catch more and more.
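The kind of sanity check Torsten describes can be thought of as simple domain rules applied to what the application renders. The sketch below is a hypothetical illustration of that idea, not a description of any Applitools feature; the field names, currency table and thresholds are all invented for the example.

```ts
// Hypothetical cart summary scraped from a rendered page during a test run.
interface CartSummary {
  totalAmount: number;
  currency: string; // ISO 4217 code, e.g. "USD"
  country: string;  // ISO 3166 code, e.g. "US"
}

// Illustrative expectation: which currency a given storefront should display.
const expectedCurrency: Record<string, string> = { US: "USD", DE: "EUR", GB: "GBP" };

// Flags values that "can never be right" even though the page rendered without errors.
function sanityCheck(cart: CartSummary): string[] {
  const problems: string[] = [];
  if (cart.totalAmount <= 0 || cart.totalAmount >= 1_000_000_000) {
    problems.push(`implausible total: ${cart.totalAmount}`);
  }
  const expected = expectedCurrency[cart.country];
  if (expected && expected !== cart.currency) {
    problems.push(`currency ${cart.currency} does not match country ${cart.country}`);
  }
  return problems;
}

// Example: a billion-dollar cart priced in euros on a US storefront fails both checks.
console.log(sanityCheck({ totalAmount: 1_000_000_000, currency: "EUR", country: "US" }));
```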
Interviewer: Fair. Guys, I feel like I didn’t get many words in here –
Interviewee 1: We feel the same way.
Interviewer: – but it’s okay. I try to be a good host and let the guests talk, and you guys had it. Right? This is something, Torsten, that you’re obviously very, very passionate about and involved in, and, Mark, it’s kind of your life. Right? So I let it go. However, we are just about out of time, believe it or not, and we’ve gone for 15-plus minutes on this. But –
Interviewee 1: Can I just let people know where they can get the report? Would that –
Interviewer: That’s what I was going to ask you.
Interviewee 1: Sorry, I just wanted to make – yeah. So –
Interviewer: Go ahead.
Interviewee 1: – yeah, go to applitools.com/ema and you’ll be able to download the full report and access all the information there. So yeah, thank you, Alan.
Interviewer: Thank you, Mark. So it’s applitools.com/ema.
Interviewee 1: Yes.
Interviewer: Which also happens to be where Torsten works – EMA, Enterprise Management Associates. Guys, it sounds like it was a really worthwhile research project with some really good results, findings and insights. Look, there’s a reason Applitools has been as successful as it has. The whole idea of AI and testing is like peanut butter and chocolate in some ways, but we’ve barely scratched the surface here, right? There’s so much more yet to be done, and I’m looking forward to hearing more about it in the future.
Interviewee 1: Yeah.
Interviewer: All right.
Interviewee 1: Thank you, Alan. Look forward to talking again soon.
Interviewee 2: Thank you.
Interviewer: All righty. Excellent. Torsten Volk, EMA, of course Mark Lambert from Applitools here on Techstrong TV. We’re gonna take a break and we’ll be right back with our next guest.
[End of Audio]