Stephen and Alan discuss the biggest testing challenges in 2022, along with the tools and methods needed to solve them. The video and a transcript of the conversation are below.
Recording: This is Digital Anarchist.
Alan Shimel: Hey everyone. Welcome to another Techstrong TV interview. My guest for this segment is Stephen Feloney of BlazeMeter. I think it’s BlazeMeter by Perforce. Is that correct, Steve?
Stephen Feloney: That’s correct.
Alan Shimel: Yep. And for those who aren’t familiar, of course, BlazeMeter was originally founded by a friend of mine, Alon Girmonsky. Great story around one of the pioneers in open source testing and continuous testing. BlazeMeter was acquired, I guess it had to be six years ago, maybe more, by what was at the time called CA, or CA Technologies, Computer Associates. It always operated kind of semi-autonomously. And then I guess it was last year they found a great home with the folks at Perforce, who have also been around a while. Perforce is of course a leader, and they’ve really expanded their capabilities from code repository to testing and so many other things. So it was a good match, BlazeMeter and Perforce. We’re happy to see it. Stephen, so I gave the BlazeMeter story, but I can’t give the Stephen Feloney story. I need you to do it.
Stephen Feloney: Oh well we’ll need hours to give my story.
Alan Shimel: Give us the short version.
Stephen Feloney: So I’ve been in testing, more or less, for 20-some-odd years. I came to CA, must have been seven or eight years ago, and I was part of the team responsible for acquiring BlazeMeter into CA. And I always had plans for BlazeMeter. As you mentioned, it sort of ran autonomously inside of CA. Then CA was acquired by Broadcom, and once we were in Broadcom I was able to expand BlazeMeter to go beyond performance, and we added a multitude of features: virtual services, API testing, API monitoring. Grew all that. And now, as you said, we found a very good home with Perforce, and that acquisition happened right at the end of October of last year.
But I’ve been doing product management work for, I don’t know, about 15 or 16 years, almost exclusively in the testing space, a little bit in the mobile space and a little bit in the monitoring space. And what’s interesting is, moving around, going from testing to mobile development and mobile testing and then into monitoring, I now have all of that with Perfecto and BlazeMeter coming together at Perforce. So I have the monitoring. I have the mobile. It’s all coming together. So everything I’ve learned over the years, I can make it all come together and work in one spot.
Alan Shimel: So give us your best Dr. Evil and tell us isn’t it great when something comes together like that?
Stephen Feloney: Yeah. It’s fantastic and I hope to make one billion dollars.
Alan Shimel: Absolutely. Good stuff, Steve. So look, first of all, we’ve had the pleasure of talking to several of the BlazeMeter folks since the acquisition by Perforce, and it really does seem to be a good match. But what I wanted to focus on today is the state of continuous testing, if you will, and then a little bit of what’s new specifically at BlazeMeter. What’s coming down the pike that we can be on the lookout for?
Stephen Feloney: Sure. Sure. So BlazeMeter historically has been a performance testing solution built upon open source, as you mentioned, and we play two different roles. We work with centers of excellence who are doing large, massive load tests, and we work in a shift-left world with agile teams. And we were noticing struggles when it came to testing with the agile teams. So when you’re looking at continuous testing, continuous testing is, well, a continuum. It starts off with the development and agile groups, works along sometimes into full end-to-end testing, and then continues into production.
And so if we talk about that continuum, we look down at the agile teams and we noticed that they were struggling. They weren’t testing as much as they should. And we interviewed dozens of different companies to try to understand their challenges with this. There were a multitude of challenges. One is they didn’t like using multiple different tools to try to get all the testing done, whether it’s a functional test, a performance test, an API test, a unit test, all different types of testing, all these different tools. Some have to be downloaded onto your devices. Some are in the cloud. They all have different UIs, different reporting. That’s a challenge. And then another challenge was: we don’t have time to get data. I mean, think about it. When you’re testing, you need data.
Alan Shimel: Yeah.
Stephen Feloney: But it wasn’t just data to run the test. I also need data for the back-end system, and then, if I don’t have my full environments and I’m using mock services or virtual services, I now need data for those too. How do I get all that data in and make sure it’s all in sync? Because when I’m developing code, I have a very limited time to develop that code and get it out the door, and if I have to spend time figuring out how to get all this data and sync all this data together, that’s a challenge.
And then, with that challenge, as I mentioned, I don’t have my full environment. So now I need virtual services. Where do I get those? Who is developing those? And then I have to figure out what data they put in. So the mock services and the virtual services, as well as the data, are two big problems that are hindering people from getting their testing done. And then we have other issues around how do I generate my tests? I just want to develop. I don’t want to spend my time creating all these tests. So those are challenges there. And then, on the flip side, if I’m trying to do continuous testing and continue the testing into production, what tests do I set up?
How do I monitor? How do I test in production when I don’t know what I tested, when there’s not full collaboration across the entire value stream? How do I know what to test in production? Who is creating those tests? Do you have your ops team, and are they monitoring or testing the same things that I wanted to test, or that I did test, in pre-production? Those are quite a few of the challenges that people run into when they’re trying to do continuous testing, and continuous testing at scale.
Alan Shimel: Absolutely, and emphasis on that scale, because really that’s where you start seeing the cracks. When you’re not at scale, sometimes you can muscle it. Right? Or even fake it. But when you start hitting scale, man, whatever your weakest points are, they’re going to come right to the forefront.
Stephen Feloney: That’s correct.
Alan Shimel: You’re going to say, oh my goodness, this doesn’t scale. Right?
Stephen Feloney: No, that’s right, because you could have one group, one team, one service that’s able to do something special, unique, and they can get that out the door in their own special way. But when you try to scale, either because that service or application has grown or because you’re trying to scale it across an enterprise, as you just said, all the cracks show. The whole thing falls apart.
Alan Shimel: Agreed. All right. Let’s talk about what’s new at Blaze Meter then. What are we doing to help?
Stephen Feloney: Well, surprisingly, a lot of the challenges I just mentioned we’re able to solve. Purely coincidental.
Alan Shimel: I’m shocked.
Stephen Feloney: I’m shocked as well. I mean, I didn’t know this was coming. Yeah. So one of the things we actually just released, on February 3rd, was the BlazeMeter Test Data piece of BlazeMeter. Whether it’s for performance testing, functional testing or API testing, we can generate the data that you need. And the data that gets generated isn’t just data to drive a test. If you’re using mock services or virtual services, we’ll generate the data for those as well, and we’ll generate data for the system under test.
And the idea is that all of it stays in sync. We’ll generate the data at the time you want to run the test, so now you don’t have to worry about managing and maintaining the data sets. We will generate the data on the fly, to run the tests and to ensure that all that data is in sync. That takes away a lot of the false positives that you’ve seen in the past, where things are failing but it turns out the data in the virtual service, or the system under test, wasn’t set up the right way. All of that goes away, and you don’t have to put any thought into it.
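[Editor’s note: BlazeMeter’s own API isn’t shown in the conversation, but the idea of generating synchronized data at test time can be sketched in a few lines of Python. The snippet below is a minimal illustration using the open-source Faker library and a hypothetical users endpoint; it seeds the generator once so the same records both drive the test and populate the mock service.]

```python
# A minimal sketch of generating test data at run time and keeping it in
# sync across the test driver and a mock service. This illustrates the
# idea discussed above, not BlazeMeter's actual API; the /users endpoint
# and field names are made up for the example.
import csv
import json

from faker import Faker

fake = Faker()
Faker.seed(1234)  # one seed means the same records every time we generate

# Generate the records at test time instead of maintaining a hand-curated
# "gold copy" of the data.
records = [
    {"id": i, "name": fake.name(), "email": fake.email()}
    for i in range(100)
]

# Feed the records to the load test as a CSV data set ...
with open("test_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "email"])
    writer.writeheader()
    writer.writerows(records)

# ... and to the mock/virtual service as canned responses, so a request
# for user 7 during the test finds exactly the user 7 the mock knows about.
mock_fixtures = {f"/users/{r['id']}": r for r in records}
with open("mock_fixtures.json", "w") as f:
    json.dump(mock_fixtures, f, indent=2)
```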
Alan Shimel: Crazy. It’s crazy that no one has thought of this, or implemented this, before, when it seems rather elementary at some level. Like, of course you’d want to do this. But I had a conversation with the last guest here on Techstrong TV, and we spoke about how the issue is a maturation issue. Right? Until your DevOps processes, until your CI/CD processes, are at such a level of maturity, you don’t even run into this problem, because you have problems that pop up before it that take your focus away.
Stephen Feloney: Correct. Yeah. So in the past, when you were handing it off to a testing team, the testing team was used to the problem and they had time to get the data. They had time to set up the data. They had a gold copy of the data sitting around that they would use. They were familiar with it. But as you’re saying, once the CI/CD process, the shift-left process, happened, the data problem and the virtualization problem were completely exacerbated. Because they don’t have the time. They don’t have the ability. They don’t have the skill set. They don’t have the desire to do all that. They want to get the code they developed out the door as quickly as possible, and testing is sometimes seen as a hindrance. It’s a necessary evil. Going back to Dr. Evil: a necessary evil, if you will.
And so anything that we can do to accelerate that and make it much easier for them, that is the challenge. So yeah, you could ask why no one has done this whole data thing before. There are synthetic data generators. Right? There are open-source synthetic data generators. That’s fine. But they just generate the data in one spot. You still have to figure out how to take that, pull it out, put it here, put it there, put it there. And we’re taking care of all of that for you.
Alan Shimel: Got it. Very cool. Hey. For people wanting to get more information on that Stephen where can they go?
Stephen Feloney: They can go to blazemeter.com.
Alan Shimel: Just straight up right on the front page?
Stephen Feloney: Yeah. Just right on the front page. It’s the newest part of our release, so you’ll see it there. You’ll see blogs. You’ll see help. We have multiple discussions on it. Yep.
Alan Shimel: Very cool. Hey. We’re coming up on time here. Anything else coming down the pike or you wanted to let our audience know about?
Stephen Feloney: Yeah. There are actually quite a few things coming down the pike, but one of the more interesting things we’re working on addresses one of the challenges I mentioned: generating tests. People don’t want to spend the time to even generate the tests. And there are two things that come with that. One is just generating your API tests. So we’re going to have, coming soon, an API test generator to help with that, and it’s combined with our synthetic data gen; those two work together to generate the tests. The next thing, which is a natural evolution, is that you don’t just want positive tests. You need negative tests as well.
And so that data gen is going to auto-generate negative tests for you, whether it’s negative data driving a test, putting negative data into your system under test, or having negative things happen with those mock services or virtual services. So you want to delay a virtual service. You want to knock out a virtual service so it doesn’t respond, or just provide different types of data sets to see how robust your system is. And once you do that, the next step is: all right, I have negative testing. Then you combine the positive and the negative together, and now you have chaos testing. So those are coming down the pike as well.
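[Editor’s note: as a rough illustration of the negative-testing ideas Feloney describes, here is a small Python sketch of a mock service with fault injection: it can delay a response, fail outright, or return data the client won’t expect. The probabilities, port, and response shape are invented for the example; this is not how BlazeMeter’s virtual services are configured.]

```python
# A hand-rolled mock service that injects faults: slow responses, HTTP
# errors, and malformed bodies. Everything here (probabilities, port,
# response shape) is hypothetical, for illustration only.
import json
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

DELAY_PROB = 0.2    # 20% of responses are slowed down
ERROR_PROB = 0.1    # 10% of responses fail with HTTP 500
GARBAGE_PROB = 0.1  # 10% of responses return a body the client won't expect

class FaultyMockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Negative test: a slow dependency.
        if random.random() < DELAY_PROB:
            time.sleep(5)
        # Negative test: a dependency that errors out.
        if random.random() < ERROR_PROB:
            self.send_response(500)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        # Negative data: a well-formed HTTP response with a bad payload.
        if random.random() < GARBAGE_PROB:
            self.wfile.write(b"not json at all")
        else:
            self.wfile.write(json.dumps({"status": "ok"}).encode())

if __name__ == "__main__":
    # Point the system under test at this stub instead of the real service.
    HTTPServer(("localhost", 8080), FaultyMockHandler).serve_forever()
```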
Alan Shimel: Cool. Love it. I don’t want to hold your feet to the fire or make you say anything you’re not supposed to say, but what are the timeframes on those, if you can talk about them?
Stephen Feloney: Well I’ll say this. By the end of the year you’ll see this. I don’t want to give solid timeframes but this year those will be coming out the door.
Alan Shimel: Ok. I think that’s fair Stephen. Hey. I want to thank you for coming on and updating, getting us up to speed here.
Stephen Feloney: Yeah. Thank you very much.
Alan Shimel: Continued success. We’re hoping maybe at some point we’ll see BlazeMeter at a real live conference.
Stephen Feloney: Oh, I’m hoping to get to a real live conference. As much as I enjoy maybe not traveling as much, I miss the interpersonal interaction.
Alan Shimel: Yeah, no. We all miss it.
Stephen Feloney: Yeah. I mean, it’s needed. It’s a must.
Alan Shimel: Absolutely man. Hey. Steve Feloney. BlazeMeter powered by Perforce here on Techstrong TV. We’re going to take a break and we’ll be right back with another guest.
Stephen Feloney: Thank you very much.
Alan Shimel: Thanks Steve.
[End of Audio]