In this DevOps Chat we speak with Keshav Vasudevan, product manager for performance testing at SmartBear. Keshav talks about the move to “end-to-end DevOps,” the importance of shifting left with testing and the latest state-of-the-art in performance and load testing with SmartBear.
As usual, the streaming audio is immediately below, followed by the transcript of our conversation. Enjoy!
Alan Shimel: Hey, everyone, it’s Alan Shimel, DevOps.com. You’re listening to another DevOps Chat. Joining me in this DevOps Chat is Keshav Vasudevan – and I’m sure I mispronounced that; we’ll let him correct it. Keshav is with SmartBear and he’s our guest in this DevOps Chat. Keshav, welcome.
Keshav Vasudevan: Hey, thanks, Alan. So happy to be here and, yeah, I will correct it – it’s actually Keshav Vasudevan, so that’s how you say my last name. And I’m – I currently work at the –
Shimel: Vasudevan, okay.
Vasudevan: Yeah. I work –
Shimel: I’m glad you _____ –
Shimel: Well, give us your – okay, so – and we’ll call you Keshav for the rest of the podcast today. But, Keshav, you’re with SmartBear, but why don’t you give our audience a little background? What’s your own personal journey here?
Vasudevan: Oh, of course. I mean, I come from an engineering background. I love all things code. I love building applications and, in my spare time, I love building mobile apps and Web-based apps as hobby projects, which is why it felt like a natural fit to be at SmartBear Software. I’m incredibly passionate about quality as a whole, and I understand the importance of consistent quality in ensuring the best end-user experience.
So, at SmartBear, I’m currently the product manager for performance testing, specifically for the LoadNinja application. I help communicate the value of shift-left testing in the Agile- and DevOps-based ecosystems the world is shifting towards. And, of course, I’m responsible for the product management and execution of our new platform in the performance-testing space, which essentially allows people to do real browser-based load testing at scale, something that’s so far unprecedented.
Shimel: Very cool. So, Keshav, you’re right in the heart of what SmartBear’s about. You know, when we look at the DevOps space, the move is obviously towards “bigger is better.” We have all these companies trying to do end-to-end DevOps and complete-life-cycle DevOps. SmartBear, for as long as I’ve been familiar with them, has had multiple products and plays. I’m not saying you’re end-to-end DevOps or that that’s what you aspire to be, but SmartBear is definitely a company that isn’t singularly focused on any one product area. There were a lot of testing products, but there were a lot of other products too.
And, historically, each of those products almost had its own identity, as they should, but the SmartBear moniker was almost secondary to the individual products. Recently, we’ve seen that change, right? SmartBear has made a broader effort to bring all of these disparate products together into a cohesive vision, going to market under “SmartBear.” You wanna elaborate on that a little bit? Or, better yet, give our audience kind of the breadth of all the different solutions SmartBear can help with.
Vasudevan: You bring up a great point, Alan. So SmartBear’s been around for almost ten years now. And, since our inception in 2009, we’ve grown into a global company that testers and developers and architects and designers have come to trust with their overall software-development needs. Our initial focus, when we first started, was around software collaboration and testing, but, eventually, as the company grew, as we added more talented people, and as our user base grew, we realized that we could actually solve much broader functional use cases.
And so we now have 11 global offices. We’re a big advocate of open-source platforms and tools, and so a combination of our open-source tools (SoapUI, one of the most popular API testing tools, and Swagger, which includes the Swagger UI, the Editor, and the Codegen) and the commercial SwaggerHub are all part of our ecosystem. In total, we have over 6.5 million users and over 22,000 customers of our commercial platforms, and we now have over 500 employees. So that is the history of SmartBear and where we are today, but you’re absolutely right. We have a breadth of different products that each cater to different aspects of the overall software development life cycle.
And so, if you look at the overall life cycle, we have tools that solve use cases and problems in the design phase of the software journey, in the actual development, testing, and collaboration aspects, and in cross-functional and performance testing. We’ve integrated with a bunch of our partners in the deployment phase, with AWS, for example, or Kubernetes. And, finally, for actual post-production and post-deployment, we have monitoring tools as well. So we essentially play across the entire software development life cycle, and our tools cater to different audiences and different use cases in those journeys.
Now we’ve gotten a lot of good feedback for each of these products, but we also recognize, at the end of the day, that, in today’s day and age, Agile and DevOps are where software-development teams are moving. This is primarily because software development has evolved to meet the growing demands of consumers. And today’s consumers of software applications are looking for two important things. The first is they want features, right? The switching cost to go from one application to the next is getting lower and lower, so, if I, as a consumer, am not getting the best features, new updates, or enhancements on a regular basis, I might get frustrated and just move to another vendor. And the second thing is we’ve started to expect a lot of quality in everything we consume because, as digital consumers, we spend over 11 hours a day in front of our computers, working through different applications. So we’ve come to expect a certain standard of quality.
And this ideal of fast development, with testing and quality incorporated into every aspect of the journey, is achieved by bringing Agile and DevOps into your workflow. And we know that, to move across all of these different functions in a seamless and automated fashion, you have to integrate all of the different tools that cater to those individual use cases, in a seamless fashion as well. And one other thing we pride ourselves on, at the product level, is seamless integration across all of these different verticals, right?
So, for example, in the performance-testing space, LoadNinja integrates with Jenkins, which now allows you to automate your load testing with real browsers at scale, whether with 10 virtual users or 10,000. You can rest assured that is the most accurate representation of load. All of this can be integrated and automated using Jenkins with a simple plug-in. Right? And this also integrates with other tools. For example, we have an integration between our design tool, SwaggerHub, and our API-testing tool, SoapUI. So you can easily design and architect your API, make sure it solves the end-user use cases, and then push it to the tester and developer to actually test each of these functionalities. So we wanna make sure that this ideal of DevOps is actually achieved in execution, with seamless integration.
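The CI-driven automation described here can be sketched generically: a pipeline stage kicks off a load test, reads back aggregate results, and passes or fails the build against a performance budget. The result fields and the 3-second / 1-percent thresholds below are hypothetical stand-ins, not LoadNinja’s actual plug-in API; this is a minimal sketch of the gating logic only.

```python
# Sketch of CI gating on load-test results. The result fields and the
# thresholds are illustrative assumptions, not any vendor's actual API.

def evaluate_run(results: dict, max_avg_ms: float = 3000.0,
                 max_error_rate: float = 0.01) -> bool:
    """Return True if the load-test run stays within the performance budget."""
    within_latency = results["avg_response_ms"] <= max_avg_ms
    within_errors = results["error_rate"] <= max_error_rate
    return within_latency and within_errors

# Example summary a CI step might receive back from a load-test service.
sample = {"avg_response_ms": 2100.0, "error_rate": 0.004, "virtual_users": 10000}
print("PASS" if evaluate_run(sample) else "FAIL")  # prints "PASS"
```

In a Jenkins Pipeline, a check like this would typically run as a post-build step, with a non-zero exit code failing the stage so regressions never reach deployment.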
Now one level higher is the actual story and the marketing and branding around it. And we’re proud to say that SmartBear recently unveiled its new brand, which gives our customers, prospects, and users a more cohesive understanding of how each of these tools fits together, lets them feel proud of being part of the SmartBear ecosystem, and allows them to move seamlessly from one tool to the next, because we maintain the branding and user experience across each of these tools. That was a huge undertaking, and we unveiled it last year at our user conference, Connect. And we’ve gotten a lot of good feedback about our new brand, our logo, and everything around that. So it’s a long-winded way of saying, “Yes, you’re absolutely right. And we’re making sure that we are achieving that as well.”
Shimel: Absolutely. And you know what? I have one of the shirts with the new logo they sent me too, so thanks for that. But, you know, the interesting takeaway here: I hear from so many people, and I recently received an e-mail from a fellow in the field who said, “We need to be moving to platforms. We need platforms to build on. And how do you build a platform? And where do you see the need for a platform?” And then you look at SmartBear, as you’ve just laid out here, taking the individual point solutions you put together over the years and showing how they integrate with one another, as well as with best-of-breed tools out there like Jenkins and the many open-source solutions we’ve been blessed with in DevOps. I mean, this is how you stitch together a platform. This is how you get “one plus one equals three.” Right?
Vasudevan: Yes, exactly. Absolutely.
Shimel: Yep. So, Keshav, we’ve spoken an awful lot about background on SmartBear. I really wanted to spend our time talking about what’s new and what’s going on with you and your team there, so why don’t we dig down and peel back the onion a couple layers? You alluded to it earlier, but let’s dig in: what’s happening?
Vasudevan: Absolutely. So one of the things I mentioned in my introduction, something which we as a company are really passionate about, is ensuring quality. Right? And, over the last 15 years, we as users (and when I say “users,” I mean anyone: me, you, our friends, essentially consumers of applications) have started to expect a certain level of quality. And this is how software development has evolved, in order to meet this demand.
And, as we have touched upon, this is why Agile has evolved: essentially moving fast, allowing developers and engineers to build features much faster and deploy them seamlessly so that end users get them as quickly as possible. Right? And we’re always listening to end-user feedback and updating the platform. Then there’s DevOps, right? Which, again, follows the same ideal, with a much more collaborative approach that brings testing, operations, and development together.
Now let’s dissect quality, the testing aspect of it. Right? So, to allow for Agile- and DevOps-based testing, functional-testing tools have done a great job of keeping up with the times. For example, we have two tools like that, SoapUI and TestComplete. And they’ve done a phenomenal job of allowing people to quickly come in and create their test scripts, make sure it all works, write their test cases really quickly because they have features to do that, and then automate the entire process, using maybe a REST API or, of course, an existing CI/CD tool.
Performance testing, though, which is a big business KPI in today’s day and age, is something that people are still, to this day, scared to do. We talk to product teams and engineering teams on a daily basis who tell us how important they know performance testing is. For example, we know that, if it takes more than 3 seconds for a certain e-commerce page to load, a customer will leave. They’ll just go to another platform. The switching costs are getting lower and people are getting more impatient, so, if a certain transaction is gonna take a while, people will leave. And, at scale, that is a huge business loss for your company, your platform, and your organization as a whole.
So product and engineering teams, Agile teams, want to do performance testing. However, performance testing, to this day, has not evolved to keep up with the growing needs of Agile- and DevOps-based workflows. To this day, it takes such a long time for me, as a performance tester, to create a test script. And, if I were a developer or functional tester wanting to do performance testing, I could forget about it: I’d have to hire a consultant, because the learning curve is so steep, or allocate a lot of time in my day-to-day operation just to create a simple test script. This is because, over the last 30 years of performance testing, the technological process has not evolved at all. To this day, if you’re creating a performance test script, you have to record all the transactions between the browser and the server, and you’re doing this at the protocol level. That is typically how you create a performance test script.
Now, as you know, the responses and requests between server and client have certain dynamic characteristics, like session IDs, cookies, and so on. So, in order to play this back across 1 user or 10,000 users, you have to do this programming and dynamic correlation. This process takes anywhere from several hours to a few days to get right, even for the most basic transactions. Now imagine doing this for the complex applications we have today, some of the AJAX applications that are very client-rich in terms of actions, allowing users to do multiple things. The effort involved multiplies threefold. Right?
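The dynamic-correlation chore described above (capturing a per-session token from one response and re-injecting it into later requests) can be illustrated with a toy example. The header name, token format, and request template here are invented for the sketch; real correlation rules are tool- and application-specific.

```python
import re

# Simplified illustration of "dynamic correlation" in protocol-level load
# testing: a recorded script contains a hard-coded session ID that must be
# replaced with the value each virtual user actually receives at runtime.
# The header name and ID format are made up for this example.

RECORDED_REQUEST = "GET /cart?session_id={SESSION_ID} HTTP/1.1"

def extract_session_id(login_response: str) -> str:
    """Pull the dynamic session token out of a server response."""
    match = re.search(r"Set-Cookie: session_id=([A-Za-z0-9]+)", login_response)
    if match is None:
        raise ValueError("no session_id found; correlation rule failed")
    return match.group(1)

def correlate(template: str, session_id: str) -> str:
    """Substitute the per-user token into the recorded request template."""
    return template.replace("{SESSION_ID}", session_id)

# A fake response such as each virtual user might receive on login.
response = "HTTP/1.1 200 OK\r\nSet-Cookie: session_id=a9f3K2; Path=/\r\n\r\n"
sid = extract_session_id(response)
print(correlate(RECORDED_REQUEST, sid))
# prints "GET /cart?session_id=a9f3K2 HTTP/1.1"
```

Multiply this by every dynamic value in every recorded transaction and the hours-to-days estimate in the conversation becomes easy to believe.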
And so this is why performance testing is still, to this day, seen as something no one wants to touch. They’ll hire a consultant; they’ll have a centralized function to do this. Then there’s, of course, the actual process of generating the load itself because, to this day, we still use emulators to generate the load, which is not necessarily representative of the end-user experience.
So this is something we saw over the last year, and we decided to do something about it. Our engineers came together and we said, “We are passionate about this. We face this issue ourselves. Let’s do something about it.” And so we productized a new approach to load testing, essentially allowing anyone who’s behind an application (be it a marketer, a product manager, a functional tester, or a performance tester) to come in, record, and instantly play back the transactions they record in the test script, without doing any programming or dynamic correlation whatsoever. Right?
And, from there, you can take the script you have and spin up tens of thousands of real browsers to run the load test based on those transactions. Right? This is, and I don’t use the word lightly, a revolutionary approach to performance testing, because it’s unprecedented: you create browser-based test scripts without any protocol-level recording, which lets you create test scripts out of the box, and then you simulate tens of thousands of real browsers to run those scripts. So, as a tester, you’re, one, going to save 50 to 70 percent of your time on the actual load testing, because the scripting process has become so much easier. And, two, you’re gonna get results coming straight from the browser, so you can rest assured this is the most accurate and realistic representation of the performance your application will show end users. And that is what LoadNinja is, as a product.
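The “real browsers at scale” idea can be modeled in miniature: run one unit of user work per virtual user concurrently and aggregate the observed latencies. The stubbed action below stands in for an actual browser interaction; a real tool drives actual browser instances and far larger user counts, so treat this only as a sketch of the measurement pattern.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

# Toy model of load generation: each "virtual user" performs one user
# action while we record its wall-clock latency. The sleep is a stand-in
# for a real page interaction driven by a real browser.

def user_action(user_id: int) -> float:
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for an actual browser step
    return (time.perf_counter() - start) * 1000.0  # latency in ms

def run_load(virtual_users: int) -> dict:
    """Run all virtual users concurrently and summarize their latencies."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        latencies = list(pool.map(user_action, range(virtual_users)))
    ordered = sorted(latencies)
    return {
        "users": virtual_users,
        "avg_ms": statistics.mean(latencies),
        "p95_ms": ordered[int(0.95 * (len(ordered) - 1))],
    }

print(run_load(20))
```

The aggregate numbers (average, 95th percentile) are the kind of browser-sourced metrics the conversation refers to, as opposed to protocol-level response times measured by an emulator.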
And, over the last five months, since we launched at the end of October during our user conference, we’ve seen tremendous traction in the market. We’ve seen so many customers come to us, sign up for annuals, and partner with us to do their load testing. And it’s just phenomenal, as a product manager, seeing the vision come to fruition and seeing the people behind products (product managers, functional testers, all of them) coming and saying, “Hey, we can now do performance testing by ourselves, without having to get a certification with an expensive tool, without having to spend so much money on a consultant.” They can do this all by themselves, and that’s the most gratifying thing I’ve ever heard. So, yeah, there’s a lot happening, but LoadNinja is something we’re super proud of as a company. It was built in house by us, and the technology behind it is something performance testing has never seen to this day.
Shimel: You know, listening to you, all I can think of is back to my own days as an entrepreneur developing security solutions, when we’d have to go get performance testing done. We were selling to DOD, so these were large, large networks, and this was pre-DevOps, obviously, and pre-automation and pre-continuous. And, my god, the budget for the performance testing was bigger than the budget for the development team. It was nuts. So, when I hear now about how this stuff is being done in the cloud, automated and scalable, and at these price points, it’s just crazy.
Vasudevan: It is. It is. We’ve come a long way. And yet, to this day, performance testing is still seen as a monolithic and centralized function, right? Many of the factors you saw years ago are still there. So you’re absolutely right, and it’s good to see the cloud coming in, but, as a performance-testing industry and as tool vendors, we can do a lot more to push the boundaries and keep up with the growing demands of engineering and quality as a whole.
Shimel: Yep. Got it. Got it. You know, Keshav, I told you when we started that these things get out of control. We were supposed to go 15 minutes; we’re at 20.
Shimel: We’ll have to continue our SmartBear discussion on another DevOps Chat, at another date. We’ll have you back on soon. But why don’t we put an end to it right here? Before we do, though: for people listening who liked what they heard and are interested, where can they go to get more information?
Vasudevan: Oh, absolutely. So the best place to go is loadninja.com, that’s L-O-A-D-N-I-N-J-A-dot-com, and you’ll have all the required information to get started and become successful. One of the things we pride ourselves on, as a product and engineering team, is our turnaround for new features and enhancements. We deploy almost every week, applying many of the learnings from the DevOps world, and from your website, Alan, to continuously integrate and deploy every week and ship the features and enhancements that users require. So go to loadninja.com; you’ll find all the information you need.
Shimel: Fantastic. Thank you very much for being a guest on this DevOps Chat. And I am serious – we’ll have to have you back on here – but, for now, I think we’re gonna pull the plug on this episode. Thanks, everyone, for listening. This is Alan Shimel for DevOps.com, Security Boulevard, Container Journal, Digital Anarchist. You’ve just listened to another DevOps Chat. Have a great day, everyone.