In this episode of DevOps Unbound, Alan Shimel is joined by Christine Yen of Honeycomb, Paul Bruce of Tricentis and Mitch Ashley from ASG/MediaOps to discuss how to effectively test the performance of your applications and how bringing observability into testing environments helps DevOps teams identify and resolve issues early in the SDLC. The video is below, followed by a transcript of the conversation.
Alan Shimel: Hey, everyone. I'm Alan Shimel, CEO of MediaOps, Editor-in-Chief of DevOps.com, Security Boulevard and Container Journal, and you're watching DevOps Unbound. DevOps Unbound is a biweekly video series where we have some of the leading lights in our community talk about topics of interest around DevOps. We're lucky enough to have DevOps Unbound sponsored by our friends at Tricentis, so many thanks to Tricentis for their sponsorship. But this is a MediaOps event.
Let me introduce you to today’s panel and then we’re going to jump into our topic. Let me first introduce the guy with all the robots in his background there: Paul Bruce, with a 404 shirt. Paul, welcome. Why don’t you introduce yourself to our audience?
Paul Bruce: Sure. Thanks, Alan. So, my name is Paul Bruce, as you said. I'm on the nerdy side – I'm a performance and reliability geek. That's just a place of complexity that I love; it's just my bag, it's just what I like to do. On the other side I'm a community organizer. I work with DevOpsDays Boston and the Boston DevOps Meetup. I'm hosting an event along with Liz Fong-Jones this year called o11yfest, a vendor-neutral, community-focused, OpenTelemetry-focused two-day event around observability. It's half days so that we don't burn people out. And I've worked with the Tricentis team since the NeoLoad acquisition – again, performance and reliability. I was with Neotys and NeoLoad for years, and I really loved that team. And I love my new one, the broader one at Tricentis. So, that's me.
Shimel: Very cool. Thanks, Paul. And then, joining Paul today is – we’re very lucky to have her – someone I’ve had the pleasure of interviewing more than several times now over this past year of Covid but hopefully soon to see in person, my friend Christine Yen. Christine is CEO of Honeycomb. Christine, welcome.
Christine Yen: Hello. It's wonderful to be here. Alan, you shared my title, but my background is as a product engineer. Honeycomb, for anyone who isn't familiar, is a tool for observability. And I love working on it and I love talking about this topic because even though I tend more toward the software engineering side, I have done my fair share of breaking production. So this topic is near and dear to my heart: helping software engineers do that less and find out what their code really did more quickly.
Shimel: Very cool. Thanks, Christine. It’s great to have you here. And then last but not least is my cohost of DevOps Unbound. We’ve worked together for probably longer than either of us want to admit, but it’s hard to believe we’re both only 25. But CTO here at MediaOps, CEO of the Accelerated Strategies Group, Mitch Ashley. Hey, Mitch. Welcome.
Mitch Ashley: Thank you, Alan. But the best 25 – this year again. Twenty-five again, right?
Shimel: Yeah, 25 again again.
Ashley: Good to be here.
Shimel: Anyway – yep. Thanks, Mitch. So, today’s show, really, we’re going to explore the intersection – or maybe it’s a border between performance, performance testing, and observability. What – how do they play with each other nicely? What is the interaction? What is the relationship? And for those in our audience, you may be looking at it and saying, “Well, observability is kind of almost post-deployment. Hopefully we’re doing some performance testing pre-deployment and then post-deployment.” But how – I don’t know if I see a nexus right away but there’s – in my mind anyway there’s an obvious nexus. There’s an obvious connection.
So, let’s talk about it at a high level. How is observability and performance testing connected? And how can they – how can we get a one plus one equals five or four out of performance testing and observability? Christine, if it’s okay, I’m going to ask you to kick off with that and kind of lay out the framework, the ground rules, the – and we’ll all jump from there.
Yen: Well, laying out the framework and ground rules is a little bit of a big ask for early in the morning, but I’ll do what I can.
Shimel: Okay. I’m sure you’ll do great.
Yen: I'm wearing the wrong shirt for this. I think I've come on one of these interviews with my "test in production" shirt, but it's a phrase that can be interpreted in many ways. What does testing in production mean? There is a version of it that means, well, I'm not going to run tests beforehand; I'm just going to see what breaks in production and try to fix it after it's impacted users. Obviously, that's not what any of us mean. And there's a version of it that means, hey, look, when you write preproduction tests, when you write any sort of tests, what you're trying to do is compare actuals versus expected. You're trying to form a hypothesis, observe reality, and then think about the difference. Isn't that really just a different phrasing of what we're trying to do when we're looking at production, where we're using our logging or monitoring or observability tools to see what actually happened when our code is in production?
And so, I think there's a huge intersection, a huge overlap between things like performance testing and observability, because we are all using the same skill set, we're all using the same brain sequence. People seem to think that observability can be more exciting because it's incidents and downtime and alarms going off. I don't think it is. I think it's all the same. It's just at different points in the development life cycle. It's on different timelines. It's with a different level of urgency. But it's the same thing we're all trying to do: come up with hypotheses, validate them, and learn.
Ashley: I really like, Christine, the way you intersected those two, because the environment where we're doing all of this has changed substantially. It wasn't that long ago that it was kind of a static flow: we'd write code, we'd test it, we'd run it through CI/CD, we'd test it, we'd deploy it into production. And DevOps helped speed that up with automation and tools, and of course automation for testing. But now the software stack is fluid. The infrastructure is fluid. The code is fluid, because we may have introduced three new changes to containers, for example, in a cloud-native application since we've really had an opportunity to go and debug or determine what the source of an issue was. We also have dynamic applications that start up and go away, and those things may be long gone by the time we actually look at them. So, I think the key to observability is also giving you insight into an environment that may no longer exist by the time you investigate, and you need that information to be able to understand "This is really something I need to fix or address" – a bug, a feature, whatever it might be.
Bruce: Yeah, and I think there's a lot of intersection between putting pressure on systems – whether that's through actual users, risking revenue and all sorts of things, or doing prerelease testing, or testing in production. Performance is about: how does this system perform against expectations? And there are a number of different people with a number of different expectations. There's the technical expectation, hopefully articulated in something as legitimate as an SLO and then measured with SLIs. But there's also user expectation. There's the CEO expectation. There's B2B. If you're in some kind of regulated industry and that SLA is actually part of a legal requirement, that becomes really important. Like, we saw that with unemployment sites – I don't mean to pick on that kind of work, but that was a mess. Same thing with trying to get vaccines scheduled. And it's like, why are we still suffering from these systems that are out there, that are maybe good enough for the day-to-day but have never been exercised, where scaling has never really come about for that system?
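To make the SLO/SLI idea concrete, here is a minimal sketch of turning raw request latencies into an SLI and checking it against an SLO target. The 300 ms threshold, 99% target, and sample numbers are illustrative assumptions, not figures from the discussion.

```python
# Minimal SLI/SLO sketch (illustrative only; thresholds are made up).
# SLI: fraction of requests served under 300 ms.
# SLO: at least 99% of requests under 300 ms over the window.

def availability_sli(latencies_ms, threshold_ms=300.0):
    """Fraction of requests faster than the latency threshold."""
    if not latencies_ms:
        return 1.0  # no traffic, nothing violated
    good = sum(1 for latency in latencies_ms if latency < threshold_ms)
    return good / len(latencies_ms)

def meets_slo(latencies_ms, target=0.99, threshold_ms=300.0):
    """True if the measured SLI is at or above the SLO target."""
    return availability_sli(latencies_ms, threshold_ms) >= target

# Example: latencies (ms) collected during a load test or from production.
window = [120.0, 250.0, 310.0, 95.0, 180.0, 275.0, 600.0, 140.0]
print(f"SLI: {availability_sli(window):.3f}, meets SLO: {meets_slo(window)}")
```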
Okay, well, what about systems that do by default have scalability as a first class citizen? Going back to some of the stuff that Christine said – and actually, Christine, I love – your cofounder Charity has a phrase and I’m going to butcher it, but “It’s never not a good idea to have your glasses on when you go driving.” That idea that if you can’t see what’s going on that’s a big problem.
So, a lot of people traditionally would have used something like “Let’s just throw all our APM tools at production.” And that’s great. But going back to that original thing where I was saying pressure on systems really shows us how well we’re doing engineering, the impact of that pressure matters too. So, as a performance engineer, performance and reliability engineer, I’m constantly not just trying to – as my French team would say – make load on a system, put load, put pressure on a system, but I also care deeply about being able to measure the impact of that.
I'll pause there because we probably have some other questions, but there's more to go into about that. It's not an either-or. And it's not about just production or preproduction. It's about asking: when is it not a good idea to have visibility? Isn't it a good idea to have visibility on a thing, whether that's putting pressure on it before release and having visibility there, or doing that in production? We've got a lot of customers who do load testing in production, on production systems. This is not an impossible thing, folks, you know?
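As a rough illustration of "put pressure on a system and measure the impact," here is a toy concurrent load generator that records per-request latency. The target URL, concurrency, and request counts are placeholder assumptions, and a real performance test would use a purpose-built tool (NeoLoad, for example) with proper workload modeling rather than a script like this.

```python
# Toy load generator: send concurrent requests and record latencies.
# Requires aiohttp; target and volumes below are placeholders.
import asyncio
import time

import aiohttp

TARGET_URL = "https://staging.example.com/health"  # hypothetical endpoint
CONCURRENCY = 20
TOTAL_REQUESTS = 200

async def one_request(session, latencies):
    start = time.perf_counter()
    async with session.get(TARGET_URL) as resp:
        await resp.read()
    latencies.append((time.perf_counter() - start) * 1000.0)  # milliseconds

async def run():
    latencies = []
    sem = asyncio.Semaphore(CONCURRENCY)

    async def bounded(session):
        async with sem:
            await one_request(session, latencies)

    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(bounded(session) for _ in range(TOTAL_REQUESTS)))

    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"requests: {len(latencies)}, p95 latency: {p95:.1f} ms")

if __name__ == "__main__":
    asyncio.run(run())
```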
Shimel: So, let me try to put this in some kind of context that we can jump on. Here's what I hear Christine saying – and maybe I'm taking it up a couple of notches, getting out of the SLO woods – at the end of the day, what do we all want? We all want our apps and our infrastructure to run better. We want good predictability and visibility into how things are performing. And whether that is an aspect of performance testing, or you want to call it performance monitoring, or you want to call it observability, at the end of the day the aim is the same: let's understand how our stuff is running and make sure it's running the way we think it should be running. So, at a very high level it's almost equating performance and observability – it's all the same goal, which is let's make sure our stuff is running as well as we can make it run.
In my mind – and Christine, I'll defer to you – certainly over the last year, year and a half, maybe two, as observability has become more hyped and a real word, a real thing out there, it has taken on an even bigger mission than that. The goal may be "Hey, let's make sure things are running the best we can make them run and the way we expect them to run," but observability has a whole uber-mission of things that people want to do with it or use it for. Paul, you mentioned APM. Observability has kind of subsumed APM, AIOps, all of these things. But wait, there's more. Observability does more than that. It's become this whole big thing out there. And maybe that's right, maybe that's wrong. I'm not here to judge.
But that, I think, is really the crux of this. Yeah, we all have the same goals. Is observability – part of what observability does helping the performance engineer? Absolutely. Is there more to observability? I think so. Christine, you’re the observability expert. I’m going to ask you.
Yen: You know when you say a word enough times and it starts to lose its meaning and you’re like “This is starting to sound funny”?
Shimel: Yeah.
Yen: We – I think you’re – there’s – we are definitely approaching that with observability a little bit. And whenever I get to that point I like to pause and remind folks “Let’s look at the – let’s take a deep breath and look at the English word we are using to describe this. It’s not a bucket full of buzzwords. It’s not a bucket full of types of data. It is an ability to see. And it’s the ability to see into our production systems. What do you want to do with that? What can you do with that if you have an improved ability to see?”
Okay, well, now we have the laundry list of all the things that we can do once we have the ability to see into these systems that, as Mitchell mentioned, are so different than they were five or ten years ago.
Shimel: Oh, yeah.
Yen: And that to me is – that zooming out behavior is what re-centers me when it feels like we’re trying to do all the things. And you’re like “Well, it kind of makes sense. I want to have my glasses on when I’m driving.”
Shimel: Yeah. Paul?
Bruce: Christine, you’re definitely right about a buzzword merry-go-round. I have experienced that with “DevOps.” I have experienced that with – oh, man, I can’t believe I’m saying it again but I’ve got to because people will know – “shift left.” These terms that just get all the things. And we go at these things, these different terms, we latch onto them, we try to make it our own. Oh, man, vendors love to do that. We’ll take a word and we’ll make it ours and we’ll try to align it to our product strategy. And at the end of the day it’s like there’s 150,000 different versions of this thing, which as a performance guy I’m kind of like “Well, it’s not bad to have a big sample set,” but the sample set of what’s going on in the delivery chain from good idea to working software that’s making you money is not just in production.
Let's take Nike as an example. This is a known thing: they ship the watch, the watch that you open up on Christmas Day, and you go to connect to the internet, and because the servers were swamped you don't have a good Christmas experience. That's not the only one – I'm just picking on… But that kind of situation is one where there's a lot of stuff you can do to prevent obvious faults from getting to the point where that's the bad experience. And you also need visibility on the day your production is having problems. And it's not just that day; it's every day. Every day you have a certain nonzero amount of events and issues in your production systems. How do you address those things? Well, there are plenty of situations where you're only thinking about observability as it applies to production – which, by the way, is what makes you money. If code is not sitting in production doing something useful for people, you are not necessarily making money off of it. And production isn't just what's out for consumers; it could be all the different constituents in your larger organization. So, even private systems are production. And if something is not out there, then it's not making money.
So, is that the only place where we should pay attention, where we should have good visibility into logs, traces, and metrics, into understanding how the business is impacted, into long and complicated queues and delays in these systems? No. I mean, yes, in our edge and final production systems, of course we want that, because we don't want to be in a bad situation without the right visibility into problems when they're happening in production. But how do you think problems come about in production? Well, there are the emergent ones that we can't possibly know about – there's only so much time in the day. But then there's a whole bunch of other stuff. Every line of code is not just an asset; it's a liability. So, as you're constantly shipping these things out, there's plenty you can do to prevent obvious – and I mean dumb and stupid and obvious – things from getting out there, causing toil, waste, risk, and loss of brand, revenue, and all that stuff.
So, for me observability is not just in production. It's really useful to have classic monitoring in preproduction environments, except that it's super expensive. Oops. And then the pieces of observability – which Christine would probably be better placed to go into, all the things you can do and what you get out of it, and the use cases we now have – also apply to preproduction situations. That's why I'm so excited about the OpenTelemetry project and the community over there: this notion of being able to emit information from your systems, to either build it in directly or put it on after the fact, sort of replacing some of the early agent-type stuff that used to be the only thing monitoring could do. OpenTelemetry now makes it a standard that we can all benefit from no matter which environment. How about all environments? That would be super nice. And to get tracing and traceability all the way from the web app that somebody is touching down to the database and the 30 different APIs that it touches.
That's really useful, not only to firefight in production but to see "Am I risking dependencies? Am I going wildly off my architecture diagrams?" You know what I mean? Those things slide. Your conceived architecture diagram is never what's actually running in production. So, to actually have visibility on that is really important.
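Since OpenTelemetry keeps coming up, here is a minimal sketch of what "emitting information from your systems" can look like with the OpenTelemetry Python SDK. The service name, span names, and the console exporter are illustrative choices; in practice the spans would go to a collector or a backend such as Honeycomb.

```python
# Minimal OpenTelemetry tracing sketch (Python SDK; the console exporter
# is illustrative -- normally you would export to a collector/backend).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def handle_request(order_id: str):
    # One parent span per request, child spans for downstream work, so a
    # trace shows the path from the web tier down to the database.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("call_inventory_api"):
            pass  # call out to another service here
        with tracer.start_as_current_span("query_database"):
            pass  # run the query here

handle_request("ord-123")
```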
Ashley: Well, you know, Paul, the lines between production and, let's say, test are blurred – to use a phrase, really kind of blurred – and oftentimes when we adopt new technologies or approaches we'll attempt to do what we used to do, just using the new technology to do it. So, we'll test the way we used to. What sort of clicked with me about cloud early on was how Netflix was doing their rolling upgrades and being able to flip back and forth. The same is really true for testing and performance testing, because the availability of resources generally is not an issue, or not the issue it would normally be in an environment where you're not in the cloud, or at least partially in the cloud. So, there's a lot you can do, not just in your current testing environment but by setting up something pretty elaborate, with the resources to run different load tests and performance test scenarios in addition to what you experience in production.
And I think one of the points you made is salient: if you're paying attention to this, you now understand your systems, your applications, and your environment well enough to identify "Well, we may be able to handle the number of unemployment claims we're getting today, but I know when we reach the next peak these are the three things it's going to impact. I know those are the things we have to go adjust or fix or whatever it might be." Because you've already instrumented it well enough and you have the information through observability to be able to do that. So, I think that's a different mindset in how we think about performance testing.
Shimel: Yeah. So, I tell you, I think one of the big breakthroughs of DevOps has been to make our "testing environments" more lifelike, more true to real life. Because prior to DevOps, the world I grew up in, the classic thing was: the developer developed, gave it to the ops guy, the ops guy puts it up and says, "Wait a second, this stuff don't run." And he says, "Hey, developer, this doesn't run." And the developer says, "I don't know, dude. It ran on my machine." And so, there was that disconnect between what the developer wrote and ran it on, or tested it on, versus the real-life production environment. And one of the great things about cloud and virtualization and all of this stuff is that we can theoretically really duplicate our production environments in our test environments, and we can increase loads and do all those things, theoretically, so that there are fewer or no surprises when we go live in production. And look – thank you, DevOps. That's part of what this whole DevOps thing, I think, was.
But I don’t know, is – Christine, is that really – does that dog hunt or is that just DevOps urban myth? You know?
Yen: I think you're right on the ball. I mean, when Charity and I started Honeycomb – that story you told, I was that dev; she was that ops. I did the whole "It works on my machine; I don't know what you're talking about." And when we started Honeycomb we honestly thought we were going to be talking to more of the Charities of the world. We were like "Oh, this is a tool for ops people. They're the ones who understand the pain." And as Honeycomb grew and as observability grew it became more and more obvious: "No, this is for the Christines of the world too." This is for the people who need to understand what happens beyond that testing environment. The boundaries are blurring – or, I like to think of it as dev and production coming closer together – because there are more and more things we just can't test in a test environment. As good as we make the test environment, we just can't.
Shimel: I totally agree.
Yen: And there are so many things that can be learned by thinking of production as part of the development environment. With feature flags we can start to take code that is really not quite ready for production but test it out in a safe way and learn from it. And that's so cool. There's so much for developers to be able to learn when everyone gets over this mindset of "Well, there's a wall between development and production." There isn't. There shouldn't be – between people or the software. And that's where all the exciting parts live.
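A rough sketch of the feature-flag pattern Christine describes: gate the unfinished code path behind a per-user check so it can run in production for a small, known audience. The flag store and names below are hypothetical; a real system would use a flag service or library (LaunchDarkly, OpenFeature, and so on).

```python
# Hypothetical feature-flag gate: the flag store and names are made up;
# the pattern (check a flag per user, fall back to the proven path) is the point.

ENABLED_FOR = {"new-billing-path": {"user-42", "user-87"}}  # e.g., internal testers

def flag_enabled(flag: str, user_id: str) -> bool:
    return user_id in ENABLED_FOR.get(flag, set())

def new_billing_engine(items):
    return sum(price * 0.9 for price in items)  # placeholder "new" logic

def legacy_billing_engine(items):
    return sum(items)  # placeholder "old" logic

def compute_invoice(user_id: str, items):
    if flag_enabled("new-billing-path", user_id):
        return new_billing_engine(items)   # not-quite-ready code, limited exposure
    return legacy_billing_engine(items)    # proven path for everyone else

print(compute_invoice("user-42", [10.0, 20.0]))   # gets the new path
print(compute_invoice("user-99", [10.0, 20.0]))   # gets the legacy path
```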
Shimel: I don’t disagree. I don’t disagree.
Bruce: So, Alan, one thing that you said painted sort of the stage so that we could have part of this conversation. I'm not a big fan of setting up huge staging environments. In the classic world it was like, okay, well, here's production and it uses 36 Oracle back-ends, and if we were going to run a proper performance test, of course we needed 36 on our side. And before the cloud that was incredibly expensive, and you'd fax your IT person and they'd get back to you six months later with the right hardware. And I think there's a myth there – yes, you need to be realistic at some point, especially when you're putting pressure on a system, to answer the final question, which is: am I confident enough that this thing is going to transition – like we talk about in standards, the transition process – and that it's actually going to go smoothly? And not in a waterfall way; just to get away from this notion of "It worked on my machine" and now the ops person is putting out fires and doesn't know how to operate the thing.
But I'm not a big fan of spending a ton of money on this staging system which, even if at a very root level it's exactly the same as the production system, there's always that jerk in the corner going "Yeah, but it's not production." And it's like "Okay, you have a point, but you don't know what goes on." And the point is – I'm going to do my idiot thing and pull in a mathematical concept and probably get it wrong – the concept of the limit. You remember limits? The idea is that we need to be able to express this: are we talking about the very ends of things? No, it's a continuum.
So, there are a lot of companies I work with that actually do have requirements: they go through a particular amount of testing and particular things before they can transition something to production. They don't have the luxury of treating production like a dev system. There are a ton of people in the world like that, and that's not a dysfunction on their part, just because the unicorns can fart rainbows and sneeze glitter and launch whatever they want. I see that happening in Boston all the time in some of the startups, and they do actually put themselves at risk. And even when you don't do that, when you have an extreme amount of maturity and you're working with relatively complicated technologies like Kafka, things can go wrong pretty quickly. And the question is: do you have the people and the process and the technology and the knowledge and the experience and the expertise to handle those things when they do come up in real time?
How do you get there? How do you get to people who know how to use your monitoring and observability tools really effectively? How do you know how close those systems are to your perceived architecture versus what's actually in production – how much delta is there? How often do you exercise the process of disaster recovery? That's another of those classic questions you might ask yourself.
And it gets back to who would be involved when there is a firefight or a last-minute thing, like what we've done with some of the companies I worked with very closely through the NeoLoad side of things. When colleges shut down and finally said, "We are shutting our doors to the people that are paying us money," that was a scary and real and significant flag for the rest of the US education system: "Holy crap. We have to provide virtual learning – not as a stopgap for the past few months; this is going to be a serious reality for us for the next year."
And so, a lot of the vendors that help the education system have the right platforms in place go, "Holy crap. We really need to test for 10X the size that we've ever tested before." So, they're testing in production on nights and weekends, in various different places, and doing it at a smaller pace. But who do they have on the call? Not just the Ghostbusters, but the devs, the ops, people from infrastructure, the DBA teams. They have 20-plus people white-knuckled on the call – like [makes quivering sound] – white-knuckled on the call, and that's a lot of money. And by the way, you're asking them to work from 10:00 PM to 2:00 or 3:00 AM. So they're not going to be available for all those fancy architecture meetings that they should be stakeholders at.
So, when you get to the point where you’re doing those massive things sometimes it’s necessary to go massive. But the vast majority of the exercises, you go to the gym – not so much me, but you go to the gym to exercise. You go and you get on your bike and that’s an exercise. You don’t expect to run a race without exercising first, without knowing that path a little bit before you just run a 5K race. So, why would you expect to do that with your systems and your teams?
And that's where some of the testing, the lower-order performance testing, comes into play. It's systematizing the stuff – making a process that can be executed and re-executed no matter which environment it's pointing to. And the telemetry that you're getting out of it aligns in each of those environments. If all of a sudden we have really great telemetry in a pre-prod environment but we don't have it in prod, that would be a big problem. Why was it put in here and it's not represented in production? What if we need that? Or vice versa: we're only doing things in production, and so now there's stuff that's only in production that has no representation of how it would show up in lower environments.
So, keeping those things in sync, but also keeping the process and the exercised skills in place, is why this isn't just a big-environment-at-the-end kind of situation.
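One common way to keep telemetry aligned across environments, in the spirit of what Paul describes, is to use the same instrumentation everywhere and let a resource attribute identify the environment. A minimal sketch with the OpenTelemetry Python SDK; the service name and environment variable here are illustrative assumptions, and no exporter is configured since it is only a sketch.

```python
# Same instrumentation everywhere; only the resource attributes change.
import os

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

resource = Resource.create({
    "service.name": "orders-api",                         # hypothetical service
    "deployment.environment": os.getenv("DEPLOY_ENV", "staging"),
})
trace.set_tracer_provider(TracerProvider(resource=resource))
tracer = trace.get_tracer("orders-api")

# The same spans now carry a staging or production tag, so a load test in
# pre-prod and an incident in prod can be compared with the same queries.
with tracer.start_as_current_span("create_order"):
    pass  # handle the order here
```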
Shimel: Again, I think "Thank you, DevOps," where we've been able to say, okay, not only do we need those test and dev environments, and can we use them, but as Christine said, with things like feature flags and some of the other technologies and processes we use, it is possible to continue those feedback loops and iterations in real time while in production. And that again is one of the things that allows us to continually update and continually iterate – feedback loops were kind of an original concept of this whole DevOps thing. We're going to have those feedback loops and we're going to iterate and reiterate.
But where we've gone with observability and with performance testing has really just opened the floodgates in how we do these things now. I mean, just think about it – when I first started DevOps.com eight years ago, there were the unicorns who were updating multiple times a day. But for many people in the world, for many organizations, look, moving from once a year to twice a year, twice a year to quarterly, quarterly to monthly was huge. It was huge. It's not so huge anymore. It's common for people to be updating their code daily, or more than once a day, now.
I did an interview this week about a recent survey of developers who claim they have sped up their deployments by over 100 percent in the last year and a half – something like 70 percent of developers said that. Think about it. I mean, a billion here, a billion there, you're talking real money. Seventy percent of developers say they're going 100 percent faster. Wow. Wow. What a golden, great time to be living. And it's because of things like observability, DevOps, and better performance testing. Anyway.
Guys, we’re coming up on our time limit here. I wanted to kind of – let’s wrap it up. Let’s – what advice – for our folks listening in, watching in here, what can we give them? What can we tell them about this nexus of observability and performance that will make their life better? Paul, I’m going to – if you don’t mind, I’m going to put you on the hot seat and ask you to go first.
Bruce: Sure. Well, there are a couple of things I've been working on for a while, and then a couple of things I just want to give a shout-out to. I'll start with – Christine doesn't get to do the pitch thing, but I do. Honeycomb is awesome. You see that thing in play and it's awesome because the team behind it is awesome. You know what I mean? It's kind of a Conway's law situation – the version of it I hear most is that systems are a result of the teams that build them. So, definitely check that out.
Same thing with some of the other tools out there. But don't get too bought into the whole observability hype – don't just Google "observability" and look at the first result, because most people are paying through the nose for SEO placement there. Really, actually look at what's available, and then also dig into the team, the blog, the spirit behind that thing.
The second thing I'd suggest: we just released – I was one of the members of the working group for – a standard on DevOps principles and practices that applies to highly regulated industries. It's IEEE 2675-2021, which we're hoping to get adopted by ISO so it's global as well. And what it does is really align how each of the important processes can be applied continuously through many different life cycles as they're going along. You know, that DevOps eternity symbol means we're all going to be stuck in hell forever – that thing is happening, all the things are happening all the time – but there are precise things that need to go on in highly regulated industries, and even outside them. We've been working on that for four years. And you can reach out to me on LinkedIn or Twitter – although LinkedIn is probably the better way to go – if you want to know more about that.
But essentially it's a body of work that a lot of people have put time and effort into to really say, "Hey, look, this is not about what DevOps is. This is about how to do this in these complicated situations." And you know me; as a performance geek I was involved in the QM/QA, verification, validation, and risk management process pieces, those kinds of things. And I tried to make sure that evidence-based decisions – which is a hook to basically say visibility, telemetry, the proper SLOs in place – are definitely in the standard, along with other things.
So, definitely check those two things out: Honeycomb and the IEEE 2675.
Shimel: Excellent, Paul. Christine.
Yen: Thank you for that plug, Paul. The one piece of advice I'd like folks to internalize: we've talked quite a bit – or I've talked a bit – about developers in production and developers in observability. An asterisk there is that the muscles and the practices should transfer well, but there are a lot of things in the DevOps world in production that are, well, in some cases outright hostile to developers and in many cases just kind of scary and unfamiliar. And so, if this is something that's interesting to you, if you're like "Ah, yes, I would also like to bring development and production closer together," take a step back and look for things like that. Do your production tools primarily talk about AWS instances and CPU and memory use? Those may be things that scare your developers off. How can you make the vocabulary that your production tools use more familiar to developers? How can you help them bring the concepts and nouns that they're used to in testing – customer ID or endpoint or logic, something that feels more familiar to the developer world – into your production tooling as well, so that developers can show up and be like "Oh – oh, this isn't so different"?
I think that there’s a lot of things that are encoded in certain practices because we assume that a certain persona is the one using them. And those walls are blurring – or those walls are coming down; those edges are blurring. There’s a little bit of work left to do to facilitate it.
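Much of what Christine describes comes down to what you attach to your telemetry. Here is a small sketch of putting developer-familiar nouns (customer, endpoint, feature flag) onto spans with the OpenTelemetry Python API; the attribute names and component name are illustrative, not a fixed convention.

```python
# Attach developer-familiar fields to production telemetry so engineers can
# query by the nouns they already think in (customer, endpoint, feature flag).
from opentelemetry import trace

tracer = trace.get_tracer("api-gateway")  # hypothetical component name

def handle(endpoint: str, customer_id: str):
    with tracer.start_as_current_span("http_request") as span:
        span.set_attribute("http.route", endpoint)
        span.set_attribute("customer.id", customer_id)       # illustrative name
        span.set_attribute("feature_flag.new_billing", True)  # illustrative name
        # ... handle the request ...

handle("/v1/invoices", "cust-1234")
```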
Shimel: Absolutely. Hey, Mitchell, you want to bring it home?
Ashley: Yeah, I'm going to wrap it up this way, because there's been some great practical advice – and by the way, both companies have fantastic products, so please check them out. I think we have this mental model, this habit of thinking of things in one state. Matter has four states – well, five if you count the Bose-Einstein condensate. It's not just solid; it is also fluid. I mean, think of our systems. Don't think of them as just software and tools and all the layers of it. Combined together they operate more like a fluid, in a state that's constantly changing, constantly moving. Things are happening. And that causes you to rethink developers and production – doing performance testing in a production environment. And the flow of software through that DevOps toolchain is really part of the fluidity of that state.
So, I think we like to think things – of things like “Let’s make everything a solid so it doesn’t change, so we can look at it, observe it, do something with it.” It’s not really that way. That’s not the world we live in today. But if you think about it as something that’s under constant change like a fluid, then I think we have a much better chance of figuring out better ways, whether it’s through technology, new paradigms, new ways of working, of how to make our software the best it can absolutely be.
Shimel: And at the end of the day that’s what we all strive for. Right?
All right. What a great way to end this, then. This is a wrap. I think it’s episode 13 of DevOps Unbound. But it’s a wrap on whatever number this was because it’s fluid anyway. It’s a wrap on this episode of DevOps Unbound. Many thanks to Tricentis for sponsoring us on DevOps Unbound. We will be back in two weeks with another great episode. Stay tuned. We also have a big roundtable coming up this month; check that out.
Paul, hey man, congratulations on joining the Tricentis team as part of the acquisition there.
Bruce: Thank you.
Shimel: Christine, they’ve all said great things about Honeycomb, but well deserved. Say hello to Charity for us as well.
Yen: I will. Thank you.
Shimel: Keep doing what you’re doing. I hope to see you all soon or a couple months from now in person. Maybe we will do a live DevOps Unbound at some time somewhere. But until then, this is Alan Shimel for MediaOps. You’ve just watched DevOps Unbound.
[End of Audio]