Alan Shimel: Hey, everyone. I’m Alan Shimel, CEO of MediaOps, devops.com, Container Journal, Security Boulevard, and you’re watching DevOps Unbound. DevOps Unbound is sponsored by our friends at Tricentis. So many thanks to them. And DevOps Unbound is a biweekly show where we cover topics of interest to the DevOps audience. I am the host. My co-host is our CTO and CEO of Accelerated Strategies Group, my friend Mitchell Ashley. Mitchell, welcome.
Mitchell Ashley: Thank you. Good to be here as always, and with this illustrious panel.
Alan Shimel: Absolutely. And an illustrious panel, it is. We have two panel members joining Mitchell and me today; let me introduce you to both of them. They’re both great folks in their own right, and I’m gonna let them introduce themselves. Let’s start with our friend Judith Hurwitz. Judith, welcome to DevOps Unbound.
Judith Hurwitz: Thank you so much, Alan. It’s a pleasure and honor to be here.
Alan Shimel: It’s our pleasure.
Judith Hurwitz: So I’m Judith Hurwitz. I’m the co-author of 10 books. Have been in the industry for 30-plus years. Focus on everything from DevOps, security, manageability in cloud, hybrid cloud, and really looking at how you take technology to transform organizations. It’s a complex topic, not easy, but it’s what we’re in the middle of.
Alan Shimel: Absolutely. Then last, but certainly not least, my friend Brian Dawson. Hey Brian, welcome.
Brian Dawson: Hey, Alan. Thank you. Good to be on with you again. To tell the audience a bit about myself. As I’ve told you before, Alan, I’ve been in software development and delivery for about 30 years. I consider myself a technologist and even prior to focusing on DevOps, I’ve had a focus on optimizing software development and delivery.
Excited to talk about AI. I kind of dabbled and dipped my toes in the space during my time at the company that is now PlayStation. Things have come a long way since then, and during that time, I’ve spent the past 10 years of my career focused on identifying, applying and spreading DevOps practices. So I’m excited to discuss the two together here with this group.
Alan Shimel: Absolutely.
Participant: Brian, I’m not sure if you mentioned your present position with the Linux Foundation.
Brian Dawson: I did not, actually, also because the organization I’m at today has a big footprint in the AI and ML space. So today I’m with the Linux Foundation, where I oversee our developer relations and ecosystem development. And of note within that is the LFAI & Data Foundation, which houses a number of impactful projects in this space.
Participant: Thank you. I just thought the folks at LF would welcome that.
Brian Dawson: Well, appreciate it. Thank you.
Alan Shimel: All right. So the topic for today’s DevOps Unbound is artificial intelligence and machine learning, how can they help improve performance of DevOps teams? In other words, what’s their role in the DevOps world? And let me preface our conversation by giving you sort of my view of it.
DevOps for many people was all about automation. What can we automate? Let’s automate everything we can, automate, automate, automate. And obviously in discussions around automation, topics such as artificial intelligence and machine learning, you would tend to think help with automation. And therefore anything that helps with automation is good for DevOps.
And I should also say that we lump AI and ML together, like they’re Siamese twins joined at the hip, and they’re not necessarily conjoined. You can have AI without ML and you can have ML without AI, at least I think so. I’d be interested in your thoughts.
But has it lived up to the hype? Will it ever live up to the hype? Was the hype unfounded? Was it unrealistic? What role has AI and/or ML to this point played on DevOps and what role will it play going forward? I think that’s our topic today. I mean, any one of you, if you want to kick it off with your own feelings or respond, go right ahead.
Brian Dawson: Well, I’d like to jump in as a troublemaker early on and say –
Alan Shimel: Okay.
Brian Dawson: – I challenge Alan’s premise that DevOps is really about automation, which is kind of the basis for the tie-in. If we look at it, DevOps is really a set of cultural practices and tenets that align dev, ops, and other software delivery stakeholders around the shared objective of delivering quality software reliably, rapidly, and repeatedly.
Now the reliably, rapidly and repeatedly component, absolutely lends itself to some of the benefits that we could mine from AI. But I actually think it’s a great opportunity if we don’t say DevOps equals automation equals AI, but rather we say DevOps equals a culture aligned around a shared objective and how can AI and ML help support those shared objectives.
Judith Hurwitz: So I think you make a great point, Brian. I think one of the most important issues is, and you mentioned, culture. We have been, for what, 50 years, working through this development and operations perspective on how we get things done. And the promise of AI is, “Okay, you can push a button and it will take care of everything.” And this promise has been there for, what, 20 years.
The reality is, it’s just not that simple. There are definitely things that we are doing today, and that we are seeing evolve, that are definitely helping the developers and the operational professionals. For example, if you have repeatable functions that happen all the time, where if you press a button a certain task should happen, you can probably use a model, trained by collecting massive amounts of data, to do that automatically.
And that’s a very good use of AI, and it will work today. They call it MLOps or AIOps, and there’s a good reason why that’s what we’re focused on today: you’re looking for predictable patterns and predictable anomalies so you can avoid making stupid mistakes, that, “Oh my God, why did I do that? I knew what the right answer was.”
But AI is not a panacea. And I think if we look back even two, three years ago, you had companies that all were saying, “Okay, we have automated, we’ve put AI into DevOps and all you have to do is press button and all of your problems are over.” That’s just not reality.
Alan Shimel: Yeah.
Mitchell Ashley: Yeah, there’s the marketing AI and ML, and then there’s real AI and ML. Marketing, meaning everybody gloms onto the term and uses it, when really it’s a case statement or an if statement in the logic of people’s software. I started doing some work in AI in the 80s and 90s, programming in Lisp and Prolog, and doing some corporate education in that area, some expert systems with triage, things like that.
So I dabbled a little bit there. And then more recently, about five, six years ago, I worked with a gentleman named Dr. Bernardo Huberman, who’s one of the experts in the field. And I asked him to help me understand why machine learning has taken off as this sort of – he described it as a subset of AI, so you know that was his model, anyway.
What he told me really made sense, and it kind of helps me understand where we can apply machine learning, that being kind of the most popular part of AI. He said, “Machine learning takes massive amounts of data.” And we have the cloud and all these applications creating data, a lot of data exhaust, and now people are going back and mining that data.
So we have this whole area – Judith, you’ve written in your books about data analysis leading up to AI – and machine learning is great for that, because you can have supervised or unsupervised algorithms. Supervised, meaning that’s a cat, that’s a cat, that’s not a cat; or unsupervised, which is gonna pore through the data and start to look for those patterns and trends and anomalies that you were talking about, Judith.
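[Editor’s note: a minimal sketch of the supervised/unsupervised distinction Mitchell describes. All data, labels, and thresholds below are invented for illustration; real systems would use a proper ML library.]

```python
def train_supervised(samples, labels):
    """Supervised: learn from labeled examples ("that's a cat, that's not a cat").
    Here we simply learn the mean feature value per label (nearest-centroid)."""
    groups = {}
    for x, y in zip(samples, labels):
        groups.setdefault(y, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in groups.items()}

def predict(centroids, x):
    # Classify by the closest learned centroid.
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def find_anomalies(samples, z_threshold=2.0):
    """Unsupervised: no labels at all; flag points far from the overall pattern."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    std = var ** 0.5 or 1.0
    return [x for x in samples if abs(x - mean) / std > z_threshold]

# Supervised: labeled response times (ms) for "ok" vs. "slow" services.
model = train_supervised([10, 12, 11, 90, 95, 100],
                         ["ok", "ok", "ok", "slow", "slow", "slow"])
print(predict(model, 14))   # → ok

# Unsupervised: one build time stands out from the rest of the data.
print(find_anomalies([30, 31, 29, 30, 32, 30, 31, 120]))   # → [120]
```

The point of the sketch is only the shape of the two approaches: one learns from labels you supply, the other finds structure in unlabeled data on its own.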
So when people talk about using AI or ML in our industry, I always think about, well, where is there lots of data, and can we leverage it in some impactful way? So as I look at products or technologies that claim they’re doing that, that’s at least one criterion to ferret it out. Is there something real there, or is it more spin and fluff? We all have case statements in our software, right?
Alan Shimel: Absolutely. Actually, Brian, it looked like you were gonna say something, I didn’t wanna jump on you.
Brian Dawson: No, Alan. I mean, of course I have a lot to say, but nothing in particular, I just, since I’m off mute, I’ll say I absolutely agree with Mitch and Judith and underscore the points that they’ve made.
Alan Shimel: Yep. You know what, I wanna take a moment though, and explore, Brian, you disagree DevOps isn’t about automation. And I get the whole cultural aspects of DevOps and all of that, but certainly automating as a way to be more efficient, to get more done faster, I think is part and parcel of the DevOps mindset.
Brian Dawson: Yeah. And if I may, without waiting to see where you wanted to go with that, I’d say, no, it is absolutely important, but there’s a line Jez Humble would use in talking about CD. And frankly, I think the book that Dave Farley, Jez Humble and team wrote around continuous delivery is undercelebrated and under-referenced.
And in talking about that, one of the things he said is that continuous delivery – and I’m gonna say, by extension, some aspects of DevOps – doesn’t require any tools. I can do continuous delivery with a bash script. The catch is, it’s not the most efficient. In pursuit of effective collaboration and delivery velocity, so you can iterate and, as we’ve talked about in the past, establish that control feedback loop, you need things like automation to achieve those goals.
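[Editor’s note: a toy illustration of Brian’s point that a delivery pipeline is conceptually just ordered stages with a feedback signal on failure; no special tooling is required. The stage names and commands are hypothetical placeholders, rendered in Python rather than bash for consistency with the other sketches here.]

```python
import subprocess

# Hypothetical pipeline: each stage is a name plus a shell command.
STAGES = [
    ("build",  ["echo", "compiling..."]),
    ("test",   ["echo", "running tests..."]),
    ("deploy", ["echo", "shipping to prod..."]),
]

def run_pipeline(stages):
    """Run each stage in order; stop at the first failure.
    The early stop is the control feedback loop in miniature."""
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return f"FAILED at {name}"
    return "delivered"

print(run_pipeline(STAGES))   # → delivered
```

A real CI/CD system adds triggers, parallelism, history, and visibility on top of this loop, which is exactly the efficiency Brian says the bare script lacks.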
I just think sometimes when we talk about what we see going on with AI and ML, and in those terms, we overload a principle or practice with expectations that are core to it. And I worry a bit about that, so I wanna call it out. And sorry, Alan, I don’t know if that’s where you wanted to go with the commentary.
Alan Shimel: No, no, no. I wanna go wherever you wanna go, Brian. It’s a collective. Judith, what about – I’m sorry, go ahead.
Judith Hurwitz: Oh, yeah. So, Alan, I think one of the key issues is the whole area of continuous delivery. In previous generations, you would build an application and it would live pretty much as it was written, possibly for 10 years. Today, applications are constantly having to be revised because customers change, partners change, the sources of data change.
So, unless you have the ability to constantly update and change and modify, you lose out. One of the values of using automation, and of using AI and machine learning models with lots of data, is to support the ability to do this. You don’t have enough hands and enough brains to anticipate where problems may occur, because you’ve changed things.
How many times have we seen problems occur because somebody has added a new service to their environment and somebody forgot to change a configuration file? Now that’s not something that you need a massive brain to be able to do, but people get busy and they forget to do simple things.
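[Editor’s note: Judith’s forgotten-config example is exactly the kind of check that is trivial to mechanize. The sketch below is purely illustrative, with hypothetical service names; the idea is simply to diff what is deployed against what the configuration knows about.]

```python
def find_missing_config(deployed_services, config_entries):
    """Return services that are running but absent from the config file,
    catching the 'added a service, forgot the config' mistake mechanically."""
    return sorted(set(deployed_services) - set(config_entries))

deployed = ["auth", "billing", "search"]          # what's actually running
configured = ["auth", "billing"]                  # what the config file lists
print(find_missing_config(deployed, configured))  # → ['search']
```

No massive brain required, as Judith says; the value is that the machine never gets busy and forgets to run the check.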
Mitchell Ashley: I think, Judith, that’s really a good point, ’cause I wanted to ask you about this in the software creation process. Given things are so dynamic now and constantly changing, understanding just the environment, the infrastructure as code, all the way up through the application, and how much all of that is changing, it seems like stepping in for humans in certain conditions in that environment is a great application of AI.
For someone to synthesize all the factors that might go into where a problem exists or what might be causing a problem, seems to be where some of those algorithms might be helpful. Do you agree with that? Has that been your experience of what people are thinking about for AI?
Judith Hurwitz: Yeah, definitely. I think this is where sort of the human factor comes in. You set it up so that if the printer’s turned off, it doesn’t send me a five-alarm emergency: “There’s a problem with the printer.” We all have gotten used to that. But when there are problems that you’ve never seen before, that you don’t have data on and the model doesn’t take into account, then the AIOps or MLOps gives you a message: “There’s something strange going on. I don’t know how to fix it. What do you want to do here?”
And so over time, when that appears again, well, now you have data: this occurred once before, maybe it was an anomaly; then twice. And when you have enough data and enough experience over time, you build that into the model, so the next time it occurs, you make a fix.
On the other hand, you don’t want the system to say, “Oh, I know what that is, make a fix,” when it turns out, no, no, no, you don’t understand the context of what’s going on here. And just because there’s a correlation doesn’t mean that there’s causation.
Mitchell Ashley: It’s good to know, ’cause it’s not always about automating a response to it. I know in the security world, we’re very cautious, we’re skeptical about those things happening, like blocking legitimate traffic. And in the financial world, that’s a huge issue that we run into.
So you pointed out a good scenario, where we see repetitive patterns over time, and that can help machine learning algorithms understand, “Okay, that’s what that pattern looks like,” so you can identify what it is next time, instead of it just being an anomaly.
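[Editor’s note: the escalate-then-automate loop Judith and Mitchell describe can be sketched very simply. The event signatures, the promotion threshold, and the fixes below are all invented for illustration; a real AIOps system would learn these from operational data rather than a hand-set counter.]

```python
from collections import Counter

KNOWN_FIXES = {"printer_offline": "ignore"}   # patterns we already trust
PROMOTE_AFTER = 3                             # sightings before automating
history = Counter()

def handle_event(signature):
    """Known patterns get an automated response; unknown ones escalate to a
    human until they have recurred enough times to trust an automated fix."""
    if signature in KNOWN_FIXES:
        return f"auto: {KNOWN_FIXES[signature]}"
    history[signature] += 1
    if history[signature] >= PROMOTE_AFTER:
        KNOWN_FIXES[signature] = "apply_learned_fix"
        return "promoted to automated fix"
    return "escalate to human: something strange is going on"

print(handle_event("printer_offline"))        # → auto: ignore
print(handle_event("disk_latency_spike"))     # first sighting → human
print(handle_event("disk_latency_spike"))     # second sighting → human
print(handle_event("disk_latency_spike"))     # third sighting → promoted
print(handle_event("disk_latency_spike"))     # now handled automatically
```

Judith’s correlation-versus-causation caution applies here: in practice the promotion step would require human sign-off, not just a counter.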
Brian Dawson: Yeah. It’s interesting, Mitch, that you say that, going back to the original starting topic about AI and ML not really achieving its promise yet. And as you called out, getting data is an issue; it’s not necessarily applicable to every space. You need a level of determinism and predictable patterns that you can learn from and build on.
And I have for a long time been excited that when we look at CI and CD, when we look at, Alan, the automated workflow of software development and delivery, or even if it’s not automated, kind of the standard workflow, there are a couple of things that you do have. You’re doing builds, sometimes thousands of times a day, builds and deployment, getting that out into prod. You could be generating a ton of data.
It’s also, in its nature, that you’re striving to achieve a level of consistency and repeatability in how you deliver software. And I’ll say I’m not an expert to the level some of the people on this episode are, but at a very sort of top level, I get really excited about the opportunity for AI and ML to help better CI and CD, in line with the principles of DevOps.
What I see it really helping with is reducing cognitive load so developers can focus on coding, innovating and solving problems, while helping ensure the quality and stability that’s difficult to maintain while you’re moving fast. So I just thought I’d call that out.
And I am curious to see when vendors and others really start to do what our friend Kohsuke has done with Launchable, and really apply AI and ML to the left-hand side, or dev side, of the process to reduce the load and ensure quality and stability.
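[Editor’s note: a hedged sketch of the shift-left idea Brian points at, in the spirit of predictive test selection: rank tests by how often they historically failed when the same files changed, and run the riskiest first. The failure history below is invented; a real system, such as the Launchable product Brian mentions, would learn this from CI records rather than a hard-coded table.]

```python
from collections import defaultdict

# Hypothetical history: (changed_file, test) -> past failure count from CI.
FAILURE_HISTORY = {
    ("parser.py", "test_parser"): 9,
    ("parser.py", "test_cli"): 2,
    ("db.py", "test_migrations"): 7,
}

def rank_tests(changed_files, all_tests):
    """Order tests so the ones most likely to catch a bug in this change run
    first, giving the developer the fastest possible feedback signal."""
    score = defaultdict(int)
    for t in all_tests:
        for f in changed_files:
            score[t] += FAILURE_HISTORY.get((f, t), 0)
    return sorted(all_tests, key=lambda t: -score[t])

print(rank_tests(["parser.py"],
                 ["test_cli", "test_migrations", "test_parser"]))
# → ['test_parser', 'test_cli', 'test_migrations']
```

Even this crude frequency count illustrates the payoff: the feedback loop tightens without running the full suite on every commit.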
Alan Shimel: Guys, we’re all lumping AI and ML together: AI and ML, AI and ML. Does it have to be AI and ML? In my experience, isn’t a lot of what we call AI and ML much more on the ML side than on the AI side? So, is it fair to call it truly AI, or is it really ML? And maybe ML is for today and AI is for tomorrow. I don’t know.
Judith Hurwitz: So I think there’s definitely a problem of nomenclature here. What we’re really dealing with primarily now is models and modeling data and creating models from data. That’s the reality of today. I think that there is a lot of misuse of the term AI and there have been some absolutely wild predictions. I can’t remember the name of the computer scientist who predicted that he would be able to replicate the human brain with AI.
Mitchell Ashley: Marvin Minsky, right?
Judith Hurwitz: No, this was past Minsky.
Mitchell Ashley: Oh, was it after? Okay.
Judith Hurwitz: Yeah, no, I mean, this was in the last five years.
Mitchell Ashley: Oh, okay.
Judith Hurwitz: So, over time, you’re always gonna be dealing with models. You’re never gonna get rid of the models. AI is sort of the next – models are a subset of what you eventually achieve with AI, but AI is really a concept that will probably take decades to evolve. And I think one of the problems we have right now is with vendors, and I’ve talked to hundreds of them over the years, who say, “We have an AI application.”
It’s because it’s a hot buzzword and you can look back and in the history over the last 30 to 40 years and see whatever the hot topic is, all of the vendors say we do that. So I think that that’s one of the problems that businesses are facing right now. What is the difference? What does it really do? How does it make your company better? How do you use this technology to be prepared for change?
Mitchell Ashley: I think that’s a good way to break it down too, because the early days of AI were about emulating human thinking. We’re still a long way from – I’m not sure – all my intelligence is artificial, by the way; I acquired all of it. I don’t think I was born with any of it. But that was kinda where it started. And then AI became largely about expert systems, and then machine learning algorithms.
Machine learning algorithms are really where I think most of the activity is today, because of that prevalence of data. It seems like most organizations are faced with sort of one of three strategies: what do we look for from our vendors, and how might they use AI, or machine learning, in a meaningful way that’s gonna help my business?
Do I build models myself from the data, like you were talking about, Judith, to apply to, say, a financial-analysis kind of situation, more of an expert system? Or do I use machine learning algorithms in my own software to do interesting, valuable things that my software can do?
And maybe you play in all three of those or a subset of those, but that seems to be the question I think most individuals or organizations are at. We’re not at a place where we can go build models, but we’re looking for these capabilities either in our own code or in third party products.
Judith Hurwitz: So I heard an interesting story from one of my clients a few years ago, maybe five years ago. He had a client that was very gung-ho about AI. So he went out and he hired five data scientists, paid them a million dollars each, gave them their own space and left them alone; these are the smartest people on the planet. He came back in six months. Didn’t wanna bother them, they’re so smart.
“All right. What have you found out? Have you written this application?” And they said, “Well, we’ve been discussing this for six months, and we have determined what algorithm we’re going to use.” The point is that they worked in isolation. They thought they were the smartest people on the planet. They did not talk to people on the business side about what business problems they had.
They didn’t talk to the people who understand corporate data. They didn’t ask them, “What data do you actually have? What data do you need?” They didn’t talk to people who knew the business strategy or the business processes that were in place or needed to change. So it really is a team sport.
And I like Brian’s discussion in the beginning about culture, because it really is about sort of hybrid organizations where you have people, you have leaders that know a little bit about all of these areas, and then a team that’s brought together that can work across these areas.
Mitchell Ashley: Sure.
Brian Dawson: Judith, can I ask, do you have any recommendations for DevOps teams on how to evaluate or investigate where ML is a solution? And I ask this based on an observation, we’ll see if we agree, that oftentimes people are looking for a problem to apply the solution to, as opposed to looking at ML as a solution to a problem they’ve identified. I’m just curious if you [crosstalk].
Judith Hurwitz: I violently agree with you.
Brian Dawson: Okay.
Judith Hurwitz: So I think, and a lot of times, people get so enamored with a new technology that they look at it as a way to solve all problems. And for a DevOps team, I think we are finally getting to the stage where there’s really reality in DevOps. It’s not the developers saying, “That’s not my problem. The operations people have to make this work.”
There’s really beginning to be this collaboration between development and operations. And this shift left is definitely becoming real. So that’s definitely true, but for these teams to be successful, they have to have a holistic view of, where is your business going? What do they actually need? Why have they come to you and say, “We gotta do something?”
Is it just because it would be cool to do something and spend money, or is there a real rationale behind it, a real need? What’s the pain that’s out there that they can solve? So, they have to start with that. They have to start collaborating not only between developers and the operations team, but with the business leaders, with the people who understand all of the data, with people who understand security. So, all of this comes together in very much a holistic pattern.
Alan Shimel: Fair, fair. Hey guys, we’re way past halfway through here, and I wanted to kind of turn our conversation to the future. We’ve had a discussion on sort of the history of AI and ML and lessons learned, et cetera. But when we look forward – Brian, you mentioned the Linux Foundation has a, I dunno if it’s a daughter foundation or a subgroup, dedicated to AI and data technologies.
What do we see when we look sort of near term future? Forget, yes, one day, we’ll mimic human brain patterns, who knows if we’ll be alive by that, but near term, what do you see? What is Linux Foundation planning for?
Brian Dawson: Well, looking forward – and it’s funny, ’cause you brought me in and I started to think about the impossible dream of what’s gonna happen in the future. I will call out, and will mention again as I said at the start, that the Linux Foundation is a parent foundation of what we call Linux Foundation open source projects that host other projects.
So LFAI & Data, you can call it a sub-foundation of the Linux Foundation, because they host, I believe at this point, 12 graduated projects with about 20 to 30 projects in incubation. So we’re talking 30 to 50 projects under the LFAI & Data umbrella that are all working on various aspects of shared efforts, with multiple large commercial organizations, as well as standards bodies, coming together to drive the future of AI and ML.
What I do see coming out of the LFAI & Data Foundation in the short term is standards and foundational implementations, i.e., moving beyond the sort of rudimentary discovery around AI, building sort of packaged or gray-box implementations that everybody agrees and collaborates on, which I think will help unlock the less initiated, less expert vendors to begin to deliver truly AI-based differentiating capabilities.
So, to put that in short, because that was a lot of words: I think it’s about establishing a foundation for us to start to build on and accelerate our progress within the AI and data space applied to DevOps.
Judith Hurwitz: And so, Brian, I think that you are 100 percent right. And that’s when commercialization really happens, is when we have those standards and when everybody agrees to use those foundational services. I think the challenges that we face are, how to get the commercial vendors who don’t necessarily want to –
They want to give lip service to these standards, but they really want you to only use their version of the “standard,” so that customers never leave. It’s that stickiness factor that they are looking for. So, I think it’s a hard journey.
Brian Dawson: Yeah. Well, I think, Judith, you actually nailed the reason the Linux Foundation exists, frankly. And if we look at Kubernetes as a model, it was hard. Google could have easily said, “We’re not going to hand this over to the Linux Foundation. We’re gonna dominate this space. We are going to be the modern cloud OS.”
But they understood that for it to gain traction, for it to grow and truly offer benefit industrywide, they had to hand it over to the Linux Foundation to grow and manage. They had to bring in Microsoft. They had to bring in Amazon to play in that space. And I see LFAI & Data serving that same role, what I tend to call unlocking innovation, or, as our tagline is, decentralized innovation.
And I would beg to say that that is, we didn’t call it out directly, but, one of the challenges that we’re seeing in the AI and data space: if there’s not short-term monetization, we’re not gonna do it; and if there is, then we wanna own it. So can we create an impartial playing field for everybody to come in and innovate together, and then build commercial solutions off of that?
Judith Hurwitz: Yeah, it’s the challenge. It really is. And I think Kubernetes is a great example. I think data in some ways is more complicated, because data’s really the crux –
Brian Dawson: Yeah. Well, I know everybody wants to own it. Everybody wants to own the data.
Alan Shimel: No doubt about it.
Mitchell Ashley: There’s discussion that AI needs to become an engineering discipline as opposed to this sort of edge specialty. That doesn’t mean it’s necessarily gonna apply everywhere, but as for how we apply DevOps, or, I’m sorry, AI, to certain kinds of problems, it seems to me the ripest areas are those highly complex environments.
So there’s more and more infrastructure, more of it is automated, more things have gone digital. How do you manage the infrastructure, or manage the triage and problem-solving? And then of course, I think in other areas, people will continue to find niches where AI can be applied to gain competitive advantage in a certain domain or space.
And it seems like that’s the trajectory we’re on for quite a while. I don’t think it’s ever gonna be AI takes over everything, and it’s the thing that replaces DevOps or whatever. But it seems to me kinda a tool, but – Sorry, go ahead, Judith.
Judith Hurwitz: No, what I was gonna say is, it’s very interesting. If you look at the spectrum, it runs from healthcare at one end to, for example: I can automate certain DevOps functions that are repeatable, where I can identify patterns, and as I collect more data, I can automate more things.
Then you have something like healthcare, which in terms of orders of magnitude of complexity is just huge. And what I’m seeing is that a lot of the vendors who thought, “I’m going to tackle healthcare with AI and we’re gonna own the industry,” a lot of them are getting out of the business, because right now it’s just too hard, and it will be too hard for quite a while.
Mitchell Ashley: It’s true of the medical industry in many ways. It’s a hard nut to crack.
Brian Dawson: Yeah. If I may start to dream a little and chime in based on something you said, Mitch: I do see that in the near term, when we talk about modern software development and delivery and the rapid pace at which we’re delivering change, we’re building on inventions and progress made over decades. Library reuse is really heavy.
And we are getting to a point where, as we build more complex systems, we have to figure out how we can outsource sort of the maintenance and management, for want of a better word, to use it kind of grossly, to solutions like ML. How can ML help us continue to improve, grow and build on what we’ve done, but manage the scale and complexity?
And I think we’ll move from some standardization, foundational blocks, to applying more ML in both the operations and development space to manage that complexity, to maintain stability. But then I also eventually see the next stage being: now, how do we apply ML and surface it to a developer at the time of a commit?
“Here’s the expected outcome of this change,” to help provide guardrails for the developer. The end state, or I wouldn’t even say the end state – I think we’d agree, existentially, there’s never an end state here – is that we’re at a point where we don’t even have explicit CI/CD pipelines. We can commit code to a repository, and the language can be inferred.
We can give cues or signals to infer where we want it deployed, how we want it deployed. And I actually see ML, and to an extent layers of AI, just figuring out what to do with code at rest. So if you flash forward 10 years, and we’re truly applying these two technologies at scale in the cloud, I just change a line of code.
That change automatically is delivered to production. And AI/ML is helping us do that. So that’s sort of my dream. I don’t hit compile, I don’t hit build, I don’t build out stages and workflows, I just change code, and that’s running in a system somewhere.
Judith Hurwitz: So, don’t you also want to have the ability, before it acts, to look at what’s happening? Don’t you wanna, at least at this stage, have it say, “Are you sure you wanna do that? Are you sure you wanna delete that file?”
So until we get to that point, because you have to be able to trust that the system is smart enough to understand, if I make this change, what will that unleash? Because we’re not dealing with a perfect world in DevOps, by a long shot.
Brian Dawson: Yeah, I do want that. I want protection. I want it to know where the vulnerabilities are and warn me. And I do think, Judith, that would be a prerequisite to this, deploying code at rest, or truly applying AI to dev.
Alan Shimel: Fair enough. Hey guys, we are just about out of time. I think this was a great discussion. If there’s one thing I could take out of it, though, it’s that the crystal ball remains cloudy in terms of how this is all gonna play out, and we’re gonna have to wait for it to kind of come into focus. But it certainly will be an important part of software development and of operations going forward. And it’s gonna be an increasingly important part.
Brian Dawson: [crosstalk]
Mitchell Ashley: I think the crystal ball is a Magic 8-Ball: ask again later.
Alan Shimel: That’s a good one, Mitch. But for now, we’re gonna call an end to this episode of DevOps Unbound. Again, thanks very much to Tricentis for their sponsorship. Thank you so much, Judith. Thank you so much, Brian, for appearing here and hope to see you on a future episode. Mitch, as always, great job, riding shotgun with me.
Mitchell Ashley: Good to partner with you. Absolutely.
Alan Shimel: Absolutely. But this is it for this episode of DevOps Unbound. This is Alan Shimel, have a great day, and we hope to see you soon on another DevOps Unbound, as well as don’t forget, every month we do a live round table open to you, our audience, with questions. So stay tuned for that as well. Take care, everyone. Bye-bye.
[End of Audio]