The task of keeping networking straight while the definition of network expands to include both physical and virtual networking has fallen heavily upon the ops-centric side of DevOps, or what I like to call devOPS. What counts as networking has grown, sometimes at a ridiculous rate. While the terms are familiar, managing them in truly virtualized environments is different from traditional hardware management. More agile, but different.
And that means those who deal with networking in DevOps have to know more, manage more and debug more, all while working on other parts of the DevOps toolchain. It has been a slog: increasing capability has come with increasing workload.
There is a confluence of new and different things that promise to set this lopsided progress right. We’ll look at them individually, then talk about the sum of the parts.
For most of us, when we hear IoT, we think of millions of devices slamming data into a centralized service. iPhones, for example, returning information to Apple. Or highly specialized devices such as utility meters that report far more frequently than humans could go out and read them. But IoT is many things, and increasingly it includes networking devices themselves. They hold a wealth of information about the state of the network at any given moment, and that information is increasingly being funneled into APIs that IT or management apps can take advantage of. Considering that some of those management apps are also getting data from application tooling and server health status, there is a wealth of information coming together in one place.
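As a rough sketch of that funneling idea, a management app might poll a device's telemetry API and boil the raw state down to the fields it cares about. Everything here is hypothetical: the endpoint, the field names and the summary shape all vary wildly by vendor (RESTCONF, gNMI, proprietary REST APIs and so on).

```python
import json
import urllib.request

# Hypothetical telemetry endpoint exposed by a network device;
# real devices expose very different APIs and schemas.
DEVICE_URL = "http://10.0.0.1/api/telemetry"

def poll_device(url):
    """Fetch the device's current state as a dict (assumed JSON)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def summarize(state):
    """Reduce raw telemetry to what a management app might want."""
    return {
        "device": state.get("hostname", "unknown"),
        "uptime_s": state.get("uptime", 0),
        # Roll per-port error counters up into one health signal.
        "port_errors": sum(p.get("errors", 0) for p in state.get("ports", [])),
    }
```

The interesting part is the second step: once many devices are summarized into a common shape, a single management app can reason across all of them.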
A networking device can be doing a lot in any given second, and generating data as it works. Servers may be on their way offline when they report in, or sitting idle and ready for more work to be shipped their way. Application bottlenecks might be identified in the wealth of application tooling information, but buried under all the other response data being shipped back. AIOps aims to help the devOPS side deal with these issues by limiting what makes it to human attention and drawing correlations between issues. Finding that an over-burdened network connection is associated with an app's performance degradation is an easy one to talk about, even though it is pretty simplistic. AIOps can make the connection before a human gets involved, and then the staff member can decide what to do about the actual root cause.
It’s not at all perfect yet. We’re not generally talking about AIOps making a ton of decisions, rerouting traffic or spinning up new instances to resolve issues at this point, but it is definitely in the “useful tool” category. The cost in person-hours to analyze and triage problems is already cut by AIOps as it exists today, and that benefit will only grow moving forward.
The biggest problem is the one that bedevils all data interchange, and in fact is the biggest issue in the world of big data: data standardization.
No, my OMG in the headline wasn’t what you thought. It was the Object Management Group (OMG). They’ve been around forever, and have done a pretty good job of developing useful standards, particularly in the area of data interchange. I’ve followed them for a long time, and used their standards since … well, anyone remember CORBA? That was my first set of OMG standards.
Now they’ve started an Artificial Intelligence Platform Task Force (AIPTF). This group will be looking at all areas of AI to set standards, but the interesting bits for this blog will be in data standardization and communication. Standardizing data and how it is communicated is the next big step in this process, and hopefully the OMG will set us down that path. They’re not specifically looking to address infrastructure big data needs, but their work will likely guide AIOps moving forward. And there is huge potential in that movement. If AIOps gets some standards that help it to be more proactive, we may actually end up with auto-fixing algorithms for obvious and easy issues. We already have the systemic ability in container managers to increase the number of instances serving an application when the existing ones get overloaded; if we could expand that type of auto-growth to include networking issues, and consider cores used per instance to find the best solution, the members of the team who are more devOPS would have more time to focus on new development. And that’s a huge plus.
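To make that auto-growth idea concrete, here is a toy policy that scales instances on either CPU or network saturation while respecting a core budget. This is purely my illustration of the concept, not any container manager's actual algorithm; the thresholds and the function itself are invented:

```python
def scale_decision(cpu_pct, net_pct, instances, cores_per_instance,
                   cores_available):
    """Decide how many instances to run next.

    Hypothetical policy: grow when either CPU or the network path
    is saturated, shrink when both are quiet, and never exceed
    the cores available to hand out.
    """
    if max(cpu_pct, net_pct) > 80:
        wanted = instances + 1
    elif max(cpu_pct, net_pct) < 30 and instances > 1:
        wanted = instances - 1
    else:
        wanted = instances
    # Consider cores used per instance to cap the answer.
    max_instances = cores_available // cores_per_instance
    return min(wanted, max_instances)
```

The key difference from today's CPU-only autoscalers is the `net_pct` input: a saturated network link triggers growth even when CPU looks healthy.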
Whether the work at OMG benefits us will depend upon who is involved in the various subgroups they have planned, and on cooperation with other ongoing standardization efforts. Yet I’m hopeful that, as with that original CORBA spec, they will provide a roadmap that others can follow to put us on the path to the much-maligned self-healing network, giving devOPS a chance to keep up while leaving them ultimately in control of what happens. Current iterations of AIOps do this to a lesser extent, but more inputs and more automated analysis of those inputs will only make AIOps more powerful. Standardization will make AIOps tools better able to use the data and stay focused on the specific problem at hand. I see the confluence of all three trends as a huge positive for the industry if we manage to take advantage of it. And some vendor out there will see the potential and take advantage of it for us; we just have to find them and use their solutions.
Meanwhile, paw through the data available to you by hand again, and keep knocking those walls down.