Change management, let’s face it, is usually a checklist item and a CYA tool. But in the world of DevOps, where change is part of the culture and the process, change management needs to be more than a cost center; it should be a way to improve what you are doing. And there is no excuse not to make it one.
Most of the time, organizations look at change management as a way to spot problems AFTER they happen. It becomes a tool to respond to change rather than to leverage change, and regrettably the response usually comes from the governance perspective. Using change management to identify why a change occurred, or as a way to evolve with the organization (as defined in ITIL and ITSM), is almost never done.
This is extremely short-sighted. In the world of DevOps, which embraces change as a way to iteratively improve processes, change management is usually viewed as something to avoid, or simply as an annoyance. But in most enterprises, you cannot avoid it.
Fortunately, change management the tooling can be detached from change management the governance, and once detached it is no longer tied to the reactive philosophy. The tools already have the functionality to be used proactively to improve processes.
Pipeline, Measure, Observe
As I mentioned in this post, the delivery pipeline is the signature of your DevOps process. It should be documented, created deliberately, and iterated on just as you do with code. When the pipeline is viewed as something living, you move from the reactive to the proactive.
You can do this by attaching log analysis, automated load testing, automated functional testing, and configuration monitoring to the delivery process. These are your application change management tools. But they are more than a way to track the tail end of application delivery: their output should be reviewed at the end of every release, or at least most of them, and the observations should be used to improve the delivery process itself. Of course, how you observe needs to be clearly defined, with tests and dashboards created ahead of time.
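To make that concrete, here is a minimal sketch of what recording pipeline observations can look like. The file name, release IDs, and stage names are all made up; in practice you would pull these numbers from your CI server or the tools above.

```python
import json
import time
from pathlib import Path

METRICS_FILE = Path("pipeline_metrics.json")  # hypothetical local store

def record_stage(release_id, stage, duration_seconds, error_count=0):
    """Append one observation for a pipeline stage to the metrics log."""
    history = json.loads(METRICS_FILE.read_text()) if METRICS_FILE.exists() else []
    history.append({
        "release": release_id,
        "stage": stage,              # e.g. "regression_tests", "prod_release"
        "duration_s": duration_seconds,
        "errors": error_count,
        "recorded_at": time.time(),
    })
    METRICS_FILE.write_text(json.dumps(history, indent=2))

# Called from the pipeline after each stage completes:
record_stage("r42", "regression_tests", duration_seconds=1840)
record_stage("r42", "prod_release", duration_seconds=95, error_count=1)
```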
By doing so, it should become clear where the weaknesses occur: errors, slow performance, changing variables. For example, a slow regression test, or a manual production release process that takes too long from the point the release is ready to when it is flagged for delivery. If you only look at each release in isolation, all of that knowledge is lost; you are really performing fast waterfall, not DevOps.
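With a few releases of history recorded, a simple trend check over that data can surface process weaknesses that no single release would reveal. The window size and threshold here are arbitrary illustrations.

```python
from statistics import mean

def flag_slowing_stages(history, window=5, threshold=1.25):
    """Compare each stage's recent average duration to its long-term average.

    Returns stages whose last `window` runs are `threshold`x slower than
    their overall history: a process weakness, not one bad release.
    """
    by_stage = {}
    for obs in history:
        by_stage.setdefault(obs["stage"], []).append(obs["duration_s"])

    flagged = {}
    for stage, durations in by_stage.items():
        if len(durations) <= window:
            continue  # not enough history to call a trend
        recent = mean(durations[-window:])
        overall = mean(durations)
        if recent > overall * threshold:
            flagged[stage] = {"overall_avg_s": overall, "recent_avg_s": recent}
    return flagged
```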
The other type of observation is comparing steady states to new states. This could be done in a log analysis tool like Logentries, using its live tail feature to compare historical states of the application to the current one. Or with a perceptual-diff tool like Applitools, which shows you the visual difference of the application over time and identifies steady states. Or in an infrastructure monitoring tool like ScriptRock’s GuardRail, where you set up a policy for infrastructure steady state and get notifications when the infrastructure stops matching the policy.
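Conceptually, the steady-state policy check boils down to something like this sketch. The policy keys are invented; tools like GuardRail do this against real node scans, at scale.

```python
def check_policy(policy, observed):
    """Return the keys where observed infrastructure state drifts from policy."""
    drift = {}
    for key, expected in policy.items():
        actual = observed.get(key)
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    return drift

policy = {"nginx_version": "1.8.0", "ulimit_nofile": 65536, "ntp_enabled": True}
observed = {"nginx_version": "1.9.1", "ulimit_nofile": 65536, "ntp_enabled": True}

for key, diff in check_policy(policy, observed).items():
    print(f"DRIFT {key}: expected {diff['expected']}, got {diff['actual']}")
```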
Take all the observations and make decisions about the process, without being myopic about only the code or a specific release, as we are used to doing.
If you are moving fast enough, and are results-oriented enough, change will happen. And it is an opportunity. Using these events to respond is one thing; using them to improve over time is another.
Faster, Faster
For example, what if you see that while you have been releasing more features, your time to deploy has increased substantially? Everything slows down, perhaps because you are releasing too much each time, and a snowball effect begins: longer regression testing, longer release downtime, slower responses to new bugs. Or take a very similar scenario where your application gains more functionality but performance slows down substantially, as reported by a load testing tool like BlazeMeter.
In the first case, the results tell us we need to chunk our feature releases into fewer features per release, perhaps with more frequent releases. In the second, they tell us we need to take a look at the efficiency of our code, and perhaps mount a dedicated performance effort: for example, making the components in the business logic and view layers more compartmentalized by running them on separate threads.
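As a toy sketch of that last idea, with invented handlers and timings, running the two layers concurrently means the slower one no longer gates the whole request.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_business_data(request):
    """Business-logic layer: stand-in for DB and service calls."""
    time.sleep(0.2)
    return {"items": [1, 2, 3], "user": request["user"]}

def render_view_shell(request):
    """View layer: build the parts of the page that don't depend on the data."""
    time.sleep(0.2)
    return "<html>...shell for %s...</html>" % request["user"]

def handle(request):
    # Run the two layers concurrently instead of serially.
    with ThreadPoolExecutor(max_workers=2) as pool:
        data = pool.submit(fetch_business_data, request)
        shell = pool.submit(render_view_shell, request)
        return shell.result(), data.result()

print(handle({"user": "alice"}))
```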
In either case, the information tells us not just that something happened, but that we need to improve something in our delivery process. It is not an end; it is a means.
Taking it one step further
Some change management tools even take it a step further. The killer feature in ScriptRock is that you can export configuration management scripts after a change. So imagine this: a developer, unknown to you, the sysadmin, makes a major change to the frameworks in the integration environment. You become aware of it because you receive a notification about a broken policy. These changes impact the infrastructure in a big way and need to be replicated to dev, test, and prod.
After you talk to the developer, you realize these changes are for the better and should be kept. But, hell, now you have to replicate them in the gold-master infrastructure. No sir, or madam: from ScriptRock you can export a new Puppet or Chef script that includes the changes. One click, done.
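I won’t claim to know ScriptRock’s internals, but the idea is roughly this sketch: take the detected drift (like the policy check earlier) and emit a config management snippet that captures the new state.

```python
def drift_to_puppet(drift):
    """Illustrative only: turn detected package-version drift into a
    Puppet-style snippet, the way an export feature might."""
    lines = []
    for key, diff in drift.items():
        if key.endswith("_version"):
            package = key[: -len("_version")]
            lines.append(
                "package { '%s':\n  ensure => '%s',\n}" % (package, diff["actual"])
            )
    return "\n\n".join(lines)

# Reusing the drift found by the earlier policy check:
print(drift_to_puppet({"nginx_version": {"expected": "1.8.0", "actual": "1.9.1"}}))
# package { 'nginx':
#   ensure => '1.9.1',
# }
```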
Features and tools like these are taking change management from project checklist item and CYA tool to killer weapon for understanding and improving your delivery processes, and ultimately improving the quality of your applications. So stop thinking about change as something to respond to, and start thinking of it as a way to make things run better.