Change consistency (the predictability of how change is deployed) and change transparency (visibility into the state of a change at any stage of the software development life cycle [SDLC]) are two key benefits of embracing a highly functioning, enterprise-class set of DevOps services. Most organizations are attempting to transform themselves from a widely varied, manual delivery system to the automated, template-based delivery model that DevOps relies upon.
Along this journey a key fact soon emerges: we cannot allow “manual” tweaks to our environments and still maintain the predictability, visibility, or stability we want. To be truly consistent, the process must be fully automated. A process that relies upon human interaction (whether planned or unplanned) is subject to human variability that can undermine its success. In short, we do not want a deployment process that needs humans coming along after it to “clean it up.”
Thus a key principle of DevOps is that it becomes “the singular path to production.” We want every change that reaches production to have gone through our DevOps systems and processes, and our goal becomes to reduce and ultimately eliminate reliance on humans at every interaction. For example, we automate test execution, then test evaluation, then test-gated change progression. We use “smart” deployment technologies to overwrite whatever differs in the destination environment, forcing it into compliance with our stated version of the changes that “should” be there.
In effect, we “step on” variations, overwriting them every time we initiate a deployment. This quickly teaches our freestyle engineers who make changes on the fly that their work will get “stepped on” with each new deployment (perhaps daily or hourly). Most engineers quickly get the hint that to preserve their work, they need to include it in the deployment process itself (updating the proper configuration file, or adding the right series of steps in the deployment execution automation itself). Otherwise, they will face a daily onslaught of repetitive tasks that inevitably will overwhelm them.
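The “stepping on” behavior described above can be sketched in a few lines. This is a hypothetical illustration, not the interface of any specific deployment tool: `DESIRED` stands in for the version-controlled statement of what “should” be in the environment, and `enforce` is an invented helper that overwrites anything that drifted.

```python
# Hypothetical sketch of a deployment step that forces an environment
# back into compliance with the declared configuration. All names here
# are illustrative, not from any specific DevOps product.

DESIRED = {  # the version-controlled statement of what "should" be there
    "app_version": "2.4.1",
    "max_connections": "200",
    "log_level": "INFO",
}

def enforce(desired: dict, actual: dict) -> dict:
    """Overwrite any drifted or ad-hoc values; return the compliant state."""
    compliant = dict(desired)  # anything not declared is simply stepped on
    drifted = {k: v for k, v in actual.items() if desired.get(k) != v}
    for key, value in drifted.items():
        print(f"overwriting drift: {key}={value!r} -> {desired.get(key)!r}")
    return compliant

# An engineer's on-the-fly tweak (log_level set to DEBUG by hand) does
# not survive the next deployment run:
actual = {"app_version": "2.4.1", "max_connections": "200", "log_level": "DEBUG"}
print(enforce(DESIRED, actual))
```

The point of the sketch is the asymmetry: the only way a change survives is to be declared in `DESIRED` (that is, committed to the deployment configuration), which is exactly the incentive described above.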
In this sense, a properly implemented DevOps system is “self-cleaning.” Paired with an executive mandate that all change go into production through DevOps, another benefit emerges that may bring the CIO glee. Just a few years ago, IT organizations were bent on the idea of documenting change (and therefore understanding it better) through a configuration management database (CMDB). Those projects tended to be large, costly, and doomed to minimal success. Typically, the CMDB was great for the first 90 days after its instantiation; over the next six months its data became “suspect,” and after a year or two it became prehistoric and close to useless.
The simple truth is that engineers (of nearly every variety) are toxically allergic to documentation. Asking engineers to have the discipline to keep documentation current is like asking the sun not to shine. No matter what inducements are offered or punishments threatened, the allergy persists (probably a DNA thing).
Industry tried to solve this problem by producing “discovery tools”: tools that could automatically determine the state of change, environment by environment. The idea was that if we ran discovery tools at regular intervals, we could at least renew our CMDB baseline from time to time. It was an improvement, but discovery still lacks the context for change and often is not thorough enough to generate real value beyond asset location and basic configuration.
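The discovery approach, and its limitation, can be illustrated with a short sketch. The schema here is invented for illustration; the key detail is the field that interval-based scanning can never fill in.

```python
# Illustrative sketch of interval-based discovery: take a raw scan of
# each host and refresh the CMDB baseline from it. The schema below is
# hypothetical. Note what is missing: the scan records *what* is there,
# never *why* it changed.

from datetime import datetime, timezone

def refresh_baseline(cmdb: dict, scan_results: dict) -> dict:
    """Replace each host's stored baseline with the latest scan."""
    timestamp = datetime.now(timezone.utc).isoformat()
    for host, facts in scan_results.items():
        cmdb[host] = {
            "facts": facts,            # asset location, basic configuration
            "last_discovered": timestamp,
            "change_context": None,    # discovery cannot supply this
        }
    return cmdb

cmdb = {}
scan = {"web-01": {"os": "linux", "app_version": "2.3.9"}}
refresh_baseline(cmdb, scan)
```

Between scans the baseline goes stale, and even a fresh scan leaves `change_context` empty, which is precisely the gap the next paragraphs argue DevOps can close.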
Enter DevOps, which inherently includes an artifact database capability to track versions of builds, deployments, and releases over time. DevOps understands the need for artifact identification, articulation, compilation, and history, and most early DevOps tooling has some form of this built in. This is where a perfect storm becomes possible.
If our DevOps systems have achieved transparency; that is, if we can connect the context of a change, by version, from ideation (in the business requirements) through production release (including its testing history), we now have a wealth of data we could plug into a resurrected CMDB. And we could maintain that CMDB automatically, with no engineers doing manual work to keep it up to date. The CMDB becomes the master repository of the artifacts of change we implement through DevOps.
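A minimal sketch of that idea, under an invented schema (the field names, IDs, and `record_change` helper are all hypothetical): each pipeline stage appends one entry to a CMDB record keyed by version, so the CMDB stays current as a side effect of moving change, with no manual upkeep.

```python
# Hypothetical sketch: a pipeline step that records each stage of a
# version's journey through the SDLC into a version-keyed CMDB entry.

def record_change(cmdb: dict, version: str, *, requirement: str,
                  build_id: str, test_results: dict, environment: str) -> None:
    """Append one stage of a version's progression to its CMDB record."""
    entry = cmdb.setdefault(version, {"requirement": requirement, "history": []})
    entry["history"].append({
        "build_id": build_id,       # which build artifact moved
        "tests": test_results,      # the testing history travels with it
        "environment": environment, # where it landed in the SDLC
    })

cmdb = {}
record_change(cmdb, "2.4.1", requirement="REQ-1042", build_id="b77",
              test_results={"unit": "pass"}, environment="qa")
record_change(cmdb, "2.4.1", requirement="REQ-1042", build_id="b77",
              test_results={"unit": "pass", "integration": "pass"},
              environment="production")
```

Because the pipeline itself writes every entry, the record links business requirement, build, test results, and environment by version, which is the context that discovery tools could never supply.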
This is more than just a baseline, or the ability to establish a “golden copy” of what a change should look like. It is the ability to establish “relative” golden copies for each development version as it progresses through the environments of even a highly complex SDLC. We have thus increased the value of the CMDB exponentially: it is no longer limited to production use but integrated into the SDLC process itself, documenting change in real time no matter how fast or how often we move change.
This should be groundbreaking. Even a CIO who has burned all of his or her funding and political capital on a former CMDB project with less-than-desirable results can salvage what is left of it and quietly use DevOps to resurrect the CMDB, making it an integrated step in moving change. Once the CIO has demonstrated success, he or she can offer this reclamation project to the business and begin exploring ways the CMDB can create value for the business (beyond asset location and valuation).
What emerges is a DevOps-centric CMDB. DevOps, having become the singular path to production, has both insight into and context for all change progression, and simply updates the CMDB (the master artifact repository) automatically as needed. In this way, an integrated CMDB becomes part of the DevOps continuum. And perhaps the greatest win is that engineers’ allergy to documentation is already accounted for, baked into a sustainable solution. I would call that win-win—or, at least, a recognition of reality.
To continue the conversation, feel free to contact me.