As DevOps functionality continues to expand, we’re looking at a new vista of freedom for our applications. Apps can be moved around as we need them; we can build-test-deploy on the fly; and we can monitor whatever needs monitoring. The price we pay is one we gladly accept: implementing new toolchains to drive the transition and integrating those toolchains into the entire development and operations process. Yes, there is an entire organizational side, too, but for this blog, we’ll talk about tools.
Why discuss tools alone if this is about DevOps? Because moving from one organizational structure to another is a different set of problems than moving from one toolset to another. While DevOps is giving us independence from a lot of traditional infrastructure issues, the reality is that it is creating dependencies on the very tools and toolchains we are using to gain that independence.
Confusing? Let’s try looking at it differently. The new vendor lock-in will be DevOps tools. While these tools are freeing us from vendor lock-in on several fronts, you are unlikely to make a quick move from, say, Ansible to Puppet or vice versa. It can be done, and some organizations even run a little of both to keep their options open. But most shops will pick one and run with it, because that creates a single training and maintenance direction.
I (and others) have alluded to this issue in the past, but it is of growing importance as vendors start to take what should be cross-portable tools and customize them in ways that limit portability. Container management, for example, is available on every major cloud vendor these days, but using the vendor’s bundled container service, while convenient, is more difficult to move away from. That is but one example of many.
And the importance of these tools and toolchains is growing at the same time that their use is growing. Soon, in a DevOps-heavy shop, they will be essential and as locked in as if they were cemented.
So, What Are You Saying?
Pick DevOps tools as if they will be there for decades. This is not how DevOps teams are encouraged to think, but you are building a DevOps infrastructure that, if adopted broadly in the organization, will be difficult to move away from to any significant degree. Just as today you would need a driving reason to port thousands or hundreds of thousands of lines of code, DevOps scripts are growing in size and multiplying in number. As you fold security, storage and networking into that environment, and build real-time reporting that you are going to rely on for years to come, make certain what you have is good, if not great. In many ways, DevOps thrives on “good enough” without thought to long-term futures, but the investment required to build a fully developed DevOps environment strongly implies that what an average enterprise is doing today will be what it is stuck with for a good long while. That’s no place for “good enough” if you know there is better out there.
It’s Not All Unchangeable
Some bits of the environment can change over time, of course. Better data collation and integration technologies no doubt will take over for whatever you are using for reporting now, as will AI-based solutions that can plow through the growing pile of machine-generated data to get a systemic view of what is happening. But for the most part, build the environment as if you’ll be maintaining it for years to come, because much of it you will be.