Since the widespread acceptance of virtual machines (VMs), we've been on a racetrack of changing technology, with little opportunity to stop and digest lessons learned. That's not to say organizations haven't done a good job of learning from both experience and mistakes; rather, the lessons learned are too frequently rendered obsolete by The Next Big Thing.
While I have an abiding interest in several areas of DevOps, we’ll focus on one I’ve spent a lot of time around recently: provisioning.
All Roads Lead to Rome
Today you can provision to VMs, "the cloud" (which, from an automated-provisioning perspective, is anything but homogeneous, though the fiction helps for a moment), containers, clusters and clustered container managers. While all of them end up giving you a working machine with IP access, each goes about it differently.
That is a problem for automation, unless a company goes all-in on one of the above. Containers were going to save us from custom configurations based on the target (and largely have), but in DevOps they still leave us with an underlying deployment infrastructure that must handle wherever that container will land. The infrastructure doesn't define itself unless we teach it to, and that teaching is still very much tied to the target. Anyone who has used both AWS and Azure knows that even two cloud environments can look very different when you're getting automation up and running.
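To make the heterogeneity concrete, here is a minimal sketch of what "the same outcome, different steps" looks like behind a common interface. The class names, step lists and `bring_up` helper are all hypothetical placeholders, not real AWS or Azure API calls; the point is only that each target demands its own sequence to reach "a working machine with IP access."

```python
from abc import ABC, abstractmethod

class CloudTarget(ABC):
    """Hypothetical abstraction: every target must end at the same
    place (a reachable machine), but the steps to get there differ."""

    @abstractmethod
    def provision(self, name: str) -> str:
        ...

class AwsTarget(CloudTarget):
    def provision(self, name: str) -> str:
        # Placeholder AWS-flavored steps, not real API calls.
        steps = ["create security group", "launch EC2 instance",
                 "associate elastic IP"]
        return f"{name}: " + " -> ".join(steps)

class AzureTarget(CloudTarget):
    def provision(self, name: str) -> str:
        # Placeholder Azure-flavored steps; note the different shape.
        steps = ["create resource group", "create NIC and public IP",
                 "create VM"]
        return f"{name}: " + " -> ".join(steps)

def bring_up(target: CloudTarget, name: str) -> str:
    """Callers see one interface regardless of where the machine lands."""
    return target.provision(name)
```

The abstraction is cheap to write; the expensive part, as the article argues, is maintaining a correct implementation of it per target.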
Server deployment tools are steadily growing their capabilities, adding the ability to spin up cloud instances alongside hardware and VMs. That's a good thing, but it's still relatively new.
The thing is, we in DevOps are trying to do the exact same thing in every case: get a server configured just far enough that our application-provisioning tool (Puppet, Chef, Salt, whatever) can pick it up and populate it with our applications. One goal, many paths, and it is a rare organization these days that takes only one of those paths.
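The "one goal, many paths" idea can be sketched as a dispatch table: each target gets its own bootstrap step, but every path converges on the same handoff to the application-provisioning tool. The target names, bootstrap strings and `provision` function below are assumptions made up for illustration, not any real tool's API.

```python
# Hypothetical sketch: many bootstrap paths, one common handoff point.
BOOTSTRAPPERS = {
    "vm":        lambda host: f"PXE-boot base image onto {host}",
    "cloud":     lambda host: f"pass cloud-init user-data to {host}",
    "container": lambda host: f"build image layers for {host}",
}

def provision(host: str, target: str) -> list:
    """Run the target-specific bootstrap, then hand off to the
    application-provisioning tool (Puppet, Chef, Salt, ...)."""
    steps = [BOOTSTRAPPERS[target](host)]
    # Regardless of which path we took, the final step is identical:
    # the config-management tool takes over and populates the apps.
    steps.append(f"register {host} with config-management master")
    return steps
```

Keeping the handoff step identical across targets is what lets the application-provisioning layer stay blissfully ignorant of where the machine came from.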
Plan Ahead, Forge Ahead
The rate of change shows no signs of slowing; the re-emergence of clustering software in light of containers is proof of that. I'm stoked about the idea of clustering, simply because it can remove a layer from an increasingly complex environment, but it changes the way we deploy yet again, bringing new software, configurations and APIs.
The defense mechanism I recommend is to look at what is available, consider how it does or will fit into your architecture, and plan for it. DevOps is the area of IT most affected by these changes, so that's where the planning should take place. But like all things DevOps, this requires a lot of communication with Dev and with stakeholders to make certain the platforms being considered are the ones that will best serve the needs of the company.

Supporting all of the possible deployment mechanisms in a DevOps environment means investing in scripts and apps that do the same job for different targets. While that may be doable today if you are blessed with tons of DevOps man-hours, I cannot say often enough that it builds technical debt. Each of those deployment mechanisms will need to be maintained going forward, and while the idea of DevOps is always relatively simple, in my experience the reality requires some pretty complex code and scripting to work successfully and reliably.
So build a list of platforms you aim to support, start working toward that list, then revisit it in six months or so to make sure nothing has changed. Because the needs of most organizations, and the markets they serve, change continually with technical advances, six months is a good interval for revisiting. Wait any longer and you may find yourself with a support list that is no longer accurate and misguides your DevOps automation efforts; revisit any sooner and you will shift priorities too frequently to adequately support large installations.
It’s About the Business
In the end, for the vast majority of IT, the goal is simply to support the business in an efficient and secure manner. Find out what your organization needs and then go out and make it work. Our industry is lucky enough to have a metric ton of smart people, so making it work is not the issue; pinning down the goals is.
And keep rocking it. For every error there are a bazillion successes we don't jump up and down celebrating, so don't jump up and down over the errors, either. Learn the lessons and move along. Pretty soon you'll have fully automated deployment on the newer platforms best suited to the organization, serving up the apps the organization needs.