As longtime readers of my blogs will know, one of my hobbies—when not writing about or playing with technology—is modeling. There is a similarity between quality tech work and quality modeling, but the brain power involved in modeling is much less, leaving me time to think about tech issues or listen to audiobooks (when I’m not listening to music, anyway).
In modeling, as in tech, there is a process. The process varies based upon what you’re working on, but is largely the same, with some steps slightly different or skipped altogether and others taking longer. At the moment, I’m building the first day of Gettysburg from the American Civil War, but I simply enjoy modeling—and wargaming with the models, but that’s a separate story for a hobby-focused blog.
The thing is, that process is nailed down and, to the extent it can be, automated. One step leads reasonably to the next, and while one step completes on a given model (glue or paint drying, etc.), there is an opportunity to advance a separate project.
Sounds a lot like DevOps, doesn’t it?
The thing is, every once in a while, you will get something so different that it totally interferes with the carefully planned flow. My current primary project is a little over 300 figures. While one unit is getting the final two coats of finish (which are basically the same for every model), the next unit is getting primed. Assembly of all the figures came first, so there is only the priming step before painting can commence.
But here is a work-in-progress picture of Pegasus Bridge. This came as a complete set, and it took a large chunk of time to assemble, find the correct water effects for, get placed on foundation boards, etc. This is outside the normal process of assemble/paint/base figures and vehicles. The set is beautiful, and even this picture, which was taken well before completion and only to study color and weathering effects versus reality, shows how truly nice the model is. But it totally disrupted the process flow.
DevOps and Multisystem Installations
That’s what complex system installations such as Hadoop or OpenStack do. You’re installing across multiple systems, configuring them for load-sharing and communication, and doing it by hand is a long process that interrupts the normal DevOps flow. Crafting scripts to install them is possible but highly complex and frustrating, simply because of the complexity and interrelationship of the nodes in the system.
Having installed both by hand and with tools designed to make it easier, I will happily tell you that yes, you will understand more if you do it by hand, and no, that immediate understanding is not necessarily worth the cost of weeks or months spent figuring it out and getting it installed.
You see, to operate these systems, you will need much of the knowledge you would gain hand-installing, so you’re going to learn it either way. Your choice is a simple one: Get a working system and learn by tweaking settings or swapping nodes in and out, or have no working system and learn by the frustrating process of trying to figure out what you’re supposed to do next, and why that step is failing for your specific hardware.
DevOps and Multisystem Maintenance
The only real problem is that it is painful to maintain these systems. It’s not as simple as
yum update <package-name>
because that would only upgrade one package on one system. These tools span multiple systems, and even the dependencies alone generate a massive list for each node in the cluster. Seeing the need, most of the traditional application provisioning tools (Puppet, Ansible, et al.) have built or cobbled together solutions that will get the job done. Custom solutions exist for these products, too. The problem is that, like Pegasus Bridge, as beautiful as they are, they disrupt the DevOps flow. No system goes without exceptions, though, and I'll go out on a limb and say that installing and maintaining products such as Hadoop and OpenStack is worth having yet another tool in the DevOps tool chain. The trend is toward multipurpose tools used across many projects, but the importance of a cloud or big data cluster in your IT operations makes it worth the extra tool, as long as that tool reduces the amount of work and the chance of human error (which, in my opinion, it does; that is kind of the selling point).
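To see why the single-package, single-system command above doesn't scale, here is a toy Python sketch that just counts the by-hand work. The host and package names are made up for illustration; real provisioning tools such as Ansible or Puppet exist to replace exactly this kind of loop.

```python
# Hypothetical cluster inventory and package set (illustrative names only).
hosts = ["node01", "node02", "node03"]
packages = ["hadoop-hdfs", "hadoop-yarn", "zookeeper"]

def update_commands(hosts, packages):
    """Build the per-node shell commands a by-hand upgrade would require."""
    return [f"ssh {host} 'sudo yum update -y {pkg}'"
            for host in hosts for pkg in packages]

cmds = update_commands(hosts, packages)
# The list grows as hosts x packages, before you even consider
# dependency ordering or restarting services in the right sequence.
print(f"{len(cmds)} commands just to update {len(packages)} packages "
      f"on {len(hosts)} nodes")
```

And that multiplication is the easy part; the hard part is that the nodes depend on one another, so the order in which you run those commands matters.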
That Leaves Time
These tools (a good example, though in general I won't be talking about the competing vendors here, is the Mitaka-based release of Mirantis) allow not just installation but also ongoing management of the OpenStack cloud or big data cluster. That means there is less nitty-gritty for DevOps (or operations, for that matter) teams to worry about, freeing time to work on what actually gets built on top of these powerful platforms. Looked at in the right light, cloud and big data are both just enablers for more functionality, which means more work (and often more meaningful work) for DevOps teams.
When you consider either of these technologies, check out what’s available for installation and maintenance, even if it is outside your normal DevOps tool chain. Like Pegasus Bridge, the change is likely to disrupt you for a bit, but once you have a tool and process in place, it will just be something that happened—and was going to happen with or without the tools. They just make it less painful.