Almost since the beginning, there has been a tug-of-war over whether better toolchains or increased communication is the “real” DevOps. The answer to that question is, Yes.
Better tools undeniably speed the delivery process and streamline operations. Anyone who claims differently is not living in the real world. Jenkins alone can make your development world a better place, and it is but one tool—with hooks into like a million more, but still one tool.
Increased communications speed the delivery process and streamline operations, provided they are frank and honest. A developer who knows that their choice of included dependencies can make maintenance easier or more difficult, for example, is far more likely to weigh that consideration when choosing.
Gains Are In the Doing
But the rock-star gains that some shops have seen from DevOps? Are a bit of both, and something else. The sudden improvement to 4x delivery frequency most often comes from no longer avoiding that thing you don’t talk about, or the legacy piece of software that eats an ever-increasing amount of time just being kept alive because it’s necessary.
And I would argue that is the best place to start. I’ve touched on this tangentially before, so I thought I might be more direct.
You have it. That thing in the process that is a bottleneck. No one talks about it. There are reasons—some because a few years ago you tried to resolve it and couldn’t; some because the bottleneck is a natural outgrowth of some requirement; some because “it’s always been that way.” But there is that step (or, more often, three or four steps) in the develop/test/deploy cycle that consumes a huge chunk of time to do little work.
There are also those tools/processes so old that they consume an inordinate amount of resources. Most shops more than a few years old have them. I’ve worked in a couple of 100-plus-year-old companies and frankly, these bits of outdated software were part of the genotype of the org because they’d been in IT so long.
Manual processes are known to take time, simply because they’re manual. Development itself, even when broken into smaller pieces and implemented piecemeal in the manner of Agile, still takes time. Testing, too, is still (or again, depending upon your background) a largely manual process. And in a ton of shops, so are security checks.
These are known. And if you’re approaching the problem from the “find the bottlenecks” angle, I would exclude them. Better tools or more automation can help all three, and streamlining who does what can help test and security, but these aren’t the places that I’m thinking of.
Aging Support Software
I worked in a shop once that spent an inordinate amount of time keeping a “database” alive that was in existence before RDBMS. Yep. There was mission-critical software that used the database, so it was maintained as well as possible internally. Internally because the vendor no longer existed.
Several analyses were done, and each time it was decided that the DB couldn’t be retired without a major rewrite of the systems it supported. Eventually, it was decreed from above that the database had to go. The estimate to do so came in at an insane number and the project was dropped. Meanwhile, ever more resources were being poured into the DB and the systems built on it. All were outdated, and all required special skills that were also outdated. Soon the org was paying exorbitant contract rates for the few people who could still work on the system.
Finally, with hard dollars attached to the preservation of this system, a forced upgrade was a viable project, and it went forward.
This is all pretty standard enterprise stuff; it’s kind of the way IT has functioned in the past. But had IT management gone to the business units with improvements that could be offered during the upgrade, all of the resources poured into not upgrading over the years could have gone to new product/feature development. Identifying that replacement was inevitable and wouldn’t get cheaper by waiting is the type of outdated-software issue I’m talking about.
From the bottleneck side, there is often a “this monster log file must be reviewed by a human” step on the way to production, too. Think about simple automation. What is the human scanning for? Talk to those responsible and see if they can offer guidelines on how they would train a new person to review the file. Then write a small script to do it—not as a final solution, but to show that a person doesn’t have to if the review is well-bounded. People won’t trust it at first, so run it in parallel for a while; learn from what it misses, and point out what it finds. Long term, eliminate the delay of someone poring over logs looking for the same things over and over. Take the time to make it possible. It won’t be a ton of time, but it will reap gains. Just talking to the team that must accept the log output might reap gains.
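To make that concrete, here is a minimal sketch of the kind of script I mean, in Python. The patterns here are hypothetical stand-ins—the real list would come straight from interviewing whoever reviews the log today:

```python
import re

# Hypothetical patterns a human reviewer might be scanning for.
# Replace these with the actual things your reviewers look for.
PATTERNS = {
    "error": re.compile(r"\bERROR\b|\bFATAL\b"),
    "timeout": re.compile(r"timed? ?out", re.IGNORECASE),
}

def scan_log(lines):
    """Return (line_number, label, line) for every line matching a pattern."""
    hits = []
    for n, line in enumerate(lines, start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((n, label, line.rstrip()))
    return hits
```

Run it against the same log the human reviews, diff its findings against theirs, and grow the pattern list as it misses things. The point isn’t that this little script is production-grade; it’s that a well-bounded review can be captured in code at all.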
There isn’t a need to immediately install 15 DevOps tools or reorg all of IT before starting your DevOps journey. Just look for the places where resources are being invested for little long-term gain, or where the process is held up in a manner easily eliminated. And keep rocking it.