Not all automation scales. One of the chief impediments to delivering an enterprise-class DevOps service is the decentralization of tooling choices, or worse, of the methodologies those tools use to deliver CI/CD. If the methods used to deliver automation vary with whichever engineer last created them, the spaghetti that ensues will not scale much further than previous methods did.
The trick to scalability is not so much tooling-dependent as it is approach-dependent. Templates or frameworks (or whatever word you prefer) that describe a relatively preformatted approach to automating a particular kind of technology component are the key to achieving scale.
Ideally, your highest-skilled engineers create the basic template or framework that other, less experienced engineers then use to create the actual automation for each system going forward. Using a single way to automate a particular kind of technology lets you manage large numbers of systems built on that technology from a single point of administration. For example, a standardized template used to generate the automation that deploys a MySQL database could be altered over time to include new testing requirements discovered later in the QA process.
Once the new testing is incorporated into the standardized template, the resulting automation is fixed once for every instance of that kind of database in every system that uses it. Otherwise, you would need to remember to update every script for every instance of a system that uses this kind of database. Those updates will likely be implemented differently across applications by different engineers, which may negatively impact testing results (defeating the purpose of testing consistency, and generating unnecessary work just to locate the variations).
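The fix-it-once property can be sketched in a few lines of Python. This is a minimal illustration, not any real deployment tool; the class, step names, and hostnames here are all hypothetical. One template class defines the single shared way to automate a MySQL deployment, each system supplies only its own parameters, and appending a new QA-driven step to the template changes the behavior of every system that uses it:

```python
# Hypothetical sketch of a standardized automation template.
# One class is the single source of truth for "how" a MySQL
# deployment is automated; systems vary only in their parameters.

class MySQLDeployTemplate:
    """Single point of administration for one kind of database."""

    # The ordered pipeline every system inherits. When QA discovers
    # a gap, the template owner appends a step here -- once, for all
    # systems -- instead of editing every per-application script.
    STEPS = ("provision", "configure", "smoke_test")

    def __init__(self, system_name, db_host):
        self.system_name = system_name
        self.db_host = db_host

    def run(self):
        """Execute every step in order and collect the results."""
        return [getattr(self, step)() for step in self.STEPS]

    def provision(self):
        return f"{self.system_name}: provision MySQL on {self.db_host}"

    def configure(self):
        return f"{self.system_name}: apply standard my.cnf settings"

    def smoke_test(self):
        return f"{self.system_name}: run connection smoke test"


# Each application reuses the template verbatim; only parameters vary.
billing = MySQLDeployTemplate("billing", "db-billing.internal")
reports = MySQLDeployTemplate("reports", "db-reports.internal")

for result in billing.run() + reports.run():
    print(result)
```

The design point is that the step sequence lives in exactly one place: an engineer wiring up a new system never redefines "how," only "where," so adding a fourth step to `STEPS` propagates to the billing and reports deployments alike.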
Templates or frameworks (or pick the synonym that suits you) reduce variability in how the automation continuously delivers and integrates. Once variability is killed (or nearly killed), predictability can emerge. Predictability throughout the entire SDLC lowers overall costs, as finding and fixing bugs becomes systemic rather than piecemeal. What's more, there are fewer bugs, thanks to battle-tested processes that produce consistent behavior. If the tooling is sophisticated enough to offer visibility, the cost of quality drops even further, and the speed of delivery radically increases.
DevOps promises that combination of reduced failure rates and increased speed of delivery. However, embracing unstructured variability in creating the automation can kill that promise. The Ops side of the DevOps equation is all about doing things systematically and predictably, because that is the only way to maintain stability while changes whiz through their world. Getting Ops to buy into DevOps is much easier when they have complete visibility into a predictable, repeatable process for delivering change. Avoiding that 3 a.m. phone call is a goal everyone shares, and reducing variability by focusing automation through the lens of templates or frameworks is an excellent way to achieve it.