There’s been a lot of discussion regarding DevOps and its role in fostering quality and continuous improvement through increased levels of automation. That automation, in turn, is enabled by emerging infrastructure and software architectures that expose control-plane and management programming interfaces, an approach many circles have dubbed Infrastructure-as-Code.
This movement allows development and operations personnel to interact in real time with the underlying environment (the network, servers, operating systems, and storage) used to deliver applications and data. It also means that as the environment becomes more programmatic, the business must treat this effort as a first-class application development process and ensure that the corresponding assets follow common coding practices and conform to a consistent, sustainable architecture.
Those who are unfamiliar with the work of system administrators and data center operations may come away from the emerging DevOps literature with the ill-conceived notion that operators have been running the business on purely manual processes. Even as a lifelong IT person, until I began working with infrastructure and operations as part of my role, I was unaware of what happens behind the great curtain that keeps the digital bits flowing through the organization.
The truth is that operations staff and administrators have been developing scripts for years to automate routine tasks. These scripts have typically been used to dynamically generate or update configuration files that were then applied after a restart. The change with Infrastructure-as-Code is significant because configuration changes can now be made on the fly and are often acted upon immediately, without taking the system offline. If you’re now putting your hand over your mouth at the realization of the potentially disastrous effects this can have, you understand the impact of this power.
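To make the contrast concrete, here is a minimal sketch of the two models: a traditional script that renders a configuration file to be picked up on the next restart, versus a call against a live management API where the change lands immediately. The controller URL, endpoint, token, and payload below are hypothetical placeholders, meant only to show the shape of the change, not any particular vendor’s interface.

```python
"""Minimal sketch: file-and-restart automation vs. a live API change.
The controller, endpoint, and token are hypothetical placeholders."""
import requests


# Traditional approach: render a config file that only takes effect
# after the service or device is restarted or reloaded.
def write_vlan_config(path: str, vlan_id: int, name: str) -> None:
    with open(path, "w") as f:
        f.write(f"vlan {vlan_id}\n  name {name}\n")
    # An operator (or a scheduled job) applies this during a maintenance window.


# Infrastructure-as-Code approach: push the same change through a
# management API; it takes effect immediately, with no restart and
# no natural pause in which to catch a mistake.
def apply_vlan_live(controller: str, token: str, vlan_id: int, name: str) -> None:
    resp = requests.put(
        f"https://{controller}/api/v1/vlans/{vlan_id}",  # hypothetical endpoint
        json={"name": name},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()  # the change is already live by the time this returns


if __name__ == "__main__":
    write_vlan_config("/tmp/vlan100.conf", 100, "app-tier")
    # apply_vlan_live("controller.example.net", "TOKEN", 100, "app-tier")
```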
Thanks to Software Defined Networking, hypervisors, and even Software Defined Storage, an operator can programmatically change the entire environment underlying the application without notifying the application of the change. Architecturally, this is a wonderful thing. Pragmatically? Well, there’s a lot of bad code out there that makes assumptions about the underlying infrastructure, and it will be exposed the moment it is migrated to these modern platforms.
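A contrived illustration of the kind of assumption that gets exposed: code that hard-codes an address or a local path works fine until the workload is live-migrated or the network is re-addressed underneath it. The hostnames, paths, and environment variables here are hypothetical, but the pattern is common.

```python
"""Contrived sketch of infrastructure assumptions baked into application code.
Hostnames, IPs, paths, and environment variable names are hypothetical."""
import os

# Brittle: assumes the database lives at a fixed address and that scratch
# space is a local disk that never moves. Works until the environment is
# reshaped programmatically beneath the application.
DB_HOST = "10.0.12.45"
SCRATCH_DIR = "/mnt/local-ssd/scratch"


# More resilient: resolve the environment at runtime by name, and create
# what is missing rather than assuming it already exists.
def get_db_host() -> str:
    return os.environ.get("DB_HOST", "db.service.internal")  # a name, not an address


def get_scratch_dir() -> str:
    path = os.environ.get("SCRATCH_DIR", "/var/tmp/scratch")
    os.makedirs(path, exist_ok=True)
    return path
```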
Hence, infrastructure control applications must become part of the overall Software Development Life Cycle (SDLC) and be built, tested, deployed, and managed with the same controlled techniques that are, hopefully, already applied to the applications being developed by application engineering.
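In practice, that can be as simple as giving infrastructure changes the same test-before-deploy treatment that application code gets. The sketch below uses pytest against the hypothetical VLAN API from the earlier example and assumes a lab or virtual controller is identified by a TEST_CONTROLLER environment variable; both the endpoint and the variable name are illustrative.

```python
"""Sketch of an SDLC-style automated test for infrastructure code (pytest).
TEST_CONTROLLER and the /api/v1/vlans endpoint are hypothetical."""
import os

import pytest
import requests

CONTROLLER = os.environ.get("TEST_CONTROLLER")  # e.g. a lab device or virtual instance


@pytest.mark.skipif(not CONTROLLER, reason="no test controller configured")
def test_vlan_change_is_applied_and_reported():
    # Apply the change through the same code path production will use...
    requests.put(
        f"https://{CONTROLLER}/api/v1/vlans/100",
        json={"name": "app-tier"},
        timeout=10,
    ).raise_for_status()

    # ...then verify the controller reports the state we intended.
    state = requests.get(f"https://{CONTROLLER}/api/v1/vlans/100", timeout=10).json()
    assert state.get("name") == "app-tier"
```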
You may be asking, “Wait, doesn’t that mean my QA environment must have the same infrastructure components as my production environment?” To which I respond, “Maybe!” In some cases this is advantageous: it ensures production deployment can support continuous deployment processes, limiting human intervention from the point of user acceptance through deployment and, hopefully, limiting the opportunity for production failures. However, because we’re moving to a more programmatic infrastructure, many vendors can also supply virtual instances of their infrastructure code for testing purposes, and that may be satisfactory for your needs.
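If the same test suite can be pointed at either a physical lab device or a vendor-supplied virtual instance, the “Maybe!” becomes a configuration decision rather than a capital one. A hedged sketch of that idea follows; the container image name, port, and readiness endpoint are hypothetical stand-ins for whatever virtual appliance a vendor actually ships.

```python
"""Sketch: run the same test suite against a vendor-supplied virtual appliance.
The image, port, and readiness endpoint are hypothetical."""
import os
import subprocess
import time

import requests

IMAGE = "vendor/virtual-controller:latest"  # hypothetical virtual instance


def start_virtual_controller() -> str:
    # Launch the virtual appliance locally, mapping its API port.
    subprocess.run(
        ["docker", "run", "-d", "--rm", "--name", "qa-controller",
         "-p", "8443:443", IMAGE],
        check=True,
    )
    # Wait for the appliance to report ready before handing it to the tests.
    for _ in range(30):
        try:
            requests.get("https://localhost:8443/api/v1/status", verify=False, timeout=2)
            return "localhost:8443"
        except requests.RequestException:
            time.sleep(2)
    raise RuntimeError("virtual controller never became ready")


if __name__ == "__main__":
    controller = start_virtual_controller()
    # Reuse the pytest suite from the previous sketch against the virtual instance.
    subprocess.run(
        ["pytest", "tests/"],
        env={**os.environ, "TEST_CONTROLLER": controller},
        check=True,
    )
```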
Of note, there are many businesses that do not develop their own software; they primarily use Commercial-Off-The-Shelf (COTS) applications. These businesses may not have an established SDLC practice, and they may not believe DevOps is relevant to them since they do not do “development”. If your business falls into this category, the move to Infrastructure-as-Code and Software-Defined Data Centers should be perceived as a driver for adopting DevOps as well as formally adopting one of the well-defined SDLC approaches. Even if these components won’t affect you for a few years, until your next refresh cycle, the improvements in quality and consistency of delivery that DevOps brings are of definite value to your business now.
Infrastructure-as-Code is a significant change in the capability operations has for delivering compute and storage resources to the business. It gives them greater insight into the hardware, letting them perform predictive maintenance, migrate applications across resources based on demand, and tune the overall environment to meet the business’ needs at a given moment. In short, it will allow IT to deliver compute services with a high degree of availability and security. It will also expose shortcomings in existing applications that have made assumptions about the underlying environment on which they operate. Thus, if operations and application engineering are not working together, and the entire application environment, inclusive of the infrastructure control apps, is not tested and deployed as a unit, the likelihood of failure will be high.