Getting a solid grasp on the meaning of DevOps requires a clear understanding of the terms used to describe it. Over the last several years, we have seen a paradigm shift in the way software is developed, moving away from a waterfall approach with monolithic releases to an agile approach with incremental releases. This shift has changed the way the Application Lifecycle Management (ALM) process is handled. What we normally would refer to as ALM is now more accurately described as Continuous Delivery. Continuous Delivery is not a set of tools; it is an emphasis on people (culture), tools and processes. The change from ALM to Continuous Delivery rests on three basic principles:
- The software is always deployable, meaning the team prioritizes keeping the software deployable over adding new features. This requires smaller, less risky changes released over time instead of all at once.
- A feedback loop across the build, test and deploy steps exposes the readiness of the software, allowing anyone in development, test or production to determine whether an immediate release of the software to any environment can be approved.
- Automation of the build, test and deploy process is critical to make the process repeatable and to support a higher frequency of build, deploy and test cycles.
So like ALM, there is an awareness of a lifecycle (Dev, Test, Prod). But unlike ALM, the lifecycle should not be seen as linear. With Continuous Delivery, a software update may move from Development directly to Production without ever entering the Test phase. Because Continuous Delivery principles keep the software in a deployable state, with a visible feedback loop to determine its readiness, the software can be confidently moved into any environment at any time.
This change from a linear, waterfall approach to a non-linear, agile approach is far more realistic in the way organizations manage their software. Over the course of my 20+ years' experience working with both large, complex enterprises and small software teams, the need to be able to perform a release into any environment at the drop of a hat has been consistent and problematic. The principles behind Continuous Delivery define a more realistic approach to software development.
Moving from a linear application lifecycle approach to Continuous Delivery requires changes at many levels. First, developers must embrace Agile practices, managing smaller, incremental releases. Because Agile increases the frequency of builds, testing and deployments, DevOps tools must be in place to support the automation of these tasks. In the waterfall approach, teams had the luxury of time to get build scripts to work, execute tests manually, and write a deploy script for every environment. Continuous Delivery does not offer that luxury; automation of these tasks therefore becomes critical. This is why Continuous Build, Continuous Deploy and Continuous Test tools are implemented. Continuous Build addresses the frequent building of code changes, updating binaries upon a check-in to version control and reporting on which artifacts were impacted, providing feedback on the readiness of the build.
Continuous Deployment automates the movement of those binaries to any endpoint in any environment and supports server configuration management, all performed on an incremental basis with feedback showing the inventory and the changes. Continuous Test automates testing so developers and/or testers can run critical functional tests and code analysis to provide feedback on the readiness of those binaries.
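As a rough illustration of this incremental, feedback-driven deployment, the sketch below compares a hypothetical endpoint inventory against the desired binary versions, updates only the endpoints that differ, and returns the delta as feedback. The function name and data shapes are assumptions for illustration, not any particular tool's API.

```python
def incremental_deploy(inventory, desired):
    """Deploy incrementally: update only endpoints whose installed
    version differs from the desired one, and return the delta as
    feedback on the inventory and the changes (hypothetical sketch)."""
    changed = {ep: ver for ep, ver in desired.items()
               if inventory.get(ep) != ver}
    inventory.update(changed)  # record the new state of each endpoint
    return changed

# Example: two of three endpoints are already current, so only one changes.
inventory = {"web01": "1.0", "web02": "1.1", "web03": "1.1"}
desired = {"web01": "1.1", "web02": "1.1", "web03": "1.1"}
changes = incremental_deploy(inventory, desired)
print(changes)  # only web01 needed the update
```

The same diff-then-update pattern scales from three endpoints to thousands, since only the changed set is touched and reported.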
To maximize automation, these steps are chained together using the workflow engine of the Continuous Integration server. The workflow can be triggered upon a source code check-in, kicking off a compile/link (build), running code analysis, deploying to an endpoint and running tests. In Continuous Delivery, each environment may have its own Continuous Integration workflow that performs only the steps needed. For example, a Production CI workflow may perform just the activities related to moving the code to hundreds or thousands of endpoints, with database updates, but not execute a build or tests.
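A minimal sketch of such chained, per-environment workflows follows. The step names and the simple step-list structure are assumptions for illustration; no real CI server's API is used.

```python
# Hypothetical per-environment workflows: Dev builds, analyzes, deploys
# and tests; Prod only deploys, mirroring the Production example above.
WORKFLOWS = {
    "dev":  ["build", "analyze", "deploy", "test"],
    "prod": ["deploy"],
}

def run_step(step):
    # Placeholder for invoking the real build, analysis, deploy or
    # test tool; a real workflow engine would shell out or call a plug-in.
    print(f"running {step}")
    return True

def run_workflow(env):
    """Run the environment's steps in order, stopping at the first
    failure so readiness feedback is immediate; return the steps run."""
    executed = []
    for step in WORKFLOWS[env]:
        executed.append(step)
        if not run_step(step):
            break
    return executed

run_workflow("prod")  # a Production run performs only the deploy step
```

In practice the trigger would be a version-control check-in rather than a direct call, but the chaining and per-environment tailoring are the same idea.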
Not all is perfect in the Continuous Delivery approach. Automation continues to be a bottleneck. As with most paradigm shifts, changes in behavior and adoption are a major hurdle. In most companies, manual scripts for builds, deployments, test automation and infrastructure management are still called under the covers of the Continuous Integration workflow. These are the same scripts that were used in the waterfall approach, except now there is no longer the luxury of time to fix the scripts or update them to adjust to software changes. To support Continuous Delivery, the build, deploy and test activities must be elastic, automatically adjusting to code changes, database updates and server configurations.
To create a more dynamic Continuous Delivery process, DevOps tools are moving away from one-off scripts toward a model-driven approach with playbooks and plug-ins. Many developers still hold on to these scripts, arguing that they do not want to be at the mercy of a 'vendor' or 'tool' to manage these low-level tasks. As Continuous Delivery pushes the frequency of releases, this final paradigm shift away from one-off scripting toward full automation will need to take hold. Continuous Build will be supported by Build Automation, Continuous Deploy will be supported by Application Release Automation, and Continuous Test will be supported by Testing Automation and Virtualization. The automation tool chain will be orchestrated via a Continuous Integration server, and the entire process will become a reliable and fast Continuous Delivery process.
About the Author: Tracy Ragan
Ms. Ragan has extensive experience in the development and implementation of DevOps for large organizations. She began her consulting career in 1989 on Wall Street, specializing in build, testing and release management for the distributed platform. It was during her consulting experience that Ms. Ragan recognized that open systems lacked the standardized development procedures that were common on the mainframe. In the years leading to the creation of OpenMake Software, she worked with development teams to implement a community-driven, standardized build and deploy process that enabled frequent builds and releases, automated around version control. Her knowledge and experience contributed to the creation of OpenMake Meister, the first commercial Build Automation solution. Ms. Ragan served on the Eclipse Foundation Board of Directors as an Add-in Provider Representative for five years. She has been published on numerous occasions and regularly speaks at conferences, including CA World, where she presented for 15 consecutive years. She holds a BS degree in Business Administration, Computer Technology from California State University, Pomona.