Technology has ushered in an era of real-time information, immediate feedback and instantaneous response. As a result, there is tremendous pressure on development teams to deliver a continuous flow of updates and fixes to software applications. In support of this objective, development philosophies and methodologies abound: agile, lean, scrum, kanban and extreme programming, just to name a few. Development teams are adopting and implementing these new methodologies at a breakneck pace in an attempt to bring efficiency and speed to the development process.
This increased velocity has placed added pressure on software delivery teams to build, validate and deploy newly developed code faster and more reliably. Over time, the DevOps movement has emerged, injecting robust automation into the delivery process while spawning a diverse collection of hundreds of single-purpose delivery automation tools. As a result, DevOps has improved delivery speed and quality. Even so, continuous development comes with its challenges, chief among them the lack of a real-time “big picture” view of the end-to-end software delivery process.
New Challenges for CD Teams: The Unified Pipeline
While highly diversified delivery toolchains can be effective and have increased speed and agility, significant challenges continue to plague the software delivery process as a whole.
Real-time visibility and visualization – Typically, the delivery process begins as soon as newly developed code is considered “dev-complete.” The sequence of steps and stages new code progresses through on the way to potential release is often referred to as the delivery pipeline. In a moderately sized organization, delivery pipelines are composed of dozens of people performing hundreds of tasks. Without a centralized “delivery portal,” team members have little or no insight into the overall upstream or downstream process flow. While delivery pipelines benefit from diverse automation, they remain a “black box,” largely lacking communication, collaboration and audit/tracking capabilities.
Data fragmentation – Important delivery data is generated at virtually every step along the delivery pipeline. Disconnected toolchains generate siloed, disaggregated and uncorrelated data that is trapped within a dozen or more automation tools. As a result, team members often lack access to a single comprehensive audit trail document that fully describes the functional and technical contents of a specific release along with any relevant data generated during each step of the delivery pipeline. At a macro level, self-organizing delivery teams do not have the real-time and historical analytic data required to quickly identify and react to exceptions or make performance optimizations across pipelines and/or teams.
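To make the fragmentation problem concrete, consider a minimal sketch in Python of what stitching siloed tool output back together might look like. The tool names, record fields and the correlate_events helper are invented for illustration, not taken from any product: each tool emits its own records, and a small routine merges them into a single per-release-candidate audit trail.

```python
from datetime import datetime

# Hypothetical event records, as three disconnected tools might emit them;
# in practice each tool keeps these in its own database or log format.
ci_events = [
    {"rc": "RC-412", "tool": "ci", "step": "build", "at": "2015-01-05T09:02:00"},
    {"rc": "RC-412", "tool": "ci", "step": "unit-tests", "at": "2015-01-05T09:14:00"},
]
test_events = [
    {"rc": "RC-412", "tool": "test", "step": "regression-suite", "at": "2015-01-05T11:30:00"},
]
deploy_events = [
    {"rc": "RC-412", "tool": "deploy", "step": "deploy-to-qa", "at": "2015-01-05T10:05:00"},
]

def correlate_events(*sources):
    """Merge per-tool event streams into one audit trail per release candidate."""
    trails = {}
    for source in sources:
        for event in source:
            trails.setdefault(event["rc"], []).append(event)
    # Sort each candidate's trail chronologically so it reads as one timeline.
    for trail in trails.values():
        trail.sort(key=lambda e: datetime.fromisoformat(e["at"]))
    return trails

for rc, trail in correlate_events(ci_events, test_events, deploy_events).items():
    print(rc)
    for event in trail:
        print(f"  {event['at']}  [{event['tool']}] {event['step']}")
```

The point is not the code but the shape of the problem: until someone correlates these streams, no single tool can answer “what happened to RC-412, end to end?”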
Is Dynamic Pipeline Orchestration the Answer?
While the build and integration stages of a delivery pipeline are largely static and predictable, pipeline flow can become much more dynamic once manual testing begins. Defects can be discovered at any point along the delivery pipeline, and judgment calls often determine a dynamic process flow. Tools designed to automate the build/integrate phase of the pipeline are frequently stretched beyond their original intent when it comes to pipeline orchestration. While these tools automate a critical portion of the delivery pipeline, they are far too static to support real-world pipeline flow through the manual steps that occur in the later stages of a delivery pipeline. As a result, pipeline orchestration is largely a manual process beyond integration and into the quality assurance phases of delivery.
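To illustrate why static build/integrate automation falls short here, the following sketch models a pipeline whose later stages branch on defects and human decisions rather than following a fixed script. The stage names and the judgment_call stub are assumptions made for the example, not any particular tool's behavior.

```python
# Minimal sketch of dynamic pipeline flow: the build/integrate stages run
# in a fixed order, but once manual testing begins the route depends on
# defects and human judgment.

def judgment_call(prompt):
    """Stand-in for a human decision; a real framework would queue this
    for a tester or release manager instead of deciding in code."""
    print(f"DECISION NEEDED: {prompt}")
    return "promote"  # this sketch assumes the reviewer approves

def run_pipeline(candidate):
    stage = "build"
    while stage:
        if stage == "build":
            print(f"{candidate}: building")            # static, predictable
            stage = "integrate"
        elif stage == "integrate":
            print(f"{candidate}: integration tests")   # still static
            stage = "manual_test"
        elif stage == "manual_test":
            # From here the flow is dynamic: a defect sends the candidate
            # back upstream; otherwise a judgment call decides promotion.
            defect_found = False  # would come from a tester in practice
            if defect_found:
                stage = "build"
            elif judgment_call(f"Promote {candidate} to staging?") == "promote":
                stage = "staging"
            else:
                stage = None      # rejected: the candidate stops here
        elif stage == "staging":
            print(f"{candidate}: deployed to staging")
            stage = None          # end of this sketch's pipeline

run_pipeline("RC-412")
```

Even this toy example has a loop and two decision points that a one-way build script cannot express, which is why orchestration past integration so often falls back to people.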
What’s Missing?
When it comes to alleviating the challenges associated with delivery pipelines, what will it take to ease the burden on continuous development teams?
- Detailed Inventory of Each Bundle-of-Change – As multiple revisions are merged together in downstream pipelines to form a release candidate, a clear understanding of exactly which changes have been incorporated into any particular change bundle becomes crucial.
- Current Deployed Environment – At any given time, any revision or release candidate can be deployed to one or more environments or locations. The ability to quickly determine which change bundles have been deployed to which environment(s) is a fundamental need of virtually every stakeholder along the delivery process.
- Broken Code Lines – Release candidates that contain known material defects are usually considered terminal and/or not deployable. Identifying a release candidate as broken can dramatically reduce the amount of time invested in a terminal candidate, since subsequent testing within environments that contain a broken release often leads to wasted effort.
- Detailed release documentation that fully describes the functional and technical contents of a specific release-in-transit along with all relevant data generated during each stage of delivery.
- Reporting that describes stalled releases awaiting manual intervention with corresponding wait times.
- Work queues that outline manual tasks or pending decisions/approvals assigned to a specific individual or group.
- Travel time details on the average time it takes a release to traverse a pipeline from start to finish.
- Reporting that describes the “base” release (starting version) and all subsequent changes that have flowed through any given development or test environment. (A minimal sketch of a release-candidate record covering several of these items follows this list.)
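Several of these items reduce to keeping one structured record per release candidate. The sketch below is a minimal illustration, assuming hypothetical field names and JIRA-style change identifiers; it tracks the change inventory, current deployments, broken status and travel time in one place.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ReleaseCandidate:
    """One record per bundle-of-change; all field names are illustrative."""
    name: str
    changes: list = field(default_factory=list)      # inventory of the bundle
    deployments: dict = field(default_factory=dict)  # environment -> deploy time
    broken: bool = False                             # known material defect?
    entered_pipeline: datetime = None
    released: datetime = None

    def travel_time(self):
        """Elapsed time from pipeline entry to release, once both are known."""
        if self.entered_pipeline and self.released:
            return self.released - self.entered_pipeline
        return None

rc = ReleaseCandidate("RC-412", entered_pipeline=datetime(2015, 1, 5, 9, 0))
rc.changes += ["JIRA-101: fix login timeout", "JIRA-117: add export format"]
rc.deployments["qa"] = datetime(2015, 1, 5, 10, 5)  # answers "what runs where?"
rc.released = datetime(2015, 1, 7, 16, 30)
print(rc.name, "took", rc.travel_time(), "to traverse the pipeline")
```

Nothing here is sophisticated; the gap in most delivery toolchains is simply that no single system owns such a record across the whole pipeline.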
What is the Answer?
Without a technology that can oversee the entire development process and fill these gaps, teams often try to leverage Continuous Integration (CI) tools. While these tools greatly reduce the effort required to automate the build/integrate phase of the continuous delivery pipeline, using them for end-to-end pipeline orchestration stretches them beyond their practical limits.
However, a new type of pipeline orchestration technology that further streamlines and advances the effectiveness of the CD process is emerging and can effectively alleviate the challenges associated with fragmented and siloed delivery pipelines. This technology, often called a “pipeline orchestration framework,” expedites movement and raises awareness of a software “release candidate” as it travels through one or more delivery pipelines.
So how do they work? Pipeline Orchestration Frameworks:
- Leverage existing tools and processes – Because Pipeline Orchestration Frameworks sit on top of existing delivery automation tools, there is no need to reconfigure current process flow or abandon existing tooling. Pipeline Orchestration Frameworks simply enhance and automate what is already in place.
- Provide audit trail – Comprehensive audit trail documentation of each part of the process delivers highly detailed and valuable information regarding each new potential release. Having this audit trail ensures higher production quality results, increases cross-team awareness and reduces the time and effort required to perform root-cause analysis when defects are found (pre/post-production).
- Manage deployments – Managing the cost and friction associated with dynamic development and testing environments can dramatically reduce the overall cost of delivery. Pipeline Orchestration Frameworks can automatically provision and/or refresh fully formed application environments as often as needed while keeping track of environment drift over time.
- Workflow automation – One of the goals of continuous delivery is to automate as much as possible along the delivery pipeline. However, there are often steps that can’t (or shouldn’t) be automated, such as code reviews, exploratory testing, and manual decisions to move a release into late-stage test environments or even production. Pipeline Orchestration Frameworks manage and orchestrate these manual workflows as efficiently as possible, ensuring all delivery team members are aware of items in their “work queue” and identifying stalled releases immediately, as the sketch below illustrates.
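As a minimal illustration of that last point, the sketch below models a work queue of manual tasks with simple stall detection. The task fields, assignees and one-hour threshold are assumptions made for the example, not any framework's actual API.

```python
from datetime import datetime, timedelta

# Hypothetical manual tasks awaiting a person or group; a real framework
# would create these automatically as releases reach manual stages.
work_queue = [
    {"task": "code review for RC-412", "assignee": "alice",
     "queued": datetime(2015, 1, 5, 9, 30)},
    {"task": "approve RC-409 for production", "assignee": "release-managers",
     "queued": datetime(2015, 1, 5, 6, 0)},
]

def stalled(queue, now, threshold=timedelta(hours=1)):
    """Surface items that have waited past the threshold, so stalled
    releases are identified immediately rather than discovered later."""
    return [item for item in queue if now - item["queued"] > threshold]

now = datetime(2015, 1, 5, 10, 0)
for item in work_queue:
    print(f"{item['assignee']}: {item['task']} (waiting {now - item['queued']})")
for item in stalled(work_queue, now):
    print(f"STALLED: {item['task']} (assigned to {item['assignee']})")
```

Run against the sample data, the 30-minute code review is simply listed, while the four-hour-old production approval is flagged as stalled; that distinction is exactly the visibility manual steps usually lack.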
What Will 2015 Hold?
The purpose of continuous delivery is to support development teams’ need to release reliable code faster. Therefore, the delivery team must keep up with an ever-increasing development velocity. Managing the delivery process requires not only managing each phase of delivery, but also orchestrating the delivery process as a whole. That, in turn, requires visibility into every part of the end-to-end process, and a 10,000-foot view of the delivery pipeline. Pipeline Orchestration Frameworks offer CD teams a powerful new tool for unlocking the full potential of continuous development.
About the Author
Dennis Ehle is the CEO and founder of cloud sidekick, which provides an automated, self-service integration and deployment framework for enterprises seeking continuous delivery. The cloud sidekick Velocity Pipeline Optimization Platform enables agile development teams to push code through multi-stage pipelines, automatically testing in virtual environments and ultimately publishing reliable code faster. For more information, visit www.cloudsidekick.com.