I often talk about how development teams ignore their pipeline. It is easy to focus on specific tools and processes, but by and large the pipeline that contains them is not treated as its own entity. Your entire process is something that should be managed, iterated on, and measured just like the code within it. While pipeline oversight is rare, some companies are doing it, and TASC is one. At the DevOps Enterprise Summit I was able to sit down with John Gildenzoph, a DevOps Engineer at TASC, to better understand how they took a holistic approach to adopting DevOps.
TASC is a large health benefits provider based out of Madison, Wisconsin. They have a technology team of about 100, including Developers, QA, and Ops. Their development supports both web and mobile applications: a mobile app that is a critical tool for customers to submit claims and manage their information, and a core web application that serves both customers and employees. After listening to John talk, I discovered that TASC found their path to DevOps by addressing the pipeline holistically.
The problem – an inefficient pipeline
Around 2011 TASC realized they were struggling with an outdated development process based on the waterfall methodology. According to John, ‘deployments were manual, inconsistent, and time consuming, and infrastructure was monolithic, static, and inconsistent.’ In addition to slow processes, there was no transparency into what was going on or how processes were executed. This resulted in an inability to scale, and did not support their objective of packaging and delivering apps faster and at higher quality.
The solution – automate the entire pipeline
They could have made some incremental changes to their processes to make them faster and more efficient, but this would ultimately have led them back to square one, with all the same issues to fix sooner or later. They needed a new approach, one that could adapt and scale as their needs changed over time, especially given the rate at which DevOps tooling is changing. John and his team realized that automating the entire pipeline was the only viable strategy.
“How do we move all of this stuff forward at the same time, not just the development cycle? We wanted to move everything forward together so we did not paint ourselves into a corner” – John Gildenzoph
It did not happen overnight. First they needed to know what they already had, and to start somewhere. The first step they took was to automate parts of their development, QA, and deployment cycles, beginning with build automation in Jenkins. With Jenkins the de facto standard in build automation, they saw no reason to reinvent the wheel with a homegrown tool. These quick wins gave them a starting point and set them up for bigger wins down the road.
The next step was to automate the entire pipeline. They realized that their needs would change over time, requiring new tools frequently. The same tools may not always be suited for the job, and new tools may rise to replace existing ones. Because single do-it-all tools tend to overpromise and underdeliver, building for the future required a ‘rip & replace’ approach to the tools that make up the pipeline. This being the case, TASC wanted to take a long-term approach to managing and automating their pipeline. To achieve this they needed a pipeline orchestrator, which would make it easy to add and remove components from the pipeline as needed and give them a holistic view.
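The ‘rip & replace’ idea behind an orchestrator can be sketched in a few lines: the pipeline is an ordered set of pluggable stages, and any one stage can be swapped out without touching the others. This is only an illustration of the concept, not Automic’s actual API; all names here are hypothetical.

```python
# Minimal sketch of a pipeline orchestrator. Stages are pluggable
# components that can be added or swapped without touching the rest
# of the pipeline. All names are hypothetical.

class Pipeline:
    def __init__(self):
        self._stages = []  # ordered list of (name, callable) pairs

    def add_stage(self, name, action):
        self._stages.append((name, action))

    def replace_stage(self, name, action):
        # Swap one stage in place -- the "rip & replace" idea.
        self._stages = [(n, action if n == name else a)
                        for n, a in self._stages]

    def run(self, artifact):
        # Run every stage in order, passing the artifact through.
        for name, action in self._stages:
            artifact = action(artifact)
        return artifact


pipeline = Pipeline()
pipeline.add_stage("build", lambda log: log + ["built"])
pipeline.add_stage("test", lambda log: log + ["tested"])
pipeline.add_stage("deploy", lambda log: log + ["deployed"])

print(pipeline.run([]))  # ['built', 'tested', 'deployed']

# Later, the deploy tool can be replaced without touching build or test:
pipeline.replace_stage("deploy", lambda log: log + ["deployed-v2"])
print(pipeline.run([]))  # ['built', 'tested', 'deployed-v2']
```

The point of the sketch is that the orchestrator, not any individual tool, owns the shape of the pipeline, so no single tool becomes a hard dependency.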
“we can re-evaluate … we have no real dependency on a tool. We just made a move from SVN to Git. I only had to modify a handful of SVN commands with Git and I was done.”
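John’s ‘handful of SVN commands’ swap can be illustrated with a thin abstraction layer: the pipeline calls generic version-control operations, and only a small table of command mappings changes when moving from SVN to Git. This is a hypothetical sketch of the idea, not TASC’s actual workflow code.

```python
# Hypothetical sketch: the pipeline depends on generic VCS operations,
# so swapping SVN for Git means changing a handful of command mappings,
# not the workflows that call them.

SVN_COMMANDS = {
    "checkout": ["svn", "checkout"],
    "update":   ["svn", "update"],
    "tag":      ["svn", "copy"],
}

GIT_COMMANDS = {
    "checkout": ["git", "clone"],
    "update":   ["git", "pull"],
    "tag":      ["git", "tag"],
}

class VersionControl:
    """Generic front end the rest of the pipeline talks to."""

    def __init__(self, commands):
        self.commands = commands

    def command_for(self, operation, *args):
        # Build the concrete command line for a generic operation.
        return self.commands[operation] + list(args)


# Swapping backends is a one-line change for the whole pipeline:
vcs = VersionControl(GIT_COMMANDS)
print(vcs.command_for("update"))  # ['git', 'pull']
```

Every workflow that calls `command_for("update")` keeps working unchanged; only the mapping table was replaced.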
To orchestrate the entire pipeline, they opted to use Automic Release Automation, an enterprise-wide orchestrator. It was a tool they already owned but were not fully leveraging – “we basically used it like Cron to move files around,” John explained. The new implementation of Automic not only abstracted their existing production deployment but also made interfacing with and executing that deployment process simple enough for non-operations users to handle. It also enabled automated deployment to production environments that required sophisticated workflows.
While Automic could in some cases be seen as a competitor to Jenkins, in their environment support for existing business applications was important, an area where Automic apparently excels, and it provided a greater breadth than Jenkins alone could offer.
In addition to Automic, they are not afraid to leverage other tools. They incorporate many tools for automation, but also for governance, such as Sonatype Nexus, a component and artifact repository. Nexus improved how they packaged and delivered artifacts and simplified the consistent deployment of those artifacts. They also adopted Docker to get closer to full-stack deployments and to support their microservices setup of around 20 services.
The outcome – faster delivery, higher quality
The automation itself was not the only benefit. TASC found that automation is a great way to drive process change through the organization. Taking the time to examine the current process and ask the hard questions about why they were doing certain things brought a lot to light; in the course of automating, they improved the core processes themselves.
The new orchestration engine led to increased visibility into development, a better QA process, and more stable infrastructure. Easy workflow monitoring meant that even business stakeholders could now gain visibility into the pipeline. All of this combined to drastically reduce deployment times to less than an hour, with zero downtime.
The only way to break the cycle of inefficiencies in a software delivery pipeline is to automate every cycle within it – Dev, QA, and Ops. This can’t be a half-hearted or partial effort; it needs all hands working together to overhaul every section of the pipeline. While this may sound daunting, the payoff is worth every ounce of energy spent on it.
Adopting a pipeline orchestration tool like Automic that allows you to ‘rip & replace’ tooling whenever you see fit might not be an immediate need, but it helps sustain the environment in the long term and ensures you can adapt to whatever comes. This takes your focus off the tools and puts it where it belongs: on your software delivery strategy. By doing so, it becomes easier to adopt DevOps and enhance your pipeline in the future.
TASC made the transition to modern development without ever calling it ‘DevOps’. This goes to show that as long as you’re concerned with delivering higher quality software faster, a DevOps-like methodology is where you’ll naturally land. Pipeline orchestration might be a new concept, but from an execution standpoint it is not new at all. It has been executed successfully even in large environments, and it is the easiest way to make sure your pipeline supports current and future development.