Continuous delivery is an illusion. That doesn’t make it insubstantial—illusions can have a tremendous impact. Consider one of the first motion pictures: legend has it that the oncoming locomotive felt so real to early audiences that they leapt from their seats, screaming. But of course, the locomotive was an illusion, merely rushing past on screen.
A good magician, like a good inventor, will tell you that the secret to a powerful illusion is hard work on top of a good idea. Indeed, good magicians and inspired technologists have a lot in common. Each is always on the hunt for the next big insight, the transformative angle, building on existing industry techniques to create something even greater. But each knows the secret to truly wowing their audience, whether for a one-night-only show, or an ongoing relationship. What really gets the job done is the vast, complicated, hard-won machinery ticking away behind the scenes.
Today’s No. 1 magic trick for making great applications, a directive now pervasive across every industry, is continuous delivery. Right now, continuous delivery provides that wow. It takes app updates from a customer stumbling block to a seamless benefit. It takes feature delivery from “maybe in a year” to “is tomorrow soon enough?” It allows Dev teams to wave their magic wands and transform the impossible into the effortless.
And, of course, continuous delivery is an illusion. Remember that movie locomotive, leaping out of the screen? Every instant of its motion was its own little frame of celluloid. And the true magic was the process that bound each and every frame together, delivering them onto the screen with perfect timing.
Continuous delivery is the very same sort of illusion. The steady stream of new features, new capabilities and bug fixes must be created one by one by a vast army of talented developers—not unlike the thousands of talented stencilers who first brought color to motion pictures. And the true heart of the process is the automated systems that support this army, and which ensure the coordinated delivery of its products to the end user.
The heart of continuous delivery is the machinery of automated testing and deployment. The approach behind continuous delivery encourages teams to produce software in short cycles, ensuring that the software can be reliably released at any time. For these releases to be reliable, continuous testing is integrated into the software delivery pipeline, providing immediate feedback on any bugs a change would introduce. Then, once a proposed change has passed the automated tests, it is deployed to production automatically. This software development life cycle (SDLC) allows companies to achieve the business goal of truly responsive development: the team sees an opportunity, confirms the solution and then delivers that solution to customers extremely rapidly.
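The gate at the center of that loop—deploy automatically, but only when every automated test is green—can be sketched in a few lines of Python. The stage functions and change records below are illustrative stand-ins, not any particular CI tool's API:

```python
# A minimal sketch of a continuous delivery gate: a change reaches
# production only if the whole automated suite passes. The "change"
# dicts and stage functions are stand-ins for real pipeline tooling.

def run_automated_tests(change):
    """Run the regression suite against a proposed change.
    Returns the names of failing tests (empty list means green)."""
    return [name for name, passed in change["test_results"].items() if not passed]

def deliver(change):
    """Deploy automatically only when the pipeline is green."""
    failures = run_automated_tests(change)
    if failures:
        return f"blocked: failing tests: {', '.join(failures)}"
    return f"deployed {change['id']} to production"

good = {"id": "feature-42", "test_results": {"login": True, "checkout": True}}
bad = {"id": "feature-43", "test_results": {"login": True, "checkout": False}}

print(deliver(good))  # deployed feature-42 to production
print(deliver(bad))   # blocked: failing tests: checkout
```

The point of the sketch is the shape of the decision, not the implementation: the deploy step never runs on human judgment, only on test results.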
Each stage of continuous delivery—develop, test, deploy—has its own technical challenges, but the testing step in particular poses a unique bottleneck.
In the development stage, the primary resource developers require is code. Code is relatively lightweight, and can be managed through industry-standard tools such as GitHub. Developers need their laptops, but in general code management poses a limited operations support challenge.
The deployment stage can be more challenging for Ops. The newest version of the code must be deployed to a wide population of final platforms, which may vary in hardware, software, location, permissions and so on. However, the target systems are at least systems already in ordinary use: the server that hosts the new page and the smartphone that installs the application update are pre-existing systems, managed by their users. So Ops teams are not required to stand up specialized hardware to deploy. Furthermore, in the past few years a number of innovative technologies, such as Ansible and Docker, have emerged to simplify this process further.
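To give a flavor of that simplification: a container image pins the application and its runtime together, so the same artifact deploys identically to every target. A minimal, purely illustrative Dockerfile (the filenames and entry point are assumptions, not from any real project) looks like this:

```dockerfile
# Illustrative only: package an application and its runtime dependencies
# into a single image, so every environment runs identical bits.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Once an image like this is built and tested, deployment reduces to shipping and starting the same image everywhere, which is exactly the uniformity the paragraph above describes.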
Testing, on the other hand, combines the challenges of both of these stages. Continuous testing requires dedicated resources for developers and QA. But these resources are full-scale application environments, including not just code but data, operating systems and even appropriate hardware. And because best practices encourage developers to write new tests for all new code, the amount of regression testing required for every code change can only grow. That means that testing must grow ever more parallel, or suffer increasing delays—and any delay in testing can undermine overall continuous delivery.
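The pressure toward parallelism can be seen in miniature with Python's standard library: when independent tests run concurrently, wall-clock time tracks the slowest test rather than the sum of all of them. The test bodies here are placeholder sleeps standing in for real test work:

```python
# Sketch: run independent regression tests in parallel so total
# wall-clock time approaches the slowest single test, not the sum.
import time
from concurrent.futures import ThreadPoolExecutor

def make_test(name, seconds):
    def test():
        time.sleep(seconds)  # stand-in for real test execution
        return name, True
    return test

# Eight tests of 0.1s each: ~0.8s serially, ~0.1s with 8 workers.
suite = [make_test(f"test_{i}", 0.1) for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(test) for test in suite]
    results = [f.result() for f in futures]
elapsed = time.perf_counter() - start

print(f"{len(results)} tests passed in {elapsed:.2f}s")
```

Of course, each parallel worker needs its own full environment—code, data and all—which is precisely why test environments, not code, become the bottleneck.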
To preserve the Continuous Illusion, a development team needs on-demand access to scalable testing environments, including their data. Data in particular is often overlooked in test environment creation, but it must be delivered to every test instance. That means test data management (TDM) is an inescapable part of any successful continuous strategy.
TDM is relatively self-explanatory: it refers to the process of creating, managing and delivering production (or production-like) data to non-production data environments. Every organization does some TDM, though many do not consider it a separate practice. But to achieve the sort of seamless efficiency that can support the Continuous Illusion, teams must adopt a modern TDM practice.
Legacy TDM relies on multi-stage, manual delivery chains. Developers and testers file tickets, and then they wait. Storage admins, DBAs, backup admins, security personnel and other staff each have to spend a huge portion of their time servicing the ever-growing stream of tickets, handing off environment creation from one function to another. Meanwhile, developers and testers are left in the lurch, and continuous delivery grinds to a halt.
Once, this process might have been unfortunate but necessary, imposed by technological restrictions. But modern solutions can tackle these challenges and overcome them. Too often, teams with poor TDM have simply neglected to implement the solutions that they need.
The most effective TDM practices will use a data management solution on par with other continuous delivery pipeline tools. The TDM solution will be able to stand up data environments in minutes and at massively parallel scale. It will rely on production data and automatic security, rather than synthetic data generation or data subsets (both of which require substantial upfront planning and cannot hit the full suite of edge cases). It will integrate readily with existing continuous delivery workflow tools, allowing data to be automatically provided whenever a test job runs.
There are solutions on the market today that meet these criteria. Adopting one is an opportunity to tighten test cycles and deliver code faster—regardless of how fast your software timelines are today.
To deliver the Continuous Illusion, every component of the continuous delivery pipeline must be operating at peak efficiency. A powerful TDM solution is a key requirement for allowing your customers to sit back, relax, and enjoy the magic of your product.
About the Author / Louis Evans
Louis Evans is a DevOps evangelist with Delphix. He’s excited about the potential for superior TDM to enhance overall software quality. He is a huge Alan Turing fan, and partial to a well-placed semicolon.