Continuous delivery refers to developing software in short, ongoing build-test-release cycles along the deployment pipeline. Because processes and scripts are tested repeatedly before deployment to production, most errors are discovered early and have minimal impact. With fewer changes per release, finding and fixing those errors is also much easier. The result: software is released faster and more frequently, with fewer deployment problems, thanks to a heavy focus on visibility, instant feedback and incremental changes.
To accomplish this, companies must have a straightforward and repeatable deployment process. Most reliable, of course, is an automated deployment pipeline that can feed into any environment.
We have identified seven continuous delivery best practices that are supported by a set of complementary task-specific tools, in what’s known as the “DevOps toolchain,” to automate the process from end to end and minimize human error.
Version Control Everything
The native code deployment process generally includes a built-in safety net that prevents out-of-process, locally generated artifacts from entering the production environment. Every change a developer makes must be documented in the source control repository or it is not included in the build process. If a developer copies a locally generated artifact to a test environment, the next deployment will simply overwrite this out-of-process change.
This insistence on having a single, reliable source of truth creates a stable foundation for all development processes. However, those processes are only as strong as their weakest link. That's why the primary golden rule for effective continuous delivery is to version-control everything. What works well for code will also preserve the integrity of configuration, scripts, databases, website HTML and even documentation.
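As a rough sketch of what "everything" means in practice, a repository might keep configuration, deployment scripts, database change scripts and documentation under the same version control as the code. The layout and file names below are illustrative assumptions, not a prescribed structure:

```shell
# Sketch: one repository tracks every artifact a release depends on.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email ci@example.com   # local identity, just for this sketch
git config user.name  CI
mkdir -p src config scripts db docs
echo 'print("app")'        > src/app.py           # application code
echo 'max_connections=100' > config/app.conf      # environment configuration
echo 'echo deploying'      > scripts/deploy.sh    # deployment script
echo 'CREATE TABLE t(i);'  > db/001_init.sql      # database change script
echo '# Runbook'           > docs/runbook.md      # documentation
git add -A
git commit -qm "single source of truth: code, config, scripts, schema, docs"
git ls-files | wc -l   # every one of the five artifacts is tracked
```

Anything not visible in `git ls-files` here simply does not exist as far as the build process is concerned, which is exactly the safety net described above.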
Build Binaries Once
Not every organization’s build process is the same or uses the same tools, but every build process in a single organization must be completely consistent. Whether the build is a single file deployed to an automated test environment or a complex build with several different possible deployment versions, each build version should happen exactly the same way and result in unique binary artifacts.
This one-time-only compiling eliminates the risk of untracked differences due to various deployment environments, third-party libraries or different compilation contexts or configurations that will result in an unstable or unpredictable release. Save the compilation phase output (the binaries) to a binary repository, from which the deployment process can retrieve the relevant artifacts.
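A minimal sketch of that flow, assuming a file-based binary repository and `sha256sum` for fingerprinting (the artifact name and version number are made up): the artifact is built once, published with its checksum, and every later deployment retrieves and verifies the same bytes rather than rebuilding.

```shell
# Sketch: compile once, store the binary plus checksum, promote that exact artifact.
set -eu
work=$(mktemp -d)
cd "$work"
mkdir -p build repo deploy

printf 'hello build\n' > build/app.bin            # stand-in for real compiler output
ver=1.4.2                                         # hypothetical version number
sha=$(sha256sum build/app.bin | cut -d' ' -f1)

cp build/app.bin "repo/app-$ver.bin"              # publish to the binary repository
echo "$sha" > "repo/app-$ver.bin.sha256"          # record its fingerprint

# Later, every environment deploys the same bytes -- verified, never rebuilt.
cp "repo/app-$ver.bin" deploy/app.bin
echo "$(cat "repo/app-$ver.bin.sha256")  deploy/app.bin" | sha256sum -c -
```

The checksum verification is what makes the promise enforceable: if anything rebuilt or altered the artifact between environments, `sha256sum -c` fails and the deployment stops.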
Deploy the Same Way Every Time
An inconsistent deployment process can become a source of configuration drift across environments, especially with the rapid releases common in a continuous delivery system. As a result, valuable time and effort are wasted in identifying and addressing difficulties arising from the application working in one environment and not another.
For a reliable process in all environments, then, the same set of steps must be repeated from start to finish. This ensures the same results in lower environments (integration, QA, etc.), where deployments are frequent, as in higher environments (pre-production and production), where deployments are fewer.
Smoke Test Your Deployments
Smoke testing deployments is a very rapid way to make sure that the most crucial functions of a program work and that it passes basic diagnostics. This non-exhaustive testing of all major elements (services, database, messaging bus, external services, etc.) does not provide the fine-grained comprehensiveness of full test suites; however, it can be run frequently and quickly, often in a matter of minutes rather than hours or days. This allows a much quicker turnaround on what can otherwise become very time-consuming basic issues.
The faster the feedback loop works (especially with automated test tools such as Telerik, QTP, TestComplete), the higher the final product’s quality will be and the quicker it will reach its release-ready state.
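A smoke suite can be as simple as a list of fast checks that fail loudly. In this sketch the checks are placeholder shell commands; in a real pipeline they would be health-endpoint probes, database pings and message-queue checks:

```shell
# Sketch of a post-deployment smoke test: a handful of fast, critical checks,
# reporting every failure and returning the failure count as the exit status.
set -u
smoke() {
  failures=0
  for check in "$@"; do
    if sh -c "$check" >/dev/null 2>&1; then
      echo "PASS: $check"
    else
      echo "FAIL: $check"
      failures=$((failures + 1))
    fi
  done
  return "$failures"
}

# Placeholder checks; real ones would be e.g. curl probes and DB pings.
smoke "true" "echo service responds" "test -d /"
echo "exit=$?"
```

A nonzero exit status is the feedback loop: the pipeline sees it immediately and can halt the release minutes after deployment instead of hours later.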
Deploy into a Production-like Environment
Each new deployment (using, for example, IBM uDeploy, CA Release Automation, XebiaLabs, Automic) should be made into an environment that mimics as closely as possible the actual final production environment. This includes infrastructure, operating system, databases, patches, network topology, firewalls and configuration. By validating software changes in this type of detailed pre-production environment, mismatches and last-minute surprises can be effectively eliminated and applications can be released safely to production at any time.
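One lightweight way to catch such mismatches is to keep a versioned manifest of each environment's infrastructure and diff it against production's before release. The manifest format and field names below are assumptions for illustration:

```shell
# Sketch: detect drift between pre-production and production by diffing
# version-controlled environment manifests.
set -eu
cd "$(mktemp -d)"
cat > preprod.manifest <<'EOF'
os=ubuntu-22.04
db=postgres-15.4
patch=2024-06
firewall=ruleset-v9
EOF
cp preprod.manifest production.manifest   # a faithful mirror diffs clean

if diff -u preprod.manifest production.manifest; then
  echo "environments match"
else
  echo "drift detected -- fix before release" >&2
fi
```

When the manifests live in source control, drift shows up as a reviewable diff rather than a last-minute surprise in production.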
Automate the Delivery Pipeline
Continuous delivery by definition must be an ongoing process of building, testing and releasing. Every check-in to a source control repository should trigger compiling (if needed) and packaging by a build server. This, in turn, should automatically initiate certain predefined testing until the software can either be marked as releasable or returned to development.
Continuous integration tools (e.g., Jenkins/CloudBees, Bamboo, TeamCity) ensure that the cascade of actions along the delivery pipeline is consistent, automatic and instant. Thus, development teams merge developed code and get feedback from automated test tools many times a day, for a more efficient deployment and update process.
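The trigger logic can be sketched as a gate that runs the same stages on every check-in and marks the outcome; the stage commands here are stand-ins for real compile, package and test steps:

```shell
# Sketch: every check-in runs the same automated gate; a failing stage marks
# the build as returned to development instead of releasable.
set -eu
run_stage() {
  echo "stage: $1"
  sh -c "$2"       # placeholder for the real stage command
}
gate() {
  if run_stage compile "true" &&
     run_stage package "true" &&
     run_stage test    "$1"; then
    echo "releasable"
  else
    echo "returned-to-development"
  fi
}

gate true    # all stages pass: the build is releasable
gate false   # failing tests block the release
```

In a real setup the gate is invoked by the CI server's check-in hook rather than by hand, so no human has to remember to run it.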
Include the Database
No list of continuous delivery best practices would be complete without advising that the database be managed using the same basic protocols that ensure secure and reliable source code, task, configuration, build and deployment management. However, this is often neglected because the database, unlike other software components or compiled code, cannot simply be copied from development to testing to production. Yet the database is a repository of our most valued and irreplaceable asset, the data, and preserving it accurately is imperative to continuous delivery.
Although the database poses several unique challenges, specialized database tools such as enforced database source control (for all environments), a database build automation tool and database release and verification processes can ensure a stable resource in your DevOps chain.
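A common pattern, sketched here with a plain file standing in for a real schema-version table, is a runner that applies numbered, version-controlled migration scripts in order, exactly once, and records what ran:

```shell
# Sketch of database change management: ordered migration scripts under
# source control, each applied once, with an audit trail of what ran.
set -eu
cd "$(mktemp -d)"
mkdir -p migrations
echo 'CREATE TABLE customers (id INT);'      > migrations/001_create_customers.sql
echo 'ALTER TABLE customers ADD email TEXT;' > migrations/002_add_email.sql
touch applied.log   # file-based stand-in for a schema_version table

migrate() {
  for script in migrations/*.sql; do
    if ! grep -qx "$script" applied.log; then
      echo "applying $script"        # a real runner would execute it against the DB
      echo "$script" >> applied.log  # record it so it never runs twice
    fi
  done
}

migrate   # applies 001, then 002
migrate   # second run is a no-op: both scripts already recorded
wc -l < applied.log
```

Because the scripts are ordered and the runner is idempotent, every environment's database converges on the same schema, mirroring how the build pipeline treats code.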
Adopting these seven continuous delivery best practices will allow you to create a deployment pipeline with increased productivity, faster time to market, reduced risk and higher quality.
The key to quick, robust and accurately repeatable continuous delivery is, as we have seen, automation: deployment consistency, comprehensive testing and database version control. With these fail-safes in place, the “release to production” button can be pushed much more often and with much more confidence.
About the Author / Yaniv Yehuda
Yaniv Yehuda is the co-founder and CTO of DBmaestro, an enterprise software development company focusing on database development and deployment technologies. Yaniv is a DevOps expert who spent the last couple of years raising awareness about the challenges surrounding database development and deployment, and how to support database continuous delivery. Connect with him on LinkedIn and Twitter.