The Continuous Deployment model discussed below requires some very specific properties from its platform substrate. Currently, AWS is the only environment that supports all the features needed to implement it. So when we get into implementation details, in this and future articles, we will be targeting AWS.
The title of this article is actually reversed from what it should be. This article isn't so much about using reproducible Composable Environments for Continuous Delivery (CD) as it is about using Continuous Delivery to build reproducible Composable Environments. The Composable Environment is the important piece. Why? To understand that, we need to understand what such an environment is.

An Environment is a self-contained, full software stack that includes every component needed to support the stack, along with a version-controllable description of that stack. For example, an Environment for a website might be all of the machines that run the HTTP servers, other machines that run the RESTful web servers, databases, AMQP providers, etc. It would also include any infrastructure required for auto scaling, fault recovery, and so on. This Environment would be completely described, in a declarative way, by one or more files that are under version control. That description contains every virtual machine image, database schema, DNS server, DNS entry, etc. needed to stand up the system in a self-contained, replicable form. The description is versioned in a sane way and is reproducible through the execution of a single command.

A good example of this concept is CloudFormation Stacks. Stacks declaratively describe a full environment, and the CloudFormation infrastructure takes those stacks and, on request, vivifies them into full running systems. Our concept of environment takes that further with ideas around isolation of environments and accessibility via DNS, but these examples provide a good seed for the concept we are describing.
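To make this concrete, here is a minimal sketch of what such a declarative description might look like: a CloudFormation-style template built as plain Python data. The AMI id, domain names, and resource names are all hypothetical; this is an illustration of the shape of a declarative environment, not a production template.

```python
import json

def minimal_environment_template():
    """Build a minimal CloudFormation-style template describing one
    web server and the DNS record that exposes it. All resource
    names and property values here are illustrative."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "A tiny self-contained environment: one web server plus DNS.",
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": "ami-00000000",   # hypothetical AMI id
                    "InstanceType": "t2.micro",
                },
            },
            "WebDns": {
                "Type": "AWS::Route53::RecordSet",
                "Properties": {
                    "HostedZoneName": "example.com.",
                    "Name": "web.example.com.",
                    "Type": "A",
                    "TTL": "300",
                    "ResourceRecords": [{"Fn::GetAtt": ["WebServer", "PublicIp"]}],
                },
            },
        },
    }

# The description is plain, serializable data, so it can be diffed,
# version controlled, and vivified with a single command
# (e.g. `aws cloudformation create-stack`).
template_json = json.dumps(minimal_environment_template(), indent=2)
```

Because the whole environment is one piece of data, "reproducible" stops being an aspiration and becomes a property of the file.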
Reproducible Composable Environments are a simple concept that has profound implications. A few of the most important are:
- A completely reproducible environment is self-contained. Multiple instances of that environment don't interfere with one another.
- A declaratively described environment can be changed algorithmically. That is, it can be easily changed via some automated process.
- An environment that can be deployed algorithmically can also be un-deployed algorithmically.
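These three properties can be sketched in a few lines of plain Python. The `Environment` class below is a toy model, not a real provisioning API: isolation comes from each instance holding its own copy of the description, algorithmic change derives a new description rather than mutating one, and deploy/un-deploy are simple, symmetric operations.

```python
import copy

class Environment:
    """Toy model of a reproducible, composable environment. The
    description is plain data; deploy and un-deploy are idempotent
    operations driven entirely by that data. (All names illustrative.)"""

    def __init__(self, name, description):
        self.name = name
        # Self-contained copy: instances never share mutable state.
        self.description = copy.deepcopy(description)
        self.deployed = False

    def deploy(self):
        # Idempotent: deploying twice is the same as deploying once.
        self.deployed = True
        return self

    def undeploy(self):
        self.deployed = False
        return self

    def with_change(self, key, value):
        # Algorithmic change: derive a new description, don't mutate in place.
        desc = copy.deepcopy(self.description)
        desc[key] = value
        return Environment(self.name + "-modified", desc)

# Two instances of the same description do not interfere with one another.
base = {"web_servers": 2, "db": "postgres"}
env_a = Environment("ci-run-1", base).deploy()
env_b = Environment("ci-run-2", base)
env_c = env_a.with_change("web_servers", 4)   # env_a is untouched
```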
Once we have created an environment with these properties, we can use it to accomplish things that were not previously possible. It opens up new opportunities through a combination of Testability, Composability and Deployability. You might notice that this sounds like the move towards Structured Programming that happened in the programming community in the 1960s. That transition had a profound and continuing impact on how we create software systems, and I think this approach to DevOps will have a similar impact on how we create and manage environments.
Why Environments are Important
These environments are useful throughout the software development process. Start with the individual developer writing and iteratively testing the system. Having such an environment allows him to truly test the changes that he is making by standing up a full environment, including all its participating parts, and testing the system in context. The next big place for using such a system is during Continuous Integration, henceforth CI. At the end of the build cycle, the CI system creates an environment, publishes the changes into that environment, stands up the complete system, and runs the full test suite for the entire system. The ability to stand up the full system in context massively reduces the risk of rolling out changes. Being able to roll out, as needed, an environment with updated components in an ad-hoc or automated manner improves the testability of our system.
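The tail of such a CI cycle might be sketched like this. `ci_post_build` and its arguments are hypothetical names, and the "environment" is just a dict standing in for real infrastructure; the point is the shape of the flow: stand up a throw-away copy of the full system, test it in context, then un-deploy it no matter what happened.

```python
def ci_post_build(build_artifacts, run_tests):
    """Sketch of the end of a CI cycle: stand up a disposable copy of
    the full environment, run the whole test suite in context, then
    tear the environment back down. `run_tests` is any callable that
    takes the environment and returns True/False."""
    env = {"artifacts": list(build_artifacts), "running": False}
    try:
        env["running"] = True     # stand up the complete system
        return run_tests(env)     # full test suite, against the real topology
    finally:
        env["running"] = False    # environments un-deploy as easily as they deploy

passed = ci_post_build(
    ["web.pkg", "api.pkg"],
    lambda env: env["running"] and len(env["artifacts"]) == 2,
)
```

The `try/finally` is the interesting part: because un-deploy is cheap and algorithmic, the CI system can afford to create and destroy a full environment on every build.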
Automated processes are the correct method to build and modify environments, and are required for Continuous Delivery.
I define Continuous Delivery as an idempotent, automated process that is primarily designed to update a reproducible Composable Environment with new, well-tested artifacts.
We can think of Continuous Delivery as a process with a set of well-defined inputs and a set of well-defined outputs. The process itself is composed of a set of stages, each of which has its own inputs and outputs. This allows us to compose stages, and even whole CD processes, into different processes that serve new purposes.
For the simplest CD process, the main input is source code and the main outputs are the services that are stood up as part of that system. To be more specific, the outputs are a set of endpoints to which you or your customers connect. In examples I will provide, these will take the form of DNS entries. Knowing the inputs and the outputs, we can now start to think about the process in between.
The Stages of CD
The standard CD pipeline is composed of five stages: Develop, Test, Publish, Deploy, and Promotion.
The Develop stage is essentially the individual developer's development process. Its only input is the output of the CD process that automatically creates and provisions the Development Box as well as the related environment. This is the first example of a CD input being the output of another CD process.
The reason the Development Box is necessary is so that the developer can test his work in an environment that exactly matches that of the CI server. It should also exactly match the system that every other developer is working on. This is critical, and often overlooked. Currently, developers rarely write code in development environments that match each other's, let alone ones that match the CI server.
The Test stage is where we take the artifacts produced in the Develop stage and test their viability in context. The output of the test phase is usually a more complete set of artifacts published into the environments.
This stage usually takes the form of taking developer code, compiling it, turning it into some package-like structure, and publishing it to an environment-specific known location. The structure is usually an actual package: an LXC/Jail-style container such as a Docker image, a virtual machine image such as an EC2 AMI, or some combination of these.
During the test phase, the target environment is stood up with the new artifacts, and a series of integration/release tests are run. Tests reveal the viability of the changes within the context of the environment as a whole.
The Publish stage takes the output of the Test phase and publishes those artifacts in a way that makes them available to the Deploy stage. This usually involves putting the packages in a publicly accessible place.
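What an "environment-specific known location" looks like is mostly a naming convention. The sketch below shows one possible convention for laying out published packages in a bucket; the layout is an assumption of this example, not an AWS requirement.

```python
def publish_location(bucket, environment, package, version):
    """Build the environment-specific, publicly accessible location a
    package is published to. Embedding the environment name in the path
    keeps each environment's artifacts isolated from the others.
    (Bucket layout is an assumed convention of this sketch.)"""
    return "s3://{b}/{e}/{p}/{v}/{p}-{v}.tar.gz".format(
        b=bucket, e=environment, p=package, v=version
    )

url = publish_location("release-artifacts", "beta", "web-server", "1.4.2")
```

Because the location is a pure function of (environment, package, version), the Deploy stage can reconstruct it without any out-of-band coordination with the Publish stage.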
You may have noticed our heavy reliance on some packaging model. That is intentional. You need to turn your system into a manageable unit in order to make it work correctly as part of the process.
The final stage is Deploy. This is where the published packages are used as inputs to update the environment. The output of this stage is the canonical environment updated and vivified with the newly published artifacts. The running systems, provisioned databases, accessible URLs, etc are all updated in place.
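As a sketch, the deploy step reduces to building an update request against the environment's canonical description. The payload below follows the shape that CloudFormation's `UpdateStack` API expects (`StackName`, `TemplateBody`, and a list of `ParameterKey`/`ParameterValue` pairs), but the stack name, template, and parameter-naming convention are illustrative.

```python
def update_stack_request(stack_name, template_body, package_urls):
    """Build a request that updates a running environment in place with
    newly published artifacts, shaped like CloudFormation's UpdateStack
    input. The PackageUrlN parameter convention is an assumption of
    this sketch."""
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Parameters": [
            {"ParameterKey": "PackageUrl%d" % i, "ParameterValue": url}
            for i, url in enumerate(package_urls, start=1)
        ],
    }

req = update_stack_request("beta-env", "{...}", ["s3://repo/web-1.4.2.tar.gz"])
# With boto3, this would be handed to:
#   boto3.client("cloudformation").update_stack(**req)
```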
Finally, we have the optional Promotion stage. This simply involves moving the artifacts output by the Deploy stage to a new environment. A common example of this is the promotion of a beta environment to production, where some secondary process has validated the environment, making it available to consumers.
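Promotion can be as small as repointing a stable DNS name at the validated environment. The sketch below builds a Route 53 change batch in the shape `ChangeResourceRecordSets` expects; the domain names and TTL are illustrative.

```python
def promotion_change_batch(alias, target):
    """Build a Route 53 change batch that promotes an environment by
    repointing a stable CNAME (e.g. the production name) at the
    validated environment's endpoint. Domain names are illustrative."""
    return {
        "Comment": "Promote %s to %s" % (target, alias),
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": alias,
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{"Value": target}],
            },
        }],
    }

batch = promotion_change_batch("www.example.com.", "beta-42.example.com.")
# With boto3, this would be handed to:
#   route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch)
```

Notice that nothing is rebuilt or redeployed: the validated environment already exists, and promotion is just a change to which DNS name points at it.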
Overcoming Technological Constraints
I am not going to lie to you. It is non-trivial to get started with this process. Up until a very few years ago, I would have said it was impossible. But the world has changed, and it is now possible, even approachable. There are some prerequisites to getting it working: you must have the ability to programmatically configure your network, configure your DNS, provision hardware, etc. You can accomplish this with a mix of new technologies like OpenStack, OpenFlow, and customized DNS, among other things. Most of us don't have the ability to implement those technologies in our data centers. Fortunately, we have AWS. Between EC2, Route 53, and especially CloudFormation, we have nearly everything needed to make this happen. Over the next few articles, we are going to take the model described here and apply it to a production system. At the series' conclusion, you will be able to apply this model in your own AWS-based environments. You should still be able to adapt and apply parts of this model in non-AWS environments as well.
And my team will keep implementing and refining our ideas for composable environments for continuous delivery, working towards a more complete solution. Stay tuned!