Gartner predicts that 25% of Global 2000 companies will be actively implementing DevOps methodologies in their organizations by 2016. As DevOps goes mainstream, it may be time to rethink some of the implementation techniques that have worked in small companies and in non-production settings. The assumptions commonly made to get a continuous integration process working generally do not hold in real-world production datacenters. These assumptions include:
- Dedicated resources are available for separate dev and test environments
- Pools of virtual machines are sufficient for executing regression and QA tests
- Testing environments mimic production environments
- Tool chaining is sufficient to implement DevOps workflows
Let’s consider each of the above assumptions and how they impact success in real production environments.
Dedicated resources for separate dev and test environments
In any large organization, it is prohibitively expensive to maintain separate, dedicated infrastructure for development and test environments. In our experience, any organization with more than five or so development teams needs a better way to manage and share development, test and QA infrastructure. This means teams cannot simply trigger regression tests directly from build completions; they must first ensure they have access to the resources needed for test execution. We believe that DevOps tools also need to incorporate reservation and scheduling capability for shared and constrained physical and virtual infrastructure.
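As a hedged illustration of that last point, here is a minimal Python sketch of a reservation step gating regression tests. The ResourcePool and Reservation classes are illustrative in-memory stand-ins for a real lab-management system, not any specific product's API.

```python
import time
from dataclasses import dataclass
from typing import Optional

# Minimal in-memory stand-in for a lab-management system; all names
# here are hypothetical, not taken from any specific product.

@dataclass
class Reservation:
    pool: "ResourcePool"
    vms: int

    def release(self) -> None:
        self.pool.free_vms += self.vms  # return capacity to the shared pool

class ResourcePool:
    def __init__(self, total_vms: int):
        self.free_vms = total_vms

    def try_reserve(self, vms: int) -> Optional[Reservation]:
        if self.free_vms >= vms:
            self.free_vms -= vms
            return Reservation(self, vms)
        return None  # pool is contended; caller must wait its turn

def on_build_complete(pool: ResourcePool, vms_needed: int,
                      timeout_s: float = 3600) -> None:
    """Reserve shared test resources first, then run the regression suite."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        reservation = pool.try_reserve(vms_needed)
        if reservation is not None:
            try:
                print(f"running regression suite on {vms_needed} reserved VMs")
            finally:
                reservation.release()
            return
        time.sleep(5)  # infrastructure is shared: poll, don't assume availability
    raise TimeoutError("no test resources became available within the window")

if __name__ == "__main__":
    on_build_complete(ResourcePool(total_vms=8), vms_needed=4)
```

The essential design choice is that the build completion event queues a reservation request rather than launching tests directly, so contention for shared infrastructure is handled explicitly instead of causing failures.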
Pools of virtual machines are sufficient for executing tests
DevOps tools are mainly focused on automating the process of developing, testing and running applications in homogeneous pools of virtual or physical servers. In these environments, the DevOps tool can depend on the virtualization environment, the PaaS tool, or the server pool's OS (such as CoreOS) to handle execution and deployment. Unfortunately, delegating scheduling (or the lack of it) to the virtualization or OS layer does not work in large production environments, and it certainly does not provide enough control for regression testing that needs to mimic real-world datacenter infrastructure. This can become a severe limitation of DevOps tools. We believe that a more complete DevOps toolset must incorporate workflow orchestration that gives developers control over how applications are placed onto the target virtual environment at each step of the DevOps process.
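One way to picture that orchestration requirement is a workflow step that carries its own placement decision instead of delegating it to the hypervisor or PaaS. The sketch below assumes hypothetical Placement and WorkflowStep structures; nothing here is a real tool's API.

```python
from dataclasses import dataclass
from typing import Callable, List

# Sketch of workflow steps that carry explicit placement decisions,
# rather than letting the virtualization layer decide where the
# application lands. All class and field names are illustrative.

@dataclass
class Placement:
    target: str     # e.g. "esx-cluster-qa" or "bare-metal-rack-2"
    instances: int
    network: str    # topology the step must run against

@dataclass
class WorkflowStep:
    name: str
    placement: Placement
    action: Callable[[Placement], None]

def run_workflow(steps: List[WorkflowStep]) -> None:
    for step in steps:
        # The orchestrator, not the hypervisor, decides placement per step.
        print(f"[{step.name}] deploying {step.placement.instances} instance(s) "
              f"to {step.placement.target} on network {step.placement.network}")
        step.action(step.placement)

if __name__ == "__main__":
    def deploy(p: Placement) -> None:
        print(f"  ...application started on {p.target}")

    run_workflow([
        WorkflowStep("unit-test", Placement("shared-vm-pool", 1, "flat"), deploy),
        WorkflowStep("regression", Placement("esx-cluster-qa", 4, "prod-mirror"), deploy),
    ])
```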
Testing environments mimic production environments
Global 2000 datacenters are not homogeneous. They are typically hybrid infrastructures in which applications are deployed across a combination of traditional, virtual and public cloud computing. In addition, production network topologies are typically complex and require configuration to support applications. Ignoring this complexity in development and testing results in applications that fail to transition to production successfully. Most continuous integration tools cannot control deployment into this more complex environment, which explains why most DevOps projects today stop at “continuous integration” and never get all the way to “continuous deployment” into production. DevOps tools need to be able to control not just the deployment of applications but also the configuration of the hybrid infrastructure on which those applications will run, so that test infrastructure configurations really do mimic production infrastructure.
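To make that concrete, here is a hedged sketch of a declarative environment blueprint: one description of compute tiers and network topology drives both production and a scaled-down but topologically identical test environment. The blueprint structure and all names are assumptions for illustration, not a real product's format.

```python
# Sketch of a declarative environment blueprint. The same description
# drives both test and production, so tests run against a configuration
# that actually mirrors production. All names are illustrative.

PROD_BLUEPRINT = {
    "compute": [
        {"role": "web",   "kind": "vm",           "count": 6, "image": "web-v42"},
        {"role": "db",    "kind": "bare_metal",   "count": 2, "image": "db-v42"},
        {"role": "batch", "kind": "public_cloud", "count": 4, "image": "batch-v42"},
    ],
    "networks": [
        {"name": "frontend", "vlan": 110, "connects": ["web"]},
        {"name": "backend",  "vlan": 120, "connects": ["web", "db", "batch"]},
    ],
}

def scale_for_test(blueprint: dict, factor: float) -> dict:
    """Derive a smaller but topologically identical test environment:
    instance counts shrink, but roles, kinds and networks are preserved."""
    test = {"compute": [], "networks": blueprint["networks"]}
    for tier in blueprint["compute"]:
        scaled = dict(tier)
        scaled["count"] = max(1, round(tier["count"] * factor))
        test["compute"].append(scaled)
    return test

if __name__ == "__main__":
    test_env = scale_for_test(PROD_BLUEPRINT, factor=0.5)
    for tier in test_env["compute"]:
        print(f"{tier['role']}: {tier['count']} x {tier['kind']}")
```

The point of the sketch is that hybrid kinds (VM, bare metal, public cloud) and the VLAN topology are first-class parts of the environment description, so they cannot be silently dropped between test and production.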
Tool chaining is sufficient to implement DevOps workflows
The prevalent approach to a continuous deployment process is to chain DevOps tools together end to end, from development to deployment. This creates problems because each tool must have a point-to-point integration with the tools on either side of it in the process flow. As an IT shop strings tools together, it will eventually find that the next tool in the chain has only one option (or none), because no candidate offers the specific integrations needed for all of the tools already selected. By the time everything is wired up, there is often still no complete solution.
More critically, though, simply chaining tools together ignores the three key problems described above. A true end-to-end solution should not just chain development, test, QA and deployment tools together, but should also incorporate the following (see the sketch after this list):
- Reservation and scheduling of resources prior to initiating the next step in the DevOps process;
- Workflow orchestration capability that controls placement of applications onto an appropriate mix of virtual or physical servers; and
- Automation capability for configuring the target infrastructure and network topology before running applications.
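Here is the promised sketch, tying the three capabilities into a single pipeline run. Every function is a trivial hypothetical stand-in for the corresponding tool integration, not a real product API.

```python
# Sketch of a pipeline that reserves, configures, then places, in that
# order. Each function is a hypothetical stand-in for a tool integration.

def reserve(requirements: dict) -> dict:
    print(f"reserved shared resources: {requirements}")
    return {"id": "res-001", **requirements}

def configure(reservation: dict, blueprint: str) -> str:
    print(f"configured {blueprint} topology on reservation {reservation['id']}")
    return blueprint

def deploy(artifact: str, environment: str, placement: str) -> None:
    print(f"placed {artifact} onto {placement} in {environment}")

def run_tests(environment: str) -> None:
    print(f"running regression suite against {environment}")

def release(reservation: dict) -> None:
    print(f"released {reservation['id']}")

def pipeline(artifact: str) -> None:
    reservation = reserve({"vms": 4, "bare_metal": 1})  # 1. reserve first
    try:
        env = configure(reservation, "prod-mirror")     # 2. configure infra/network
        deploy(artifact, env, "esx-cluster-qa")         # 3. controlled placement
        run_tests(env)
    finally:
        release(reservation)                            # free for the next team

if __name__ == "__main__":
    pipeline("myapp-build-1234")
```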
We believe we will see customers augment their existing DevOps tools with additional software that provides each of the above capabilities as they move into full continuous deployment.
About the Author/Joan Wrabetz
Joan Wrabetz is the CTO for QualiSystems. Most recently, she was the VP and CTO for the Emerging Product Division of EMC. Joan received her MBA from the University of California, Berkeley, her MSEE from Stanford, and her BSEE from Yale. She has held teaching positions at the University of St. Thomas and St. Mary’s University. Joan holds patents in load balancing, distributed systems, and machine learning classification and analytics. Her experience includes: founder and CEO of Aumni Data, CEO of Tricord Systems, Vice President and General Manager for SAN operations at StorageTek, founder and CEO of Aggregate Computing, management and senior technical positions at Control Data Corporation and SRI International, and partner with BlueStream Ventures.