IBM DevOps Evangelist Eric Minick will use his slots at IBM InterConnect 2015 to help you surmount continuous delivery and automated deployment DevOps challenges, three of which include: balancing a continuous delivery pipeline; deployments in hybrid cloud scenarios; and fitting in trickier deployments.
DevOps.com spoke with Eric about these challenges and the central themes that will bubble to the top as he speaks to each one. If you attend Eric’s sessions, see if you can connect the dots between these challenges and themes.
The CD Pain Point Domino Effect, Cloud Deployment Conundrums, and Deployment Complexities
The first challenge Eric will address is balancing the continuous delivery pipeline. When maturing that pipeline, do development, IT ops, and the business really fix continuous delivery pain points, or simply redistribute them? With every attempt to fix a weak link in the continuous delivery chain, whether that link is builds, tests, or deployments, each newly strengthened point in the pipeline exposes another that is now more vulnerable.
“If we have slow manual builds, for example, and we fix that with a continuous integration tool, I guarantee you that you will then have a problem where your deployment process is pained because you’ve given yourself a lot more stuff to deploy,” says Eric Minick, IBM DevOps Evangelist.
The second challenge is deployments in hybrid cloud scenarios. As much as DevOps benefits from the cloud, the cloud can also add complexity to deployments. In hybrid cloud scenarios, the organization must ask, if we’re updating infrastructure in our deployments, how does that change when we’re in multiple clouds?
“What if some of the stuff we want to put into a test environment is traditional apps that we would normally build in-house and some of it is platform-as-a-service bits that need to change?” asks Minick.
Finally, it can be difficult to deploy trickier elements, such as database changes, together with more straightforward ones, such as static content, even though the organization has tested and passed them together in the same pre-production staging environment.
There are two central themes that run through these discussions. The first is that organizations should deploy or promote components to production at the same level of granularity at which they test them. There are benefits to deploying these elements together; promoting them separately risks, among other detriments, broken software.
If the organization is testing a set of interconnected web applications with dependencies from one application to the next, and they all pass, then the enterprise should promote them all to production, because they were tested together and function well together.
“In the same way, if you’re testing a single sub-system and it passes, you should promote that into a production environment just like the one you tested it in,” says Minick.
If the development team deploys one of several components that work well together into an older production environment that it did not test that piece of software in, it will likely produce errors at the very least.
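The promote-together rule can be pictured as a simple gate: record which component versions passed testing as a set, and only allow that exact set into production. A minimal Python sketch, with hypothetical component names and functions that are not from Minick's talk:

```python
# Sketch: promote components only as the exact set that passed testing together.
# All names here are illustrative assumptions, not a real deployment API.

tested_sets = []  # each entry: a frozenset of (component, version) pairs that passed together

def record_passing_test(components: dict) -> None:
    """Record a set of component versions that passed integration testing together."""
    tested_sets.append(frozenset(components.items()))

def promote(components: dict) -> str:
    """Allow promotion only if this exact combination was tested together."""
    candidate = frozenset(components.items())
    if candidate in tested_sets:
        return "promoted"
    return "blocked: set was not tested together"

record_passing_test({"web-app": "2.4", "auth-service": "1.9"})
print(promote({"web-app": "2.4", "auth-service": "1.9"}))  # promoted
print(promote({"web-app": "2.4", "auth-service": "1.8"}))  # blocked: set was not tested together
```

The gate blocks the mixed deployment in the second call because that pairing never went through testing, which is exactly the untested-production scenario Minick warns against.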
The second theme is that organizations can decide what to automate, and what not to, based on a simple recognition: the strength of people is creative problem solving, while the strength of computers is that they follow directions well.
“So anything that requires fancy decision-making is probably best left to people to handle, whereas anything that is simply a list of instructions to repeat is probably something the organization should automate,” says Minick.
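That split can be illustrated with a deployment runbook: the ordered checklist of repeatable steps is a natural automation target, while an unexpected failure is handed to a person rather than guessed at. A hypothetical sketch, with made-up step names:

```python
# Sketch: computers repeat instructions well, so an ordered deployment
# checklist is automated; failures that need judgment escalate to a human.
# All step names and handlers are illustrative assumptions.

def stop_service():
    print("service stopped")

def copy_artifacts():
    print("artifacts copied")

def start_service():
    print("service started")

RUNBOOK = [
    ("stop service", stop_service),
    ("copy artifacts", copy_artifacts),
    ("start service", start_service),
]

def run_deployment(steps, escalate):
    """Execute each repeatable step; hand unexpected failures to a person."""
    for name, action in steps:
        try:
            action()
        except Exception as exc:
            escalate(name, exc)  # creative problem solving stays with people
            return False
    return True

ok = run_deployment(
    RUNBOOK,
    escalate=lambda name, exc: print(f"page on-call: {name} failed ({exc})"),
)
print("deployment succeeded" if ok else "deployment halted")
```

The machine handles the tedious repetition across as many environments as needed; the escalation hook is where the fancy decision-making re-enters.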
Each of these themes carries particular benefits for DevOps. The benefit of deploying and promoting components together, at the same level of granularity at which you test them, is that you avoid the risk of something breaking because production would otherwise be an untested environment for that component or sub-system. So if the organization tested a grouping of components and they worked well together, the enterprise should deploy them all together.
“In such cases, deploying separately results in errors and downtime and all sorts of ugliness and bad behavior. To avoid that, you would have to try to track the relationships between every code change in every sub-system and how these might impact each other,” says Minick, “which is a hugely expensive and very painful thing to do.” That’s why it makes sense to test things together that are going to production together.
The benefit of the organization separating what it should automate from what it shouldn’t is that anything the company can hand over to computers, they will do more quickly, more cheaply, and on more machines simultaneously. “Handing over tedious tasks to machines keeps the developer’s job more interesting too,” says Minick. And automation increases the enterprise’s capacity to deliver changes to the marketplace.