In the application economy, you have to deliver software as if your business depends on it … because it does! It’s a bold statement, but true. Think of nearly every interaction a person might have today—be it for work, commerce or play. Most have a digital dimension, which relies on digital technology and platforms.
In order to survive and grow, every company needs to become a technology company and every business needs to become a digital business. Successful transformation requires Continuous Delivery, the new business imperative that enables you to rapidly develop and deliver applications that drive superior user experiences and engage your customers and staff.
But the traditional “software factory” or process for transforming an idea into a customer experience is throttled by a number of bottlenecks in the delivery pipeline. That means delivering innovative, high-quality applications, faster and more frequently can be a chaotic and complex process, particularly if your application delivery systems and processes were designed to only push out one or two releases a year.
Join moderator Alan Shimel of DevOps.com and experts Georg von Sperling, DevOps and Technology Evangelist at CA Technologies, and Julie Craig, Research Director at Enterprise Management Associates (EMA), on July 21, 2015 to discuss ways to overcome today’s application economy challenges with solutions designed to transform your software delivery lifecycle into a continuous delivery pipeline of digital innovation.
Recorded: Tuesday, July 21, 2015
1. Based on your experience, how does continuous delivery compare with traditional development? Is the continuous delivery model perfect for every business?
GvS: As a goal, yes, absolutely. Continuous delivery practices enhance what is already being done and simply promote good behavior over the creation of waste. Removing bottlenecks to increase quality, frequency and velocity simply makes sense, regardless of the business you are in. Now, that said, continuous delivery is not necessarily perfect for every business if you define it as continuously delivering to production. But what development organization would shy away from getting early and amplified feedback in User Acceptance Test (UAT)? To accomplish this, you still require every one of the concepts in continuous delivery.
ANBU: If you think of continuous delivery as a means to achieve modularity and loose coupling in order to achieve higher workforce productivity, every business will be motivated to do it. As a concept, it is perfect for every business, but the objectives of the process, and the order of their importance, will vary from industry to industry. As long as you make these adjustments, you will benefit a lot.
2. One of our struggles is the impact of Continuous Delivery on our Environment Strategy and the mechanics used to maintain those environments. For example, applications dependent on other applications/components will require those other applications/components in the environment infrastructure in which testing is occurring. How do you see company Environment Strategies changing to support Continuous Delivery? When does it become necessary to change an application’s architecture to reduce or remove these dependencies? Or are there testing methodologies that can accurately test changes without the need for large QA environments?
GvS: The path of least resistance can be learned from the airplane manufacturing industry. When a new wing design is required to undergo quality validation, the dependent systems and datasets are so complex that manufacturers could never have released those designs if they had to put planes into the air with different passenger or luggage loads (data), in different weather conditions (environment), attached to a full airplane body (dependencies). Likely, this would result in the same adverse effects we experience when pushing a digital asset from our IDE to the first environment – often catastrophic. Virtualizing dependent services with actual behavior – just like in a wind tunnel – is a good approach. This also allows discovery of the dependencies in a consumable, shareable format which, should you find additional need to revisit the architecture, actually aids in the process of redesign.
ANBU: I would recommend a model-based approach in your situation. In your stated example, if you model ‘infra-as-code’ and achieve modularity at that level, so that developers and testers can invoke it during their plan/design/development process with its availability and use clearly defined, the problem can be solved. This requires an assessment of all the functional components (infrastructure included) in order to model them. There are tools out there that can help you achieve such modeling.
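To make the service-virtualization idea concrete, here is a minimal sketch (all endpoint paths and data are hypothetical) of a stub that mimics a dependent “inventory” service, so the application under test does not need the real backend in its QA environment:

```python
# Minimal service-virtualization sketch: a local HTTP stub that stands in
# for a dependent service, serving canned responses modeled on real behavior.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical canned behavior, recorded from (or modeled after) the real dependency.
CANNED_RESPONSES = {
    "/inventory/sku-123": {"sku": "sku-123", "in_stock": 42},
    "/inventory/sku-999": {"sku": "sku-999", "in_stock": 0},
}

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "unknown sku"}).encode())

    def log_message(self, *args):
        pass  # keep test output quiet

def start_virtual_service(port=0):
    """Start the stub on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), VirtualServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Tests can then point the application’s dependency URL at `start_virtual_service().server_port` instead of the shared QA environment, which is the “wind tunnel” idea from the answer above.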
3. How do you measure the success of DevOps other than faster & quality releases? are there any metrics that can be collected periodically?
GvS: In the context of Continuous Delivery, there are many metrics that can and should be captured. Many more in the context of DevOps. We could likely devote an entire session on this. Let me focus, however, on continuous delivery metrics. On top of focusing on business-oriented outcomes, much like in DevOps, some operational metrics could be captured. Examples are:
– Lead time to production. Measure cycle time from the moment work starts to the moment it meets the team’s definition of ‘done’.
– Defect resolution time. Really successful teams can fix defects as quickly as they are discovered. The more left-shifted the defect discovery is, the more quickly they are resolved.
– Production defect leakage. Measure both: new defects found in production, and defects that were previously found yet still made it to production.
– Defects. It is bad enough to ship buggy apps. Continuously doing so is even more terrible. Measure at each stage in the cycle, and notice a left-shift if CD is going well.
– Broken build time. The longer a build sits broken, the weaker the commitment to quality.
– Production downtime during deployment. If you achieve zero-downtime, you can stop measuring here.
– Regression test duration. Teams with manual practices will measure in months or weeks. CD teams will likely measure in minutes or hours. Also include lead-time to regression test.
– Performance test outcomes. If you find tons of functional bugs in performance testing, there is likely a bottleneck not addressed early in the cycle.
– Touches. Number of tickets, emails or other communication required to do a deployment in each stage. If it isn’t “zero-touch”, there is improvement potential.
This is not a complete list, but it is certainly one to start with.
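As a rough illustration of how two of these metrics could be computed from pipeline data (function names and timestamps are hypothetical, not from any specific tool):

```python
# Sketch: computing "lead time to production" and "production defect leakage"
# from hypothetical pipeline events.
from datetime import datetime

def lead_time_hours(work_started, marked_done):
    """Lead time: from work start to the team's definition of 'done', in hours."""
    fmt = "%Y-%m-%d %H:%M"
    start = datetime.strptime(work_started, fmt)
    done = datetime.strptime(marked_done, fmt)
    return (done - start).total_seconds() / 3600

def defect_leakage(found_pre_prod, found_in_prod):
    """Fraction of all defects that escaped to production (lower is better)."""
    total = found_pre_prod + found_in_prod
    return found_in_prod / total if total else 0.0

# Example: two days of cycle time, and 5 of 50 defects leaking to production.
print(lead_time_hours("2015-07-01 09:00", "2015-07-03 09:00"))  # 48.0 hours
print(defect_leakage(45, 5))  # 0.1, i.e. 10% leakage
```

Trending these numbers over time, rather than any single reading, is what shows whether the left-shift GvS describes is actually happening.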
ANBU: GVS’s response is complete.
4. How do you get legacy code/systems into the CD model? Or is CD only for new code/projects/systems?
GvS: The practices advocated with CD aren’t specific to the tenure of the project or the age of the technology. They are arguably all good practices and make perfect sense. Now, if you are an enterprise on the journey to adopt CD practices, the recommendation is certainly to begin with projects that have the highest business impact. A good heatmap analysis does go a long way to begin. If it turns out that it is your legacy code/system, so be it. Parallelize work streams, accelerate quality and automate & orchestrate the delivery pipe.
ANBU: Legacy code/systems are generally monolithic/tightly coupled. Hence the problem. Unfortunately, they are also part of the corporate string of applications, so we can’t stay away from them. We have seen examples where these systems stay as one ‘component/workstream’ and plug into the larger CD stream, creating a hybrid-model. Yes it is not perfect, but will help you implement CD. Additionally, the maintenance/re-engineering of these legacy systems are opportunities to change the code to adopt to the larger CD strategy that you put in place. If they are truly legacy, there won’t be major ‘patch’ work so they stay what they are.
5. How do you reconcile product management with CD, i.e., the definition of a story to encapsulate a deployable or atomic element that can be delivered? Oftentimes stories are part of a larger epic that needs to be delivered as a whole and not on their own, so deploying atomic units/stories creates the fear of delivering partial functionality. How does a business deal with this and overcome it?
GvS: If you’ll pardon me, this seems a misinterpretation of CD itself. Deployment and delivery are two, if not three, different things. I believe you may have latched onto the “live” deployment of an atomic change to production. Given the right strategy in modeling the pipeline itself, all CD elements are still valid; the atomic element may go “live” together with many other atomic elements. But, unlike in the past, the fear of failure in that deployment is minimized because quality has drastically improved, and the same already-tested processes that were validated in lower environments are used to set the atomic unit “live”. It is a journey.
ANBU: One of the benefits of CD is achieving readiness for production deployment of all (or select) atomic units at the push of a button. It does not force you (but gives you the choice) to deploy every atomic unit into production. If, as you said, “the larger epic as a whole or none” is important, then your definition of the ‘select’ atomic units should correlate to it. Configuration management and release management should have proper ‘gates’ to make sure they roll out properly.
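One common technique for deploying atomic units without exposing a partial epic (not named by either panelist, but widely used alongside CD) is a feature toggle: the code for every story ships dark, and the epic is revealed only when complete. A minimal sketch with hypothetical names:

```python
# Feature-toggle sketch (hypothetical flag and flow names): all stories in an
# epic can be deployed to production, but stay invisible behind the flag
# until the whole epic is ready.
FLAGS = {
    "new_checkout_epic": False,  # flipped to True only when the epic is complete
}

def is_enabled(flag):
    return FLAGS.get(flag, False)

def checkout(cart):
    if is_enabled("new_checkout_epic"):
        return "new checkout flow"  # deployed stories, hidden until released
    return "existing checkout flow"
```

This separates *deployment* (code in production) from *release* (users see it), which is exactly the distinction GvS draws above between deployment and delivery.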
6. My question on metrics is because in legacy orgs it takes a lot more time & energy to get to CD, so measurement through metrics is critical.
GvS: Absolutely, metrics and measurement go hand-in-hand. Looking at CALMS (Culture, Automation, Lean, Measurement and Sharing) for guidance, influencing the organizational culture through measurement and sharing, both of which are derived from automation, is the nirvana. Along with the will of the IT organization, there also has to be executive buy-in to the strategy. Yes, grass-roots is fine, but a lot more can get accomplished if the company stands behind the strategy. Having defined positive business-outcome metrics as part of that journey helps reinforce that position.
ANBU: Agree with GVS.
7. What is the Quality as a result of continuous delivery? What is the tradeoff with respect to the Business impact to defect leakages in production?
GvS: Quality metrics should show drastic improvement if CD tenets are implemented. There was a question on metrics already. This is not about continuously deploying bad code, which really just makes things worse, but about continuously delivering, where quality is certainly a determining factor.
ANBU: You need to baseline ‘quality’ before and after CD. There is absolutely no way these measures should trend negatively if you properly implement CD. Defect leakage in any model will have the same implications; it is just that in CD, the mean time to resolve (MTTR) these issues should be faster.
8. For a business to adopt a CD model, don’t all disciplines/functions have to change to suit the atomic delivery concepts?
GvS: Once nirvana is achieved, likely there was a change in all disciplines and functions. It is a journey that must begin. The imperative is there for the business. But, much discussed in the context of DevOps as well, CD is a journey of transformation – not something you do in a day.
ANBU: Yes. This is a life-cycle process, hence the impact is on everybody. But you can start with one unit/organization and spread it out to evolve it. If you have the luxury, start a new project/unit with CD as the foundation.
9. How do you see ITIL and DevOps interacting to maximize the benefits of continuous delivery?
GvS: There is a great amount of synergy between the two. And actually, how much easier would it be to have “Standard Changes” in the change management process, enrich a CMDB with actual data that isn’t stale, keep service models up to date and have focused problem management.
ANBU: A well-defined DevOps strategy can help you implement ITIL more effectively. Configuration management and release management are two particular areas where ITIL can provide the maximum benefit to the CD strategy.
10. Jez Humble and David Farley suggest not using feature branching, because it encourages back-channel development, and suggest using only the master branch to do CD; but other companies in this space say feature branching is ideal because git-flow encourages it. Is there a preferred branching strategy for CD? Is there a need to better define what is meant by ‘continuous delivery’? I am concerned with people stating that CD means code gets in untested (which, from my understanding, is an anti-definition), when fully tested code is a part of CD.
GvS: I believe Jez’s argument, and I agree, is that it is essential to have a deployment pipeline triggered off trunk to ensure all necessary validations, including functional testing. A deployment pipeline is a pattern that organizations adopting CD use. The key is that the validations should be performed against a build off mainline. I also advocate a number of good approaches that help in the validation effort, such as automating the generation of functional tests and having the right data sets available for relevant validations.
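The trunk-triggered pipeline pattern can be sketched as an ordered series of gates, where a failure at any stage stops promotion. This is a hedged illustration only; stage names and gate functions are hypothetical, and real gates would invoke build and test tooling:

```python
# Deployment-pipeline sketch: every commit to mainline runs the same ordered
# gates; a failed gate halts promotion so untested code never moves forward.
def run_pipeline(commit, stages):
    results = []
    for name, gate in stages:
        passed = gate(commit)
        results.append((name, passed))
        if not passed:
            break  # do not promote past a failed gate
    return results

# Hypothetical gates; in practice each would shell out to build/test tools.
stages = [
    ("build", lambda c: True),
    ("unit-tests", lambda c: True),
    ("functional-tests", lambda c: c != "bad-commit"),
    ("deploy-to-staging", lambda c: True),
]

print(run_pipeline("abc123", stages))      # all four gates pass
print(run_pipeline("bad-commit", stages))  # stops at the failed functional-tests gate
```

Because every commit flows through the identical gates, the question’s concern about “untested code getting in” is addressed structurally rather than by policy.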
ANBU: No comments.
11. Does continuous delivery mean continuous deployment to production? For Horses, the frequency of deployment is not very high, but they would still need continuous delivery to achieve the benefits of test automation, Agile methodology, etc.
GvS: I would argue no. Continuous delivery does not equate to continuous deployment to production. If I may be so bold, I would state that continuous delivery is a practice that allows deployment to production at any time, with a high degree of quality and a low amount of risk, with zero downtime and zero touch. A business leader recently said to me that whereas, in the past, “deployment to production was an IT decision, it is now a business decision at the push of a button”. CD certainly played a part in this paradigm shift, as the entire delivery process, including testing, is visible and predictable.
ANBU: Continuous production deployment is not the ultimate goal of CD. The goal is to let you manage multiple workstreams in parallel as atomic units, accelerate quality through a high degree of automation and provide an ability to learn and continuously improve the processes. By these means, you make your changes ready to be deployed faster than you normally would. You should aim to achieve frictionless automation across the whole life-cycle, not just in QA activities.
Alan Shimel, Editor-in-Chief, DevOps.com. An often-cited personality in the security and technology community and a sought-after speaker at industry and government events, Alan has helped build several successful technology companies by combining a strong business background with a deep knowledge of technology.
About the Panelists
Julie Craig, Research Director at Enterprise Management Associates (EMA)
At EMA, Julie’s focus areas are Application Management, public and hybrid Cloud, Integration Technologies, DevOps/Continuous Delivery, and Application Performance (APM). Julie has over 20 years of deep and broad experience in software engineering, IT infrastructure and integration engineering, and enterprise management. Her experience in commercial software companies included development of communications interfaces and management of programming teams.
Georg von Sperling, DevOps and Technology Evangelist at CA Technologies
Helping organizations adopt Continuous Delivery without creating more technical debt, Georg is a practitioner, an evangelist and a general geek on software development methodologies, technology, patterns, systems and delivery practices. He advises customers and helps them visualize, plan and implement transformational approaches to delivering digital assets from idea to production, for medium to large enterprise organizations.
Anbu G. Muppidathi, Vice President at Cognizant Technology Solutions
Anbu has over 25 years of experience in the IT industry and has been with Cognizant for the past 20 years. Anbu’s customer management responsibilities include advising the C-suite on breakthrough thinking, competitiveness, strategy, operations and transformation. His business management responsibilities include strategizing on the QE&A business unit’s growth, and managing customer relationships, delivery and business operations for customer engagements in the Americas. Prior to QE&A, Anbu was with Cognizant’s Banking & Financial Services vertical for more than a decade, managing a large portfolio of North American customers.