Adopting and implementing DevOps isn’t an overnight project. It takes time and effort to determine how it best fits into your organization and what benefits it will realize as a result.
Creating an enterprise-class DevOps service is typically done in one of three ways, depending on the organization, its leadership and its employees. Each has its pros and cons, and each has a number of factors to consider.
A Fully Centralized Model
In this model, tooling decisions are made by a central service provider group for the entire organization; tools are managed and standards are set centrally. DevOps automation (at least the templates or frameworks) is also created centrally. This model is highly advantageous for organizations with a wide variety of technology platforms (think a large population of legacy apps) that use many technology components in their applications, such as multiple types of databases, middleware and operating platforms.
Not many large organizations are ready to scrap their existing application portfolios and start over with cloud and containers, developing in a single methodology. Using a centralized model, companies can continue to improve upon their years of investment in legacy applications and adopt a DevOps continuum without starting over from scratch.
This is also the lowest-cost model, as tooling is standardized and administered from a central point and licensing can be negotiated to maximum effect. In addition, labor (generally the highest ongoing cost of any service creation and implementation) is minimized by centralizing it. Training costs also can be minimized by establishing a centralized training program for all impacted audiences. Companies can change their incentive programs to achieve hiring objectives and business results as priorities change, something that isn’t practical in a decentralized or federated model.
A Fully Decentralized Model
In a fully decentralized model, tooling decisions are determined and managed by each development team. This has advantages for shops with small numbers of applications or a single method of development: smaller portfolios are better candidates for starting over with cloud or container integrations and can take advantage of the newest technologies from the start, if needed.
Shops such as Etsy or Facebook seem massive, but from a user point of view, they really only present a single application that needs to be maintained and improved over time. Instances like this may favor a decentralized model because the actual number of application candidates may be highly limited—sometimes only one.
This model typically carries the highest costs, as repeatability and standardization can only be achieved at the development-team level. As such, it is practical for a very limited number of applications or technologies; if those numbers grow, the strategy quickly becomes untenable. In addition, this strategy has an uncanny history of pulling development resources into creating and maintaining the DevOps automation engineering. Typically, the highest-skilled developers are assigned these duties, and thinking in terms of templates and frameworks is rarely their first approach. Developers tend to be expedient and believe they will always be there to make sure it works. This, too, can reduce the potential for repeatability and predictability (raising costs).
A Federated Model
In a federated model, tooling decisions and standards are made centrally, but management and operation of the tools are performed by a limited number of subordinate organizations (often by line of business). Centralized templates may or may not be viable, depending on the variability of the business units they serve. The chief advantage of this approach is the segregation of tooling and reporting by business unit (typically) without having to build that kind of segregation into tools that were not designed with that capability in mind.
This model also has the advantage of increasing the speed of adoption: several parallel efforts can take place at the same time, spreading engineering resource demand across multiple organizations. It typically makes the most sense in larger organizations with multiple business units whose application portfolios would benefit from onboarding to DevOps.
What’s more, a federated model may be the most cost-efficient. Labor costs will be inherently higher as functions are replicated from business unit to business unit; however, that replication results in increased adoption rates and project completions (you get what you pay for). If business units are able to leverage centralized templates for each technology type, adoption can become an order of magnitude faster (or has that potential). Tooling costs still can be negotiated centrally, even though management will occur in the business units. Again, there is hardware replication from business unit to business unit, but the ability to segregate users, permissions and operations down to the business-unit level can save tremendous labor costs when custom development would be the only other way to create that capability in tools not designed for it.
Adoption Costs
Measuring the cost of adoption requires focus, no matter which method you employ to create the DevOps service. Applications moving to DevOps are said to be onboarded (the point where the original build, deploy, test and release automation is created). Successfully onboarding an application requires educating the sponsoring executives, training the impacted developers and testers, and coordinating with the change and release managers.
Collecting metadata about the behavior of application components is no small task, requiring frequent follow-ups and action items to reach completion. The effort to onboard applications thus becomes the first cost to estimate and track as the application portfolio is tackled.
Even after onboarding of a given application is complete, costs continue. The terms “continuous delivery” and “continuous integration” should also imply that application development methods “continuously” evolve and may adopt new technology components. Significant changes may occur that require revisiting the DevOps automation to adapt to these ongoing revisions and updates.
While this work is rarely as extensive as the initial onboarding, it is measurable. Work of this nature is usually referred to as BAU (business as usual) and represents the second significant cost of DevOps services that must be measured and tracked. If no thought is given to this work, the team assigned to onboarding will inevitably be pulled into BAU requirements until no new applications can be completed.
Staffing cost estimations can be presented at the component level for both onboarding and BAU efforts. For example, if one DevOps engineer can onboard 20 components per week (assuming familiar technologies within a given portfolio), and if the general ratio of components per application is 4:1, then a given DevOps engineer can successfully onboard five applications per week. If the engineer is never distracted by BAU, this can continue until the entire portfolio is onboarded. However, my experience has shown that for every 20 applications onboarded, the BAU work effort can absorb nearly a full DevOps engineer. So if you start with a team of 10 people, you will onboard only 200 applications in total before the entire team is required to focus on BAU work and has no more bandwidth to onboard new apps. Understanding this demand, and learning how efficient your particular engineering team is, will be key to managing costs while onboarding large application portfolios.
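To make that arithmetic concrete, here is a minimal Python sketch of the capacity model. The constants are the illustrative figures from the example above (not universal benchmarks), and the simulation itself is an assumption about how BAU absorbs engineers, not a prescribed tool:

```python
# A minimal sketch of the staffing model above. Assumed figures from the
# example: one engineer onboards 20 components per week, applications
# average four components, and every 20 onboarded applications absorb
# roughly one engineer in BAU work.

TEAM_SIZE = 10                # DevOps engineers at the start
COMPONENTS_PER_ENG_WEEK = 20  # onboarding throughput per engineer
COMPONENTS_PER_APP = 4        # average components per application
APPS_PER_BAU_ENGINEER = 20    # onboarded apps that tie up one engineer in BAU

apps_per_eng_week = COMPONENTS_PER_ENG_WEEK // COMPONENTS_PER_APP  # = 5

onboarded = 0
week = 0
while True:
    # Engineers progressively absorbed by BAU as the onboarded count grows
    bau_engineers = onboarded // APPS_PER_BAU_ENGINEER
    free_engineers = TEAM_SIZE - bau_engineers
    if free_engineers <= 0:
        break  # the whole team is now consumed by BAU work
    week += 1
    onboarded += free_engineers * apps_per_eng_week

print(f"Onboarding stalls after week {week}: "
      f"{onboarded} applications onboarded, all {TEAM_SIZE} engineers on BAU.")
```

Run as written, the sketch stalls at 200 applications, the same ceiling the paragraph above arrives at. The useful exercise is substituting your own team’s measured throughput and BAU absorption rate to see where your onboarding effort will plateau.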