Measuring the Cost of Service Creation & Adoption

By Kristian Nelson on March 8, 2016

Adopting and implementing DevOps isn’t an overnight project. It takes time and effort to determine how it best fits into your organization—and what benefits your organization will realize as a result.

Creating an enterprise-class DevOps service is typically done in one of three ways, depending on the organization, its leadership and its employees. Each has its pros and cons, and each has a number of factors to consider.

A Fully Centralized Model

In this model, tooling decisions are made by a central service provider group for the entire organization. Tools are managed centrally and standards are created centrally. DevOps automation (at least the templates or frameworks) is also created centrally. This model is highly advantageous for organizations with a wide variety of technology platforms (think of a large population of legacy apps) whose applications use many technology components, such as multiple types of databases, middleware and operating platforms.

Not many large organizations are ready to scrap their existing application portfolios and start over with cloud and containers, developing in a single methodology. Using a centralized model, companies can continue to improve upon their years of investment in legacy applications and adopt a DevOps continuum without starting over.

This also is the lowest-cost model, as tooling is standardized and administered from a central point and licensing can be negotiated to maximum effect. In addition, labor (generally the highest ongoing cost of any service creation and implementation) is minimized by centralizing it. Training costs also can be minimized by establishing a centralized training program for all impacted audiences. Companies can change their incentive programs to achieve hiring objectives and business results as priorities change—something that isn’t practical in a decentralized or federated model.
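
To make the idea of centrally owned templates concrete, here is a minimal sketch; it is an illustration, not the author's implementation, and every name, parameter and pipeline step in it is an assumption. The point is the ownership boundary: a central platform group maintains the pipeline logic once, and each application team supplies only a declarative description of its app.

    from dataclasses import dataclass

    @dataclass
    class AppConfig:
        """What an application team supplies; everything else is central."""
        name: str
        repo_url: str
        database: str     # e.g. "oracle" or "postgres"
        middleware: str   # e.g. "websphere" or "tomcat"

    def run_pipeline(app: AppConfig) -> None:
        """Centrally maintained build/deploy/test/release sequence.
        Supporting a new database or middleware type is a change made
        once, here, rather than in every team's scripts."""
        print(f"[{app.name}] checkout {app.repo_url}")
        print(f"[{app.name}] build and unit test")
        print(f"[{app.name}] provision {app.database} schema")
        print(f"[{app.name}] deploy to {app.middleware}")
        print(f"[{app.name}] run release checks")

    run_pipeline(AppConfig("claims-portal", "git://example.com/claims", "oracle", "websphere"))

With this split, a central group can support the wide technology variety described above while licensing, standards and training all stay in one place.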

A Fully Decentralized Model

In a fully decentralized model, tooling decisions are determined and managed by each development team. This is advantageous for shops with small numbers of applications or a single method of development: smaller portfolios are better candidates for starting over with cloud or container integrations and can take advantage of the newest technologies from the outset, if needed.

Shops such as Etsy or Facebook seem massive, but from a user point of view, they really only present a single application that needs to be maintained and improved over time. Instances like this may favor a decentralized model because the actual number of application candidates may be highly limited—sometimes only one.

This model typically carries the highest costs, as repeatability and standardization can only be measured at the development team level. As such, it is practical for a very limited number of applications or technologies; if those numbers grow, the strategy quickly becomes untenable. In addition, this strategy has an uncanny history of pulling development resources into creating and maintaining the DevOps automation. Typically, the highest-skilled developers are assigned these duties, and thinking in terms of templates and frameworks is rarely their first approach. Developers tend to be expedient and to assume they will always be there to make sure it works. This, too, can reduce repeatability and predictability (raising costs).

A Federated Model

In a federated model, tooling decisions and standards are made centrally, but management and operation of the tools are performed by a limited number of subordinate organizations (often by line of business). Centralized templates may or may not be viable, depending on the variability of the business units they serve. The chief advantage of this approach is the segregation of tooling and reporting, typically by business unit, without having to build that kind of segregation into tools that were not designed with that capability in mind.

This model also has the advantage of increasing the speed of adoption, as several parallel efforts can take place at the same time, spreading engineering resource demand across multiple organizations. It typically makes the most sense in larger organizations with multiple business units whose application portfolios would benefit from onboarding to DevOps. A sketch of the resulting topology follows below.

What’s more, a federated model may be the most cost-efficient overall. Labor costs will be inherently higher as functions are replicated from business unit to business unit; however, that replication results in increased adoption rates and project completions (you get what you pay for). If business units are able to leverage centralized templates for each technology type, adoption has the potential to become an order of magnitude faster. Tooling costs can still be negotiated centrally, even though management occurs in the business units. Again, there is hardware replication from business unit to business unit, but the ability to segregate users, permissions and operations down to the business-unit level can save tremendous labor costs when custom development would be the only other way to create that capability in tools not designed for it.
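
As a rough illustration of that topology (a hypothetical sketch; the tool choice, URLs and business-unit names are all assumptions): standards and licensing are defined once, centrally, while each business unit runs and administers its own instance, so segregation of users, permissions and reporting falls out of the deployment layout rather than out of features the tool may lack.

    # Hypothetical federated layout: one central standard, one tool
    # instance per business unit. All names and URLs are illustrative.
    CENTRAL_STANDARD = {
        "ci_tool": "jenkins",   # license negotiated centrally
        "pipeline_templates": "git://example.com/central-templates",
    }

    BUSINESS_UNITS = ["retail", "insurance", "wealth"]

    def provision_unit(unit: str) -> dict:
        """Each unit gets a dedicated instance built from the central
        standard, so segregation exists by construction."""
        return {
            **CENTRAL_STANDARD,
            "instance_url": f"https://{unit}.ci.example.com",
            "administered_by": f"{unit}-devops-team",
        }

    for unit in BUSINESS_UNITS:
        print(provision_unit(unit))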

Adoption Costs

Measuring the cost of adoption requires focus, no matter which method you employ to create the DevOps service. Applications moving to DevOps are said to be onboarded: this is where the original build, deploy, test and release automation is created. Successfully onboarding an application requires educating the sponsoring executives, training the impacted developers and testers, and coordinating with the change and release managers.

Collecting metadata about the behavior of application components is no small task, requiring frequent follow-ups and action items to reach completion. The effort to onboard applications to DevOps becomes the first cost to estimate and track as the application portfolio is tackled.

Even after onboarding of a given application is complete, costs continue. The terms “continuous delivery” and “continuous integration” should also imply that application development methods “continuously” evolve and may adopt new technology components. Significant changes may occur that require revisiting the DevOps automation to adapt to these ongoing revisions and updates.

While this work is rarely as extensive as the initial onboarding, it is measurable. Work of this nature is usually referred to as BAU (business as usual) and represents the second significant cost of DevOps services that must be measured and tracked. If no thought is given to this work effort, inevitably the team assigned to do onboarding will be distracted by the BAU requirements until they reach the point where no new applications can be completed.

Staffing cost estimations can be presented at the component level for both onboarding and BAU efforts. For example, if one DevOps engineer can onboard 20 components per week (assuming familiar technologies within a given portfolio), and if the general ratio of components per application is 4:1, then a given DevOps engineer can successfully onboard five applications per week. If the engineer is never distracted with BAU, this can continue until the entire portfolio is onboarded. However, my experience has shown that for every 20 applications completed with onboarding, the BAU work effort can absorb nearly a full DevOps engineer. So if you start with a team of 10 people, you will onboard only 200 applications total before the entire team is required to focus on BAU work effort and has no more bandwidth to continue onboarding new apps. Understanding this demand, and learning how efficient your particular engineering team is, will be key to managing costs while completing the onboarding efforts of large application portfolios.
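
That arithmetic is easy to turn into a small capacity model. The sketch below uses only the rates quoted above (20 components per engineer per week, a 4:1 component-to-application ratio, roughly one engineer absorbed by BAU per 20 onboarded applications); the weekly-step simulation and the names are my own simplifications.

    COMPONENTS_PER_ENGINEER_PER_WEEK = 20  # rate quoted above
    COMPONENTS_PER_APP = 4                 # 4:1 component-to-application ratio
    APPS_PER_BAU_ENGINEER = 20             # ~1 engineer absorbed per 20 onboarded apps

    def onboarding_ceiling(team_size: int) -> tuple[int, int]:
        """Step week by week until BAU work absorbs the whole team;
        return (total applications onboarded, weeks elapsed)."""
        apps_per_engineer = COMPONENTS_PER_ENGINEER_PER_WEEK // COMPONENTS_PER_APP  # 5/week
        onboarded, weeks = 0, 0
        while team_size - onboarded // APPS_PER_BAU_ENGINEER > 0:
            free_engineers = team_size - onboarded // APPS_PER_BAU_ENGINEER
            onboarded += free_engineers * apps_per_engineer
            weeks += 1
        return onboarded, weeks

    print(onboarding_ceiling(10))  # (200, 11): the 200-application ceiling above

For a 10-person team the model reproduces the 200-application ceiling; the useful exercise is rerunning it with the onboarding and BAU rates your own engineering team actually measures.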

Filed Under: Enterprise DevOps Tagged With: adopting DevOps, cost of devops, devops approaches, devops-as-a-service, ensuring success, scalability
