The Transience or Permanence of Provisioning

By Kristian Nelson on April 19, 2016

I realize the title of this article may conjure up ideas about the impacts of entanglement from quantum computing, but no, we are not there yet. Instead, I am referring to the benefits an organization can achieve by determining which environments in its software development life cycle (SDLC) should remain permanent and which are better candidates to be transient, so that it can understand the discrete costs of innovation (ideally with virtualized infrastructure).


First, to keep an even playing field, make sure your DevOps services use the “build once, deploy many” philosophy. Under this philosophy, a single build is constructed and deployed identically into each class of SDLC environment (not a unique build for each class). Uniformity of the artifact under test eliminates variability in results, and uniformity of the deployment process builds consistency and predictability, which also reduces errors and the time it takes to resolve them.
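As a minimal illustration of the philosophy (not any particular toolchain), the sketch below builds one artifact, identifies it by an immutable digest, and deploys that same digest into every environment class; the environment names and the deploy() helper are assumptions made for the example.

```python
import hashlib

# Environment classes discussed in this article, in promotion order.
ENV_CLASSES = ["DEV", "INT", "PT", "UAT", "PREPROD", "PROD"]

def build_once(source: bytes) -> str:
    """Produce a single artifact; its digest is its identity everywhere."""
    return hashlib.sha256(source).hexdigest()

def deploy(digest: str, env: str) -> None:
    """Deploy the *same* artifact by digest; only environment config differs."""
    print(f"deploying sha256:{digest[:12]} to {env}")

digest = build_once(b"compiled application bytes")  # built exactly once
for env in ENV_CLASSES:                             # deployed many times, unchanged
    deploy(digest, env)
```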

Second, the actual computing capacity you deliver any environment on may be virtualized; whether you provide your own bare-metal assets (a private cloud foundation) or use a public cloud entirely is less the issue. Understanding whether a class of environment is transient or permanent only affects how you plan for the costs and policies of operating it, and that, in turn, leads back to the discrete cost of innovation.

The Cost of Creation

When this topic is considered, the first and most immediate point of view is usually R&D, or the development (DEV) class environment. And yes, the ability to provision a DEV class environment and then tear it down when the effort is complete is the usual use case for justifying virtual infrastructure at all (public, private or hybrid cloud). The obvious benefits include having no “throttle” on how many concurrent efforts might be spun up.

Transient DEV environments also build in an allowance for failure, or for elongated “playing” that is not intended to produce a specific result but to dabble with innovation and see what might be possible. All of this comes without the capital investment it would take to build and maintain an R&D lab, or to set aside fixed computing capacity for it. DEV efforts with intent (i.e., part of a formal project) can have their costs tracked discretely against that planned innovation. DEV efforts without intent (i.e., more like what used to occur in R&D labs) can be associated with the overhead costs of all innovation.
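A hedged sketch of that costing idea: each transient DEV environment carries a cost tag at provisioning time, so spend lands either on a formal project code (effort with intent) or in a shared innovation-overhead pool (effort without intent). The provision()/teardown() helpers and the cost codes are hypothetical, standing in for whatever cloud API and chargeback scheme your organization actually uses.

```python
import uuid
from dataclasses import dataclass

@dataclass
class DevEnvironment:
    env_id: str
    cost_code: str   # a project code, or "R&D-OVERHEAD" for unplanned dabbling

def provision(cost_code: str = "R&D-OVERHEAD") -> DevEnvironment:
    env = DevEnvironment(env_id=str(uuid.uuid4()), cost_code=cost_code)
    print(f"provisioned DEV {env.env_id} charged to {env.cost_code}")
    return env

def teardown(env: DevEnvironment) -> None:
    print(f"tore down DEV {env.env_id}; billing against {env.cost_code} stops")

planned = provision(cost_code="PROJ-1234")  # effort with intent
sandbox = provision()                       # pure experimentation
teardown(planned)
teardown(sandbox)
```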

The Cost of Evaluation

But beyond DEV, the next major class of environment to consider is the integration (INT) class. If your company has more than one application, has apps that work together or has applications with upstream or downstream dependencies, you likely need an INT class environment. The major responsibility of this class is to provide “something to point to”: your app (undergoing change) has an environment where it can point to all the other applications to test how the interconnections work. Logically, this class of environment is a better candidate for permanence, because you can rarely predict which apps may need this service at which time across the entire portfolio. Many apps undergoing change may be using it at once to ensure they all work with each other (i.e., assessing their interoperability with future versions early, prior to production).

Another reason for maintaining a permanent INT class environment may involve your role as a vendor to other companies that “point” their apps at yours as a service provider. Even organizations that have little interaction between their own apps may well be a significant provider to a wide swath of customer applications. If your customers need a place to point to, the INT class environment is typically the one you want, so in this instance permanence is the logical choice.
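Because the INT class is permanent, its addresses can be published as stable, well-known endpoints that both internal apps under change and external customers can point to at any time. The tiny registry below sketches that idea; the service names and hostnames are invented for illustration.

```python
# Stable, published endpoints for the permanent INT class (assumed hostnames).
INT_ENDPOINTS = {
    "billing":   "https://billing.int.example.com",
    "inventory": "https://inventory.int.example.com",
    "identity":  "https://identity.int.example.com",
}

def resolve(service: str, env_class: str = "INT") -> str:
    """Return the stable integration endpoint for a dependency."""
    if env_class == "INT":
        return INT_ENDPOINTS[service]
    raise ValueError(f"no registry published for {env_class}")

print(resolve("billing"))  # an app under change wires itself to this address
```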

Next up is the performance test (PT) class of environments. Emulating production, or being able to project the differences between the platform you test on and what production provides, is key to success. However, this class of environment is a perfect candidate for transience (especially if your development and production environments are already virtual). Why maintain expensive hardware on the floor only to conduct periodic testing that requires massive reconfiguration and preplanning? Dump it. Instead, use a virtualized platform where provisioning is near instant and can exactly mimic both development and production – remember: build once, deploy many.
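A sketch of the transient PT lifecycle under those assumptions: clone an environment from the production template (the same artifact, per build once, deploy many), run the load test, then tear everything down so cost accrual stops. clone_from_template() and run_load_test() are illustrative placeholders, not a real provisioning API.

```python
import time

def clone_from_template(template: str) -> str:
    """Provision a PT environment that exactly mirrors the given template."""
    print(f"provisioning PT environment from template '{template}'")
    return f"pt-{int(time.time())}"

def run_load_test(env_id: str, target_rps: int) -> bool:
    """Drive load against the environment; pass/fail comes from measurements."""
    print(f"driving {target_rps} req/s against {env_id}")
    return True

env = clone_from_template("production")   # same build, production topology
try:
    passed = run_load_test(env, target_rps=5000)
finally:
    teardown_ok = print(f"tearing down {env}; PT exists only while testing")
```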

Another significant class of environment in the SDLC to examine is User Acceptance Testing (UAT). Logically, this class may get a lot of usage, but ideally only after the app has passed testing in development, INT and PT. Because of this, UAT should be transient as well – a good candidate to tear down when not in use. You may wish to plan for setup and teardown of this class to mimic what happens in formal development efforts.

Finally, the quality assurance (QA) or preproduction (PreProd) class environments, if in use, are designed to emulate the current state of production completely, allowing the app-under-change to run against a more sophisticated type of INT class environment (possibly with copied production data, etc.). This matters most when the earlier INT class environment contains a great number of other applications under change; QA provides a “cleaner” place to point to, containing only apps as they operate in production. The QA environment may also be used after the fact, when an error is discovered in production, for debugging or more extensive monitoring. Obviously, if your organization finds a need for this class of environment, it will carry the permanence characteristic.

The Cost of Operations

Production is by nature permanent. Virtualizing production offers the ability to scale on demand, but a scalable production solution benefits from more mature DevOps services (whether you own the data center or not). Consider for a moment the number of errors attributed to code after an organization embraces DevOps: the number of errors goes down, the time to resolve them goes down, and the size of each effort gets smaller, yielding less risk and higher speed. This combination means production is going to suffer far less from aberrant code running amok (asking for tons of memory, CPU or disk due to error). Those kinds of problems get detected earlier in the SDLC process (because of DevOps) and hardly ever reach production once DevOps is engaged.

With mature DevOps in place, any kind of scalable capacity-on-demand solution will be engaged for only two reasons: a dramatic increase in customer demand (the ideal scenario) or a denial-of-service attack attempting to drive your costs higher without real customer use. The monitoring solution you employ will help decipher the difference. If your application has logging features built in (i.e., you produce a log detailing which customer is performing which key feature in your app, and when), you should be able to detect and head off the latter scenario with an automated monitoring effort.
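The sketch below shows one way such an automated check might work, assuming your logs record which customer exercised which feature: a traffic spike largely attributable to authenticated customers is real demand, while one that is not is a candidate denial of service. The log field names and the 80% threshold are assumptions to tune against your own baseline.

```python
def classify_spike(request_log: list[dict]) -> str:
    """Each record: {'customer': str or None, 'feature': str or None}."""
    total = len(request_log)
    # Count requests attributable to a known customer using a real feature.
    attributed = sum(1 for r in request_log if r["customer"] and r["feature"])
    ratio = attributed / total if total else 0.0
    # The 0.8 threshold is an assumption; tune it against your own traffic.
    return "customer-demand" if ratio > 0.8 else "suspect-dos"

spike = ([{"customer": "acme", "feature": "checkout"}] * 90
         + [{"customer": None, "feature": None}] * 10)
print(classify_spike(spike))  # -> customer-demand
```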

New Abilities to Measure the Cost of Innovation

Understanding the cost delta between transient and permanent environments is worth examining. It will help you better understand the cost of innovation on an annualized basis, and it may also reveal how much real effort goes into each kind of testing. If user adoption is suffering, for example, and you determine that UAT class environments are largely unused or quickly torn down, you know to address the problem earlier in the cycle. The same can be said for PT: how much testing was done, and for what kind of scaled performance in production.

If your organization maintains INT or QA class environments, you may want to allocate their costs to SDLC overhead, as they tend to be permanent and their value is shared. The development, PT and UAT environments, if transient, represent the variable cost of a particular innovation effort and can be tracked discretely. When considering the timing, value and benefits of virtualization, you may wish to factor in where you are on the DevOps continuum: if you have had DevOps running for a while, the timing for virtualization and scalable capacity on demand may be better than if you are just starting out.
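A minimal sketch of that allocation rule, with invented usage records and costs: spend from permanent classes (INT, QA) rolls into shared SDLC overhead, while spend from transient classes (DEV, PT, UAT) is charged to the specific effort that provisioned them.

```python
PERMANENT = {"INT", "QA"}  # permanent classes: shared overhead

usage = [  # (env_class, project_code, cost_usd) - illustrative records
    ("DEV", "PROJ-1234", 1200.0),
    ("PT",  "PROJ-1234",  800.0),
    ("UAT", "PROJ-5678",  400.0),
    ("INT", None,        9000.0),
    ("QA",  None,        6000.0),
]

overhead = 0.0
by_project: dict[str, float] = {}
for env_class, project, cost in usage:
    if env_class in PERMANENT or project is None:
        overhead += cost                    # shared cost of all innovation
    else:                                   # variable cost of one effort
        by_project[project] = by_project.get(project, 0.0) + cost

print(f"SDLC overhead: ${overhead:,.0f}")
print("per-project innovation cost:", by_project)
```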

Keep in mind that most organizations that have transitioned away from waterfall and into agile lose a good deal of the financial controls that worked in waterfall but do not work as well in agile. DevOps delivery is also capable of moving change at orders of magnitude higher speed than former manual methods. This combination makes trying to cost innovation discretely more difficult. Being able to associate transient environment costs with formal development efforts (ones with intent) may restore some of that financial control. It may also help identify the true overhead costs of innovation located in the operation of environments better classed as permanent.

To continue the conversation, feel free to contact me.

Filed Under: Enterprise DevOps Tagged With: devops maturity model, enterprise costing, enterprise devops, environment costing, transient environments, virtualized infrastructure
