The Transience or Permanence of Provisioning

By: Kristian Nelson on April 19, 2016

I realize the title of this article may conjure up ideas about the impacts of entanglement from quantum computing, but no, we are not there yet. Instead, I am referring to the benefits an organization can achieve from determining which environments in its software development life cycle (SDLC) should remain permanent and which are better candidates to be transient, so that it can understand the discrete costs of innovation (ideally with virtualized environments).

First, to keep an even playing field, make sure your DevOps services use the “build once, deploy many” philosophy. Under this thinking, a single build is constructed and deployed identically into each class of SDLC environment (not a unique build for each class). Uniformity under test eliminates variability of results. Uniformity in the basic process of deployment builds consistency and predictability, and it also reduces errors and the time it takes to resolve them.
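
As a minimal sketch of that philosophy (in Python, with build_artifact and deploy as hypothetical stand-ins for whatever your CI/CD tooling actually does), the point is that one checksummed artifact is promoted through every environment class rather than being rebuilt per class:

```python
import hashlib

# Hypothetical stand-ins for real CI/CD steps; names are illustrative only.
def build_artifact(source_dir: str) -> bytes:
    """Build exactly once; returns the artifact bytes (here, a stub)."""
    return f"compiled:{source_dir}".encode()

def deploy(artifact: bytes, env_class: str) -> None:
    """Deploy the same artifact to a given environment class."""
    print(f"deploying {hashlib.sha256(artifact).hexdigest()[:12]} to {env_class}")

# Build once...
artifact = build_artifact("./src")
checksum = hashlib.sha256(artifact).hexdigest()

# ...deploy many. Every class receives the identical, checksummed artifact,
# so test results in DEV/INT/PT/UAT speak for the exact bits that reach PROD.
for env_class in ["DEV", "INT", "PT", "UAT", "PROD"]:
    deploy(artifact, env_class)
    assert hashlib.sha256(artifact).hexdigest() == checksum  # no per-class rebuild
```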

Second, the actual computing capacity you deliver any environment on may be virtualized; whether you provide your own bare-metal assets (a private cloud foundation) or use a public cloud entirely is less the issue. Understanding whether a class of environment is transient or permanent only affects how you plan for the costs and policies of operating it, and it leads back to the discrete cost of innovation as a result.

The Cost of Creation

When this topic is considered, the first and immediate point of view is usually R&D, or the development class of environment. And yes, the ability to provision a DEV class environment and then tear it down when the effort is complete is the normal use case for justifying virtual infrastructure in the first place (public, private or hybrid cloud usage). The obvious benefits include having no “throttle” on how many concurrent efforts might be spun up.

Transient DEV environments also build in a concept of failure, or elongated “playing,” not intended to produce a specific result but intended to dabble with innovation to see what “might” be possible. All of this comes without the capital investment it would take to build and maintain an R&D lab to accommodate it, or to set aside fixed computing capacity for it. DEV efforts with intent (i.e., part of a formal project) can have their costs tracked discretely against that planned innovation. DEV efforts without intent (i.e., more like what used to occur in R&D labs) can be associated with the overhead costs of all innovation.
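
One way to keep that distinction auditable is to tag every transient DEV environment at provisioning time with either a project code (innovation with intent) or a shared overhead bucket (innovation without intent). Below is a minimal sketch; the in-memory ledger and the "R&D-OVERHEAD" bucket name are invented stand-ins for your cloud provider's tagging and billing features:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DevEnvironment:
    """A transient DEV environment; cost_center is a project code or overhead."""
    name: str
    cost_center: str
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

ledger: list[DevEnvironment] = []

def provision_dev(name: str, project_code: str | None = None) -> DevEnvironment:
    # Efforts with intent get a project code; everything else rolls up to overhead.
    env = DevEnvironment(name, cost_center=project_code or "R&D-OVERHEAD")
    ledger.append(env)
    return env

provision_dev("feature-login-rework", project_code="PRJ-1042")  # planned innovation
provision_dev("quantum-dabbling")                               # elongated "playing"

for env in ledger:
    print(env.name, "->", env.cost_center)
```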

The Cost of Evaluation

But beyond DEV, the next major class of environment to consider is the integration (INT) class. If your company has more than one application, has apps that work together or has applications with upstream or downstream dependencies, you likely need an INT class environment. The major responsibility of this class of environment is to provide “something to point to”: your app (undergoing change) has an environment where it can point to all the other applications to test how the interconnections work. Logically, this class of environment is a better candidate for permanence, because you can rarely predict which apps may need this service at which time across the entire portfolio. Many apps undergoing change may be using it at one time to ensure they all work with each other (i.e., assessing their interoperability with future versions early, prior to production).
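
In practice, “something to point to” often reduces to a stable, published map of partner endpoints for the permanent INT class, so any app under change can resolve its upstream and downstream dependencies. A rough sketch follows; the service names and hosts are hypothetical, and a real app would load the map from a configuration service rather than a literal dict:

```python
# Hypothetical endpoint map for the permanent INT environment.
INT_ENDPOINTS = {
    "billing":   "https://billing.int.example.com",
    "inventory": "https://inventory.int.example.com",
    "identity":  "https://identity.int.example.com",
}

def endpoint_for(service: str, env_class: str = "INT") -> str:
    """Resolve a partner app's endpoint; permanence means this map rarely changes."""
    if env_class != "INT":
        raise ValueError("only the INT map is sketched here")
    return INT_ENDPOINTS[service]

# An app under change points its integration tests at the stable INT partners.
print(endpoint_for("billing"))
```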

Another reason for maintaining a permanent INT class environment may involve your role as a vendor to other companies that “point” their apps at yours as a service provider. Even organizations that have little interaction between their own apps may well be a significant provider to a wide swath of customer applications. If your customers need a place to point to, the INT class environment is typically the one you want, so in this instance permanence is the logical choice.

Next up is the performance test (PT) class of environment. Emulating production, or being able to project the differences between the platform you test on and what production provides, is key to success. However, this class of environment is a perfect candidate for transience (especially if your development and production environments are already virtual). Why maintain expensive hardware on the floor only to conduct periodic testing that requires massive reconfiguration and preplanning? Dump it. Instead, use a virtualized platform where provisioning is near instant and can exactly mimic both development and production (remember: build once, deploy many).
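
That provision/test/tear-down lifecycle is easy to express as a context manager, so the teardown is never forgotten even when a test run fails. The provision and teardown calls below are hypothetical placeholders for your virtualization or cloud tooling; the same pattern applies to the UAT class discussed next:

```python
from contextlib import contextmanager

# Hypothetical stand-ins for real provisioning APIs.
def provision(env_class: str, sized_like: str) -> str:
    print(f"provisioning transient {env_class} sized like {sized_like}")
    return f"{env_class.lower()}-env-001"

def teardown(env_id: str) -> None:
    print(f"tearing down {env_id}; capacity (and cost) returns to zero")

@contextmanager
def transient_environment(env_class: str, sized_like: str = "PROD"):
    env_id = provision(env_class, sized_like)
    try:
        yield env_id
    finally:
        teardown(env_id)  # runs even if the test suite raises

with transient_environment("PT") as env:
    print(f"running performance suite against {env}")
```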

Another significant class of environment in the SDLC to examine is user acceptance testing (UAT). Logically, this class of environment may get a lot of usage, but ideally only after the app has passed testing in development, INT and PT. Because of this, the UAT class should be transient as well: a good candidate to tear down when not in use. You may wish to plan a setup and teardown of this class to mimic what happens in formal development efforts, following the same pattern sketched above for PT.

Finally, the quality assurance (QA) or preproduction (PreProd) class of environments, if in use, is designed to emulate the current state of production completely, allowing the app under change to run against a more sophisticated type of INT class environment (possibly with copied production data, etc.). This happens more often when the earlier INT class environment contains a great number of other applications under change; this one provides a “cleaner” place to point to, containing only apps as they operate in production. The QA environment may also be used after the fact, when an error is discovered in production, for debugging or more extensive monitoring. Obviously, if your organization finds a need for this class of environment, it will carry the permanence characteristic.

The Cost of Operations

Production is by nature permanent. Virtualizing production offers the ability to scale on demand, but a scalable production solution benefits from more mature DevOps services (whether you own the data center or not). Consider for a moment the number of errors attributed to code after a DevOps embrace: the number of errors goes down, the time to resolve them goes down and the size of each effort gets smaller, for less risk and higher speed. This combination means production is going to suffer far less from aberrant code running amok (asking for tons of memory, CPU or disk due to error). Those kinds of problems get detected earlier in the SDLC process (because of DevOps) and hardly ever reach production after DevOps is engaged.

With a mature use of DevOps, any kind of scalable capacity-on-demand solution will be engaged for only two reasons: a dramatic increase in customer demand (the ideal scenario) or a denial-of-service attack attempting to drive your costs higher without real customer use. The monitoring solution you employ will help decipher the difference. If your application has logging features built into it (i.e., you produce a log detailing which customer is performing which key feature in your app, and when), you should be able to catch the latter scenario with an automated monitoring detection effort.
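
As a rough sketch of that idea, assume each request is logged as a (customer ID, feature) pair, with None for anonymous or feature-free hits; a spike whose requests are overwhelmingly anonymous looks like a denial-of-service attempt rather than demand. The log records and threshold here are invented for illustration:

```python
from collections import Counter

# Hypothetical log records: (customer_id or None, feature or None).
requests = [
    ("cust-17", "checkout"), ("cust-09", "search"), (None, None),
    (None, None), (None, None), (None, None), ("cust-17", "checkout"),
]

def classify_spike(log: list[tuple[str | None, str | None]],
                   threshold: float = 0.5) -> str:
    """Label a traffic spike by the share of requests tied to real customer features."""
    legitimate = sum(1 for cust, feat in log if cust and feat)
    return "customer demand" if legitimate / len(log) >= threshold else "possible DoS"

print(classify_spike(requests))                       # -> "possible DoS" here
print(Counter(cust for cust, _ in requests if cust))  # who is actually active
```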

New Abilities to Measure the Cost of Innovation

Understanding the cost delta between transient and permanent environments is worth examining. It will help you better understand the cost of innovation on an annualized basis, and it may also reveal how much real effort goes into testing in each class. If user adoption is suffering, for example, and you determine that UAT class environments are largely unused or torn down, you know better where to address the problems earlier in the cycle. The same can be said for PT: how much testing was done, and for what kind of scaled performance in production.

If your organization maintains INT or QA class environments, you may want to allocate their costs to SDLC overhead, as they tend to be permanent and their value shared. The development, PT and UAT environments, if transient, represent the variable cost of a particular innovation effort and can be tracked discretely. When considering the timing, value and benefits of virtualization, you may wish to factor in where you are on the DevOps continuum: if you have had DevOps running for a while, the timing for virtualization and scalable capacity on demand may be better than if you are just starting out.
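
A back-of-the-envelope allocation might look like the sketch below: the permanent classes (INT, QA) roll up to a fixed monthly SDLC overhead, while transient environment hours are billed to the project that consumed them. All rates and usage figures are invented for illustration:

```python
# Invented rates and usage, purely for illustration.
PERMANENT_MONTHLY = {"INT": 4_000.0, "QA": 3_500.0}    # fixed, shared-value cost
TRANSIENT_RATE_PER_HOUR = {"DEV": 1.50, "PT": 6.00, "UAT": 2.25}

# Transient hours consumed by one formal effort ("innovation with intent").
project_usage = {"DEV": 320, "PT": 40, "UAT": 80}

overhead = sum(PERMANENT_MONTHLY.values())
project_cost = sum(TRANSIENT_RATE_PER_HOUR[c] * h for c, h in project_usage.items())

print(f"SDLC overhead (permanent classes): ${overhead:,.2f}/month")
print(f"Discrete cost of this effort (transient classes): ${project_cost:,.2f}")
```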

Keep in mind that most organizations that have transitioned away from waterfall and into agile will lose a good deal of the former financial controls that worked in waterfall and do not work as well in agile. DevOps delivery is also capable of moving change at orders of magnitude higher speed than former manual methods. This combination makes trying to cost innovation discretely more difficult. Being able to associate transient environment costs with formal development efforts (ones with intent) may restore some of that financial discipline. It may also help identify the true overhead costs of innovation located in the operation of environments better classed as permanent.

To continue the conversation, feel free to contact me.
