From Automated Cloud Deployment to Progressive Delivery

By David Eastman on October 10, 2019

Your team is on its agile journey and you can more or less track a deployment from a story, through git commits, to an automated build, to an artifact repository, into a container and then onto the cloud. Or, in fact, any of the many other valid variants that lead you to believe your deployments are largely automated. What you can point to is that a business request comes in and the result is a service or a new app. Your purview seems to stop before your users can even respond, though. Is that good enough?

Let’s go back to the early idea of a release. It was a collection of all the features and fixes the loudest stakeholders had persuaded the product owner to put at the top of the story list. A date was promised, but QA was late, and the persistent stakeholders squeezed a little more juice out of a story. Then it was stuffed into a ball and delivered to your servers overnight, lest anyone notice. The next day, the developers had to deal with the fallout as the support queues grew with confused users.

While developers were getting the hang of the agile notion of automated deployment, the release lifecycle pretty much ended with a deployment going live. This gave certainty that what had been built and placed in the artifact repository was the same thing the testers had seen in the staging environment, and that it had the changes Jira said it had.

This type of release was very much like a birthday present. Your uncle talked to your dad briefly, without your knowledge, and agreed you really would benefit from a new pair of socks. Then these were delivered on the day with the receipt in the bag, just in case you needed to take them back.

The first evolution came with the concept that a release wasn’t exactly synonymous with a deployment. Finally, the end-user’s perspective nudged its way into corporate release strategy–updates could be deployed without necessarily impinging directly on every user. What if there were two identical release environments with clever network switching, meaning only one was truly live? This type of flip-flop arrangement (sometimes referred to as blue/green deployment) allowed changes to be tested internally in a realistic environment before going live.
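By way of illustration, here is a minimal Python sketch of that switch; the two internal URLs are hypothetical, and a real setup would flip traffic at the load balancer or DNS layer rather than in application code.

```python
# A minimal sketch of the flip-flop switch, assuming two identical
# environments at hypothetical internal URLs; real setups usually flip
# traffic at the load balancer or DNS layer rather than in application code.
ENVIRONMENTS = {
    "blue": "https://blue.internal.example.com",
    "green": "https://green.internal.example.com",
}

live_environment = "blue"  # single source of truth for which stack is live


def route(request_path: str) -> str:
    """Return the upstream URL that should serve this request."""
    return f"{ENVIRONMENTS[live_environment]}{request_path}"


def flip_live() -> None:
    """Swap live traffic to the idle environment once it has been verified."""
    global live_environment
    live_environment = "green" if live_environment == "blue" else "blue"


if __name__ == "__main__":
    print(route("/checkout"))  # served by blue
    flip_live()                # green has been tested internally; make it live
    print(route("/checkout"))  # now served by green
```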

It was the advertising industry that prompted the idea of A/B testing. Instead of guessing which variation of a campaign might be superior, the surprisingly scientific method of showing audiences a controlled sample and a variation was used to see which they preferred. In the digital age, it is possible to deploy two variations of the same release to different server sets, meaning each user session could be using either release. It is then necessary to associate the resulting click-throughs, or whatever success measurement is chosen, with the specific release flavor. The worth of this kind of experimentation is only as good as the question you ask, but it does at least observe real user interaction.
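As a rough sketch of those two moving parts, bucketing each session into a release flavor and attributing the success metric back to it, the following Python is purely illustrative (the session IDs and click-through counter are made up).

```python
# A rough sketch of A/B assignment and attribution; session IDs and the
# click-through metric here are purely illustrative.
import hashlib
from collections import Counter


def assign_variant(session_id: str) -> str:
    """Deterministically bucket a session into release flavor 'A' or 'B'."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"


click_throughs = Counter()  # success measurement per release flavor


def record_click(session_id: str) -> None:
    """Attribute a click-through to whichever flavor served this session."""
    click_throughs[assign_variant(session_id)] += 1


if __name__ == "__main__":
    for session in ("alice", "bob", "carol", "dave", "eve"):
        record_click(session)
    print(dict(click_throughs))  # e.g. {'A': 2, 'B': 3}
```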

Moving on, DevOps environments began to use more feature flags, or toggles. These were code paths that could deliver feature changes on live servers, controlled through configuration changes, which reduced the need to redeploy to make certain changes. Release-minded teams were coalescing around configuration, as opposed to just code. Whereas code, once compiled and built, was locked away in unreadable artifact files, configuration files were usually human-readable and could be administered by a larger stakeholder community. This helped keep feature changes closer to the stakeholders and less likely to slip back into silos.
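A minimal Python sketch of the idea, assuming a hypothetical feature_flags.json file that stakeholders can edit: because the flag is read at run time, flipping a feature needs only a configuration change, not a redeploy.

```python
# A minimal sketch of a feature toggle read from configuration at run time;
# the flag file name and flag names are hypothetical.
import json
from pathlib import Path

FLAG_FILE = Path("feature_flags.json")  # e.g. {"new_checkout": true}


def flag_enabled(name: str) -> bool:
    """Re-read the config on each call so a flag flip needs no redeploy."""
    if not FLAG_FILE.exists():
        return False
    return bool(json.loads(FLAG_FILE.read_text()).get(name, False))


def checkout() -> str:
    # Both code paths ship in the same deployment; configuration decides
    # which one users actually see.
    if flag_enabled("new_checkout"):
        return "new checkout flow"
    return "existing checkout flow"


if __name__ == "__main__":
    print(checkout())
```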

As a more defined understanding of delivery arrived, so did an understanding of the user community. It used to be the case that a user was a one-line entry in a database. Even firms whose business is not mining their users' data understand there are different sets of users for products and services.

Traditionally, there have always been internal users or beta testers: those within the firm whose job is to check that the bits they understand, or are responsible for, are working as expected. Then, outside the firewall, there is the tech-savvy community who actively want the latest updates. These users report bugs and often compare your product with the competition, perhaps using both. This audience will not lose their marbles if you deliver a bug to them. More to the point, they will notice your failings rapidly, perhaps telling everyone on social media.

Next come the disengaged users who may only have the free version or tier of your product. These people are more likely to work on impressions and are best not disappointed; they are also unlikely to be keen on updates. Finally, there is the large core of solid users who pay their way, just want things to work and need a clear path to continuity; they already appreciate the software and will understand the roadmap by osmosis. These will probably include your biggest users, too. Of course, if you have no way of distinguishing your users, all of the above is moot.
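One way to act on these distinctions, sketched below in Python, is to group users into rollout rings and open a release to one ring at a time; the ring names and the idea that every user record carries a ring label are assumptions for illustration only.

```python
# An illustrative sketch of ring-based eligibility; the ring names and the
# assumption that each user record carries a ring label are made up here.
from dataclasses import dataclass

RING_ORDER = ["internal", "early_adopter", "free_tier", "core_paying"]


@dataclass
class User:
    user_id: str
    ring: str  # one of RING_ORDER


def eligible(user: User, release_ring: str) -> bool:
    """A release opened up to a ring is also visible to all earlier rings."""
    return RING_ORDER.index(user.ring) <= RING_ORDER.index(release_ring)


if __name__ == "__main__":
    beta_tester = User("u42", "early_adopter")
    print(eligible(beta_tester, "internal"))       # False: internal-only so far
    print(eligible(beta_tester, "early_adopter"))  # True: opened to early adopters
```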

With services sitting on a public cloud and CDN-like edge services such as CloudFront, the ability to release by territory is much more straightforward. This is also increasingly necessary for legal reasons (EU regulations, for example). It allows for tactical releases to communities of different sizes and in different time zones. The practice of deploying changed services to a small group for risk mitigation, or "canarying," is another method that takes advantage of smart routing. It does, however, require the ability to quickly observe and roll back releases.
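Canarying hinges on exactly that loop of routing a small slice of traffic, watching it and pulling back fast. The toy Python sketch below uses assumed numbers (5% canary traffic, rollback above a 10% error rate over 20+ requests); in practice the routing would live in a load balancer or service mesh and the observation in real telemetry.

```python
# A toy sketch of canarying with automatic rollback, under assumed numbers;
# real systems drive this from a load balancer or service mesh and telemetry.
import random

canary_weight = 0.05  # fraction of traffic sent to the new version
requests = {"stable": 0, "canary": 0}
errors = {"stable": 0, "canary": 0}


def choose_version() -> str:
    """Smart routing: send a small slice of sessions to the canary."""
    return "canary" if random.random() < canary_weight else "stable"


def observe(version: str, succeeded: bool) -> None:
    """Record the outcome and roll back quickly if the canary looks unhealthy."""
    global canary_weight
    requests[version] += 1
    if not succeeded:
        errors[version] += 1
    if version == "canary" and requests["canary"] >= 20:
        if errors["canary"] / requests["canary"] > 0.10:
            canary_weight = 0.0  # rollback: all traffic returns to stable


if __name__ == "__main__":
    for _ in range(1000):
        v = choose_version()
        # Simulate a buggy canary (20% failures) against a healthy stable (1%).
        observe(v, succeeded=random.random() > (0.20 if v == "canary" else 0.01))
    print(canary_weight, requests, errors)
```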

Today, most software has some degree of dark launch that restricts the visibility of a release to the appropriate community until all is well. In this sense, progressive delivery (defined by James Governor as "continuous delivery with fine-grained control over the blast radius") can be seen as the virus-like spreading of your software or services from your development teams' laptops to the last user.
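A dark launch in that spirit might look something like the sketch below, where the new code path runs for every request so it can be observed, but only an assumed "beta" community ever sees its output; the search functions are stand-ins, not anything from a real codebase.

```python
# A sketch of a dark launch with stand-in search functions: the new path runs
# for every request so it can be observed, but only an assumed "beta"
# community ever sees its output.
import logging


def old_search(query: str) -> list[str]:
    return [f"old result for {query}"]


def new_search(query: str) -> list[str]:
    return [f"new result for {query}"]


def search(query: str, community: str) -> list[str]:
    old = old_search(query)
    try:
        new = new_search(query)
        if new != old:
            logging.info("dark launch produced a different result for %r", query)
    except Exception:
        return old  # failures in the new path stay invisible to users
    return new if community == "beta" else old


if __name__ == "__main__":
    print(search("socks", "free_tier"))  # everyone else still gets the old path
    print(search("socks", "beta"))       # the beta community sees the new path
```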

Observing service use is simply part of the core concern of virtually every company in the tech space. From this perspective, progressive delivery shifts how you see your development team: from mere builders to user fulfillment specialists, which, of course, they always were.

The future will undoubtedly involve increasingly customer-driven releases, a trend that goes hand in hand with the increasing use of data science to study user communities and their exhaust. The important takeaway is to make sure your team is reaching further to the right-hand side of the product journey, and to make the users' reaction to what you build a larger part of your sensory input. Think more about constant small changes and less about ceremonial releases.

— David Eastman
