Software Development: A Better Way to Measure Success

By Rob Duffy | June 15, 2018

Here’s a metaphor for software development you probably haven’t heard before: It’s like flying a plane. You have a starting point and a destination in mind, there’s a good chance you’ll change course midflight, and … sometimes you get a little nauseated?


Okay, it’s not the best analogy. But there is one aspect of piloting a plane that offers valuable insight into how to measure your team’s performance more consistently. It’s called the performance-control technique, and it may be the best method you’ve never heard of for keeping your engineering teams aligned behind a common goal.

Imagine for a second sitting in the cockpit of a plane, surrounded by clouds. Forget using the ground to orient yourself; you have nothing but your instruments and wits to guide you. It’s a situation I’m plenty familiar with, having taken flying lessons out here in rainy Seattle. In those cases, you use your altitude and power settings to set up a particular control scenario that results in your desired output, whether it’s ascending, descending, straight and level or turning. And then you monitor your performance—in this case via altitude, vertical speed and airspeed indicators—to verify that you’re achieving your desired output.

Now unless your software development pipeline is irredeemably broken, you are probably not flying completely blind as you work to build and ship new features. But the technique still applies: Choose your control metrics (time to release, for example), and then set up a series of performance metrics to keep it on track.
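That pairing of one control metric with a set of guarding performance metrics can be sketched in a few lines of code. This is my own minimal illustration of the idea, not anything from a specific tool; the metric names and thresholds (`escaped_defect_rate`, `rollback_rate`) are hypothetical:

```python
# A minimal sketch of the performance-control pairing: one control metric
# you actively drive, guarded by performance metrics with thresholds.
# All names and numbers here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ControlMetric:
    name: str                 # the metric you actively drive, e.g. time to release
    target: float             # the value you are steering toward
    guards: dict[str, float] = field(default_factory=dict)  # performance metric -> max allowed

    def within_envelope(self, readings: dict[str, float]) -> bool:
        """True only if every guarding performance metric is at or under its threshold.

        A missing reading counts as out of envelope: if you aren't
        measuring a guard, you can't claim the control goal is healthy.
        """
        return all(
            readings.get(guard, float("inf")) <= limit
            for guard, limit in self.guards.items()
        )


# Drive release speed, but only count it as a win while quality holds.
release_speed = ControlMetric(
    name="time_to_release_hours",
    target=24.0,
    guards={"escaped_defect_rate": 0.02, "rollback_rate": 0.05},
)

print(release_speed.within_envelope({"escaped_defect_rate": 0.01, "rollback_rate": 0.03}))  # True
print(release_speed.within_envelope({"escaped_defect_rate": 0.10, "rollback_rate": 0.03}))  # False
```

The design point mirrors the cockpit analogy: the control metric is the input you adjust, and the guards are the instruments you watch to confirm the adjustment isn't quietly costing you elsewhere.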

Cloudy Skies Ahead

The greatest strength of the performance-control technique is its acknowledgement that any control can result in unintended consequences. Take reducing the time from check-in to release. It’s a fantastic goal: The quicker you get code into production, the quicker you deliver updates to your customers. The quicker you do that, the quicker you get feedback and respond to their needs. The quicker you do that, the quicker you can iterate on features. And the cycle continues.

But it’s also a goal that can result in unintended consequences: When you single it out for attention, it’s only natural that team members will want to do everything they can—at the expense of almost everything else—to achieve it. That’s not to say they want to do wrong; rather, they’re just trying to be good citizens. (And, okay, they don’t want their names to show up in a report about why the team didn’t meet its quarterly goal.)

So people make compromises. Maybe they drop full regression passes and replace them with incremental feature and integration testing. It’s not that they don’t care about quality. But in their quest to help the organization meet its single-minded goal of reducing the time from check-in to release, they opt for the easiest change that could get them there, indirectly prioritizing speed over quality in the process. And that happens because there aren’t checks in place to keep it from happening.

Taking Control

How do you put this into practice at your own organization? Start by choosing your goals and then selecting the control metrics. For many, it could be as simple as increasing speed; it’s a natural goal to chase, for the reasons already discussed. But it may be something different; only you and your teams can decide, and that requires being intentional about how you evaluate your production environment.

Then it’s time to choose your performance metrics for each of your goals. You can do that yourself, but it may be more illuminating—and, honestly, more fun—to get your software development team involved. That could range from encouraging them to brainstorm the negative consequences of implementing that goal to straight-up asking them how they’d game the system.

The following chart offers examples of control goals, their resultant unintended consequences, and the performance metrics necessary to keep them in check.

| Desired Output | Control Goal | Performance Goal | Unintended Consequences |
| --- | --- | --- | --- |
| Climb to 4500' | Increase power, nose up | Airspeed, altitude | Stalling |
| Reduce bounce rate | Reduce site latency | Revenue, conversions, customer engagement | Smaller pages, poor experience, feature attrition |
| Generate more conversions | Increase availability | Releases, backlog ticket aging | Lower frequency of releases, cherry-picked work items, no risk-taking |
| Reduce MTTR | Speed up ticket closure | Re-open rate, new ticket rates | Premature closure, delay in opening tickets for issues |
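A row like the MTTR one can even be turned into an automated check: celebrate the control goal only while its guarding metrics hold steady. This is a hypothetical sketch of that idea; the metric names and tolerance are my own assumptions, not from any particular monitoring product:

```python
# Hypothetical check for a "gamed" control goal: flag any guarding
# performance metric that degraded while the control metric improved.
# Metric names and values below are illustrative assumptions.

def gaming_suspected(control_improved: bool,
                     guard_deltas: dict[str, float],
                     tolerance: float = 0.0) -> list[str]:
    """Return the guard metrics that got worse while the control goal got better.

    guard_deltas holds the period-over-period change in each performance
    metric, where a positive delta means the metric degraded (e.g. the
    ticket re-open rate went up).
    """
    if not control_improved:
        return []  # nothing to celebrate, so nothing to second-guess
    return [name for name, delta in guard_deltas.items() if delta > tolerance]


# MTTR dropped, but tickets are being reopened more often: a red flag
# that closures may be premature.
flags = gaming_suspected(True, {"reopen_rate": 0.04, "new_ticket_rate": -0.01})
print(flags)  # ['reopen_rate']
```

In dashboard terms, this is the difference between reporting "MTTR is down 30%" and reporting "MTTR is down 30%, and nothing else moved to pay for it."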

It’s important to note that how closely you monitor each of these performance metrics will vary, and in some cases significantly. Rather than attempt to apply a one-size-fits-all approach, instead think of each in terms of how long it takes to detect that the metric is falling out of its control envelope and how long it would take to recover if it did. If it takes one day to detect that you’re outside of your envelope and three weeks to recover, then by all means, check in daily. But if it takes three months to fall out of the control envelope and just a day to recover, you may only need to check it once every couple of weeks.
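The detect-versus-recover reasoning above lends itself to a rough rule of thumb. The function below is my own illustration of that heuristic, with the daily floor and two-week ceiling chosen to match the article's two examples; none of it is a formula from the piece:

```python
# A rough heuristic, not a formula from the article: review a performance
# metric more often when it drifts out of its envelope quickly and takes
# a long time to recover, less often when the reverse is true.

def review_cadence_days(days_to_detect_drift: float, days_to_recover: float) -> float:
    """Suggest how often (in days) to review a performance metric.

    Fast drift plus slow recovery -> check frequently; slow drift plus
    fast recovery -> check occasionally. Clamped between daily and
    once every two weeks.
    """
    ratio = days_to_detect_drift / max(days_to_recover, 1.0)
    return min(max(ratio, 1.0), 14.0)


# Drifts out of envelope in a day, takes three weeks to recover: check daily.
print(review_cadence_days(1, 21))   # 1.0
# Takes a quarter to drift, a day to recover: every couple of weeks is enough.
print(review_cadence_days(90, 1))   # 14.0
```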

A Soft Landing

As we’ve supported software development teams in their efforts to modernize their software delivery methods, we’ve heard time and time again, “What’s the best metric for measuring success?” And we struggle to come up with an answer because there is no golden metric that works for everyone. But more important, even the good ones can lead well-meaning engineers and team leaders to optimize for one thing while letting others suffer.

So rather than search for a magic bullet, the secret is to find goals that work for your needs—and then build in metrics that keep you and your software development team accountable.

— Rob Duffy