Software Development: A Better Way to Measure Success

Here’s a metaphor for software development you probably haven’t heard before: It’s like flying a plane. You have a starting point and a destination in mind, there’s a good chance you’ll change course midflight, and … sometimes you get a little nauseated?

Okay, it’s not the best analogy. But there is one aspect of piloting a plane that offers valuable insight into how to measure your team’s performance more consistently. It’s called the performance-control technique, and it may be the best method you’ve never heard of for keeping your engineering teams aligned behind a common goal.

Imagine for a second sitting in the cockpit of a plane, surrounded by clouds. Forget using the ground to orient yourself; you have nothing but your instruments and wits to guide you. It’s a situation I’m plenty familiar with, having taken flying lessons out here in rainy Seattle. In that situation, you use your attitude and power settings to set up a particular control scenario that produces your desired output, whether that’s climbing, descending, flying straight and level or turning. Then you monitor your performance, in this case via the altitude, vertical speed and airspeed indicators, to verify that you’re achieving that output.

Now unless your software development pipeline is irredeemably broken, you are probably not flying completely blind as you work to build and ship new features. But the technique still applies: Choose your control metrics (time to release, for example), and then set up a series of performance metrics to keep them on track.
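
To make that concrete, here is a minimal sketch in Python of what pairing a control metric with its performance metrics might look like. The metric names and thresholds are invented for illustration; substitute whatever your pipeline actually measures.

    from dataclasses import dataclass

    @dataclass
    class Metric:
        """One instrument on the panel: a named reading and its envelope."""
        name: str
        value: float            # latest observed reading
        threshold: float        # edge of the acceptable envelope
        higher_is_worse: bool = True

        def in_envelope(self) -> bool:
            if self.higher_is_worse:
                return self.value <= self.threshold
            return self.value >= self.threshold

    # Control metric: the output we're deliberately steering toward.
    control = Metric("hours_from_checkin_to_release", value=30.0, threshold=24.0)

    # Performance metrics: the instruments that reveal whether chasing the
    # control metric is quietly breaking something else.
    performance = [
        Metric("escaped_defects_per_release", value=1.0, threshold=2.0),
        Metric("regression_coverage_pct", value=87.0, threshold=80.0,
               higher_is_worse=False),
    ]

    drifting = [m.name for m in performance if not m.in_envelope()]
    if drifting:
        print(f"Side effects detected: {', '.join(drifting)}")
    else:
        print("Performance envelope holds; keep working the control metric.")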

Cloudy Skies Ahead

The greatest strength of the performance-control technique is its acknowledgement that any control can result in unintended consequences. Take reducing the time from check-in to release. It’s a fantastic goal: The quicker you get code into production, the quicker you deliver updates to your customers. The quicker you do that, the quicker you get feedback and respond to their needs. The quicker you do that, the quicker you can iterate on features. And the cycle continues.

But it’s also a goal that can produce unintended consequences: When you single it out for attention, it’s only natural that team members will want to do everything they can, at the expense of almost everything else, to achieve it. That’s not to say they mean to do wrong; they’re just trying to be good citizens. (And, okay, they don’t want their names to show up in a report about why the team missed its quarterly goal.)

So people make compromises. Maybe they drop full regression passes and replace them with incremental feature and integration testing. It’s not that they don’t care about quality. But in their quest to help the organization meet its single-minded goal of reducing the time from check-in to release, they opt for the easiest change that could get them there, indirectly prioritizing speed over quality in the process. And that happens because there aren’t checks in place to keep it from happening.
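
One way to build in such a check is a release gate that compares current regression coverage against a recorded baseline, so speed gains can’t quietly come out of the test suite. The sketch below assumes a hypothetical coverage_baseline.json artifact written by the previous release; the file name and tolerance are illustrative, not prescriptive.

    import json
    import sys

    BASELINE_FILE = "coverage_baseline.json"  # hypothetical artifact from the last release
    ALLOWED_DROP_PCT = 1.0                    # tolerate noise, not real erosion

    def coverage_gate(current_pct: float) -> None:
        """Block the release if regression coverage eroded past the baseline."""
        with open(BASELINE_FILE) as f:
            baseline_pct = json.load(f)["coverage_pct"]
        if current_pct < baseline_pct - ALLOWED_DROP_PCT:
            # Speed was bought by cutting tests; fail this pipeline step.
            sys.exit(f"Coverage fell from {baseline_pct:.1f}% to "
                     f"{current_pct:.1f}%. Release blocked.")
        print(f"Coverage holds at {current_pct:.1f}%; release may proceed.")

    # e.g., call coverage_gate(84.2) in the pipeline after the test stage runs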

Taking Control

How do you put this into practice at your own organization? Start by choosing your goals, then select the control metrics that drive them. For many organizations, the goal could be as simple as increasing speed; it’s a natural one to chase, for the reasons already discussed. But it may be something different; only you and your teams can decide, and only by being intentional about how you evaluate your delivery pipeline and production environment.

Then it’s time to choose your performance metrics for each of your goals. You can do that yourself, but it may be more illuminating—and, honestly, more fun—to get your software development team involved. That could range from encouraging them to brainstorm the negative consequences of implementing that goal to straight-up asking them how they’d game the system.

The following table offers examples of control goals, the unintended consequences they can produce and the performance metrics necessary to keep them in check.

Desired Output | Control Goal | Performance Metrics | Unintended Consequences
Climb to 4500’ | Increase power, nose up | Airspeed, altitude | Stalling
Reduce bounce rate | Reduce site latency | Revenue, conversions, customer engagement | Smaller pages, poor experience, feature attrition
Generate more conversions | Increase availability | Releases, backlog | Ticket aging, lower frequency of releases, cherry-picked work items, no risk-taking
Reduce MTTR | Speed up ticket closure | Re-open rate, new ticket rate | Premature closure, delay in opening tickets for issues

It’s important to note that how closely you monitor each of these performance metrics will vary, in some cases significantly. Rather than apply a one-size-fits-all cadence, think of each metric in terms of how quickly it can drift out of its control envelope and how long it would take to recover if it did. If a metric can fall out of its envelope in a day and takes three weeks to recover, then by all means, check it daily. But if it takes three months to fall out of the control envelope and just a day to recover, you may only need to check it once every couple of weeks.
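
If you want a starting point for setting that cadence, here is one possible heuristic that reproduces the two examples above. The scaling constants are arbitrary assumptions for illustration, not part of the technique.

    def review_interval_days(days_to_drift: float, days_to_recover: float) -> float:
        """Check more often when drift is fast or recovery is expensive."""
        # Costly recovery shrinks the interval; slow drift stretches it.
        penalty = 1.0 + days_to_recover / days_to_drift
        return max(1.0, days_to_drift / (10.0 * penalty))

    # Drifts out in a day, takes three weeks to recover: check daily.
    print(review_interval_days(1, 21))   # -> 1.0
    # Takes three months to drift, a day to recover: check every week or two.
    print(review_interval_days(90, 1))   # -> ~8.9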

A Soft Landing

As we’ve supported software development teams in their efforts to modernize their software delivery methods, we’ve heard time and time again, “What’s the best metric for measuring success?” And we struggle to come up with an answer because there is no golden metric that works for everyone. But more important, even the good ones can lead well-meaning engineers and team leaders to optimize for one thing while letting others suffer.

So rather than search for a magic bullet, the secret is to find goals that work for your needs—and then build in metrics that keep you and your software development team accountable.

Rob Duffy

Rob Duffy is an experienced tech leader with a demonstrated history of building high-performance engineering teams in both high-tech organizations and transformational situations, including Amazon and TIME, Inc. As co-founder and CEO of sodo, he partners with organizations from startups to decades-old corporations to provide the tools and training DevOps teams need to successfully navigate the modernization of software delivery.
