In a data-driven society, organizations rely heavily on sets of metrics to evaluate how they’re doing on a particular task, initiative or resource allocation. Metrics are particularly important to DevOps teams, whose very function is tied to concepts like collaboration and continuous improvement.
But, while metrics can help drive efficiencies, they can also drive DevOps teams down the wrong path. Just because you’re measuring specific things doesn’t mean you’re measuring the right things. And if you add more metrics, it doesn’t necessarily mean you’re creating more value; you may just be spinning your wheels faster and getting nowhere.
The key is choosing the right metrics that drive the right outcomes for your organization. But how do you choose the right metrics? And how do you measure the true value you’re creating? Organizations that successfully navigate this challenge will not only build successful DevOps cultures but also improve their overall competitiveness in an increasingly demanding business climate.
Be Measurable – and Be Actionable
First, choose metrics that are objectively measurable. Eliminate individual subjectivity from your metrics palette. Rather than asking, “How beautiful is my UI?” focus on measurable questions such as, “Is the UI accessible to people with disabilities?” or “Did it drive more clicks?”
It is important to choose metrics that are actionable, not just “vanity metrics.” Back in the 1970s, McDonald’s used to update the signs outside its restaurants with actual numbers: “3 Billion Served;” “4 Billion Served.” Now the signs read, “Billions and Billions Served.” That is the ultimate vanity metric – it’s not easily auditable and not actionable. How many billions? Should McDonald’s be aiming for trillions? A good example of a DevOps vanity metric is server uptime. In fact, it’s actually a contraindicative metric: with ephemeral cloud instances playing such an important role, short-lived servers often signal infrastructure that is being refreshed and redeployed regularly, so you could make the case that low server uptimes are actually preferable.
Metrics should focus on inter-team and intra-team performance and outcomes. DevOps is all about cross-functional teams — development, test, operations and information security, among others — working together to deliver business value. So, rather than focus on individual metrics like “lines of code,” focus on outcomes like “cost of customer acquisition,” “new customer signups” or “overall revenue.” It may not always be obvious how to move the needle, but that is where experimentation comes in.
Avoid the Noise
Focus on as few metrics as possible and maintain a high signal-to-noise ratio. When everything is a priority, nothing is. If you’re presenting your metrics to the board of directors and you have 50 different graphs on the screen, you’re focusing more on how the presentation looks than on showing real, tactical information. Choose metrics that align with your current goals and be ruthless about ensuring those metrics are accurate, timely and focused on a strategic goal.
Expect goals – and the metrics you track – to evolve over time. You may want to shorten feature delivery time, but to get there, you may start by measuring things like build time, regression testing time and deployment time. You will be moving the feature delivery metric, just in a roundabout way. The key strategy is to pick an outcome you want to achieve. Once you knock down that problem, you can keep the metric around but push it into the background: put an alert on it so that if it ever crosses a threshold you don’t want it to, you can act on it.
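To make that concrete, here is a minimal sketch of what “pushing a metric into the background” can look like in practice: a small check that stays quiet until a retired metric drifts past its agreed threshold. The data source, the 10-minute build budget and the print-based alert are all stand-ins; wire this to whatever your CI system and on-call tooling actually provide.

```python
# Minimal sketch: watching a "background" metric against a threshold.
# Assumes recent build durations (in seconds) can be pulled from your CI
# system; the threshold and data below are illustrative, not prescriptive.
from statistics import mean

BUILD_TIME_THRESHOLD_SECONDS = 600  # hypothetical budget: 10-minute builds

def check_build_time(recent_build_durations: list[float]) -> None:
    """Alert only when the retired metric crosses its agreed threshold."""
    avg = mean(recent_build_durations)
    if avg > BUILD_TIME_THRESHOLD_SECONDS:
        # Replace print with your real alerting channel (pager, chat, ticket).
        print(f"ALERT: average build time {avg:.0f}s exceeds "
              f"{BUILD_TIME_THRESHOLD_SECONDS}s budget")

check_build_time([540, 610, 655, 700])  # example data -> triggers the alert
```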
Engineering organizations should focus on the four or five metrics they are actively trying to improve at a given time, so they can achieve those outcomes. As those outcomes are achieved, they can retire the metrics or move them to the background, then pick the next outcome to achieve and assign new metrics around it.
Align Metrics With Your Priorities
Metrics can be situational. If you work in the network ops center, there are going to be some things you want to have up all the time. You’ll want to know the second a denial-of-service hits or the second error rates start going up. But for a software delivery pipeline, you can’t just throw out a bunch of metrics and hope the right ones will find their way to the forefront. You want to focus on metrics that align with your priorities today, next week, next month, this quarter or this year.
So, which metrics are most important for a DevOps organization to pursue? A set of metrics identified by Google’s DevOps Research and Assessment (DORA) team has emerged as the industry standard for determining how successful a company is at DevOps. These measurements include deployment frequency (DF), mean lead time for changes (MLT), mean time to recover (MTTR) and change failure rate (CFR). Together, these metrics show how well development teams deliver better software, faster, to their customers.
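As a rough illustration, all four DORA measurements can be derived from nothing more than deployment and incident timestamps. The record shapes and sample values below are invented for this example; in practice they would come from your CI/CD system and incident tracker.

```python
# Illustrative sketch of computing the four DORA metrics from deployment and
# incident records. The tuple layouts and sample data are assumptions made
# for this example, not a standard schema.
from datetime import datetime, timedelta

deployments = [
    # (commit_time, deploy_time, caused_failure)
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 15), False),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 11), True),
    (datetime(2024, 5, 4, 8), datetime(2024, 5, 4, 12), False),
]
incidents = [
    # (failure_start, service_restored)
    (datetime(2024, 5, 3, 11), datetime(2024, 5, 3, 14)),
]

period_days = 7  # length of the observation window
df = len(deployments) / period_days                                      # deployment frequency
mlt = sum((d - c for c, d, _ in deployments), timedelta()) / len(deployments)   # mean lead time
cfr = sum(1 for *_, failed in deployments if failed) / len(deployments)  # change failure rate
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)  # mean time to recover

print(f"DF: {df:.2f}/day  MLT: {mlt}  CFR: {cfr:.0%}  MTTR: {mttr}")
```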
I would add a fifth metric: mean time to discover. From the time a problem is injected into a project, how long does it take for us to discover it? Did our organization discover it or did the customer? From there, we can strategize about the cheapest possible way we could’ve caught a particular bug ourselves. Could we have written a five-line unit test that takes 100 milliseconds to run that would have caught that? Or was it more complicated? Was the UI misaligned so that it was impossible to catch that automatically?
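Here is a minimal sketch of how that fifth metric might be tracked, assuming you can record when a defect was injected (typically the offending commit) and when it was discovered (a failing test or a customer report). The defect records below are invented purely for illustration.

```python
# Sketch of the proposed fifth metric, mean time to discover (MTTD), plus the
# share of defects caught internally rather than by customers. Sample data is
# invented; real inputs would come from your VCS and defect tracker.
from datetime import datetime, timedelta

defects = [
    # (injected, discovered, found_internally)
    (datetime(2024, 5, 1), datetime(2024, 5, 2), True),    # caught by CI
    (datetime(2024, 5, 3), datetime(2024, 5, 10), False),  # reported by a customer
]

mttd = sum((found - injected for injected, found, _ in defects), timedelta()) / len(defects)
internal_catch_rate = sum(1 for *_, internal in defects if internal) / len(defects)

print(f"MTTD: {mttd}  caught internally: {internal_catch_rate:.0%}")
```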
Different segments of the software delivery process will be concerned with different sets of metrics. While the Dev/CI practice focuses on development lead time, cycle time and work in progress/technical debt, the QA team pays closer attention to idle time, defects discovered and escaped, and the aforementioned mean time to discover. While the deployment group aims to improve deployment frequency and change success rate, operations gets jazzed about studying the cost and frequency of outages.
Metrics are useful tools for DevOps organizations. They can push teams to be more efficient and more accountable. But to truly optimize the value they’re getting out of people and particular initiatives, it’s important to choose metrics carefully. Recognize that metrics aren’t a one-size-fits-all mechanism, and that more metrics doesn’t always equate to better performance. Choosing the right metrics – and reevaluating them on a consistent basis – is the best path to long-term improvement.