
Reducing Incident Resolution Time

From raw incident count to response time, there are several key metrics that top Operations teams track to measure and improve their performance. One of the most popular is mean time to resolution (MTTR): the time between failure and recovery from failure, which is directly linked to your uptime. While MTTR may be the gold standard when it comes to operational readiness, it's important for teams to look at the bigger picture to effectively decrease incident resolution time.
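At its simplest, MTTR is just the average of each incident's failure-to-recovery duration. A minimal sketch in Python (the incident timestamps are made up for illustration):

```python
from datetime import datetime, timedelta

def mean_time_to_resolution(incidents):
    """Average failure-to-recovery time across incidents.

    Each incident is a (triggered_at, resolved_at) datetime pair.
    """
    durations = [resolved - triggered for triggered, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

# Two hypothetical incidents: one resolved in 30 minutes, one in 60.
incidents = [
    (datetime(2024, 1, 5, 9, 0), datetime(2024, 1, 5, 9, 30)),
    (datetime(2024, 1, 12, 14, 0), datetime(2024, 1, 12, 15, 0)),
]
print(mean_time_to_resolution(incidents))  # 0:45:00
```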

Putting MTTR into perspective
Your overall downtime is a function of the number of outages as well as the length of each. Dan Slimmon does a great job discussing these two factors and how you may want to think about prioritizing them. Depending on your situation, it may be more important to minimize noisy alerts that resolve quickly (meaning your MTTR may actually increase when you do this). But if you’ve identified MTTR as an area for improvement, here are some strategies that may help.
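A quick hypothetical with made-up numbers shows why MTTR can move in the "wrong" direction while downtime improves. Suppose you eliminate eight noisy two-minute alerts and keep only the two real outages:

```python
def total_downtime(outage_minutes):
    """Overall downtime is the sum of every outage's length."""
    return sum(outage_minutes)

def mttr(outage_minutes):
    """MTTR is the average outage length."""
    return sum(outage_minutes) / len(outage_minutes)

before = [2, 2, 2, 2, 2, 2, 2, 2, 25, 35]  # eight noisy quick alerts plus two real outages
after = [25, 35]                            # the noisy alerts eliminated at the source

print(mttr(before), total_downtime(before))  # 7.6 76
print(mttr(after), total_downtime(after))    # 30.0 60
```

MTTR jumps from 7.6 to 30 minutes, yet total downtime drops from 76 to 60 minutes, which is why the metric needs context before you optimize for it.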

Working faster won’t solve the problem
It would be nice if we could fix outages faster simply by working faster, but we all know this isn’t true. To make sustainable, measurable improvements to your MTTR, you need to do a deep investigation into what happens during an outage. True – there will always be variability in your resolution time due to the complexity of incidents. But taking a look at your processes is a good place to start – often the key to shaving minutes lies in how your people and systems work together.

Check out your RESPONSE time
Some call it MTTA (mean time to acknowledge); others call it MTTR (the same acronym, but meaning mean time to respond). Either way, the clock starts ticking as soon as an incident is triggered, and with adjustments to your notification processes, you may be able to achieve some quick wins.

If your response time is on the longer side, you may want to look at how the team is being alerted. Do alerts reliably reach the right person? If the first person notified does not respond, can the alerts automatically be escalated, and how much time do you really need to wait before moving on? Setting the right expectations and goals around response time can help ensure that all team members are responding to their alerts as quickly as possible.
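The escalation logic described above can be sketched in a few lines of Python. This is an illustrative model only; the responder names, timeout, and helper functions are hypothetical, not a real PagerDuty configuration:

```python
import time

# Hypothetical escalation chain and ack timeout, for illustration.
ESCALATION_CHAIN = ["primary-on-call", "secondary-on-call", "team-lead"]
ACK_TIMEOUT_SECONDS = 300  # how long to wait before escalating to the next person

def notify(responder, incident):
    # Stand-in for a real notification (SMS, push, phone call).
    print(f"Notifying {responder} about: {incident}")

def wait_for_ack(timeout, acked):
    """Poll for an acknowledgement; `acked` is any zero-arg callable."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if acked():
            return True
        time.sleep(1)
    return False

def run_escalation(incident, acked, timeout=ACK_TIMEOUT_SECONDS):
    """Notify each responder in turn until someone acknowledges."""
    for responder in ESCALATION_CHAIN:
        notify(responder, incident)
        if wait_for_ack(timeout, acked):
            return responder  # this person owns the incident now
    return None  # nobody acknowledged; time to page everyone or open a bridge
```

The key tunable is the ack timeout: too long and an unanswered page silently eats into your response time, too short and you wake up the whole chain for alerts the primary was already handling.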

Establish a process for outages
An outage is a stressful time, and it’s not when you want to be figuring out how you respond to incidents. Establish a process (even if it’s not perfect at first) so everyone knows what to do. When you’re designing your process for responding to incidents, make sure you have the following elements in place:

  1. Establish a communication protocol – if the incident is something more than one person needs to work on, make sure everyone understands where they need to be. A conference call or Google Hangout works well, as does a dedicated room in HipChat or Slack.
  2. Establish a leader – this is the person who will be directing the work of the team in resolving the outage. They will be taking notes and giving orders. If the rest of the team disagrees, the leader can be voted out, but another leader should be established immediately.
  3. Take great notes – about everything that’s happening during the outage. These notes will be a helpful reference when you look back during the post mortem. At PagerDuty, some of our call leaders like using a paper notebook beside their laptop as a visual reminder that they should be recording everything.
  4. Practice makes perfect – if you’re not having frequent outages, practice your incident response plan monthly to make sure the team is well-versed. Also, don’t forget to train new hires on the process.

To learn more, check out this talk about incident management from Blake Gentry, a former lead software engineer at Heroku.

Find and fix the problem
Finding out what is actually going wrong is often the lion’s share of your resolution time. It’s critical to have instrumentation and analytics for each of your services, and to make sure that information helps you identify what’s going wrong. For problems that are somewhat common and well understood, you may be able to implement automated fixes.
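A minimal sketch of what such an automated fix might look like, assuming alerts carry a machine-readable type. The alert names, service names, and remediation commands here are entirely hypothetical:

```python
import subprocess

# Illustrative runbook mapping well-understood alert types to a known fix.
# Anything not in the runbook gets handed to a human.
RUNBOOK = {
    "worker-queue-stuck": ["systemctl", "restart", "worker"],
    "cache-oom": ["systemctl", "restart", "memcached"],
}

def try_auto_remediate(alert_type):
    """Run the known fix for a recognized alert; return True on success."""
    command = RUNBOOK.get(alert_type)
    if command is None:
        return False  # unknown problem: page the on-call engineer instead
    result = subprocess.run(command, capture_output=True)
    return result.returncode == 0
```

Even when an automated fix works, it's worth recording that it fired, since a remediation that runs often is a signal that the underlying problem deserves a permanent fix.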

David Shackelford

David Shackelford is a product manager at PagerDuty, the leader in operations performance management. David works with teams across the company to plan, build, and ship features that improve operation teams’ quality of life, decrease time to incident resolution, and ultimately improve uptime for PagerDuty customers. Prior to PagerDuty, David worked in education technology, creating integrations between school information systems and digital content, and as a Teach for America corps member, teaching Mathematics in San Francisco public schools.
