Many Fortune 500 companies have recently undergone extensive layoffs and now find themselves having to achieve more with fewer people, all while grappling with how to integrate generative AI into their products. They must build while economizing, streamlining and increasing efficiency in every part of their company. Yet one of the most vital cogs in the machinery has struggled to evolve over the past 20 years.
In fact, until now, managing high-tech engineering teams at the world’s biggest companies has not been all that high-tech.
Back in the 1970s, the “waterfall” model was introduced: an inflexible process in which each phase of software development depended on the previous one’s completion. Then, in the early 2000s, companies started moving towards “agile” models that tracked progress in two-week sprints.
Now, a third era of software development management is emerging. It’s not waterfall or agile; it’s data-driven.
Current management systems cannot give managers the real-time, granular insights they need to make decisions on the spot and run projects as efficiently as possible. Data-driven models, by contrast, generate live data on how teams are progressing, and that visibility has far-reaching effects on a business’s productivity and long-term survival.
The emergence of DORA, a framework for tracking development metrics, makes this possible. For the first time, new tools can capture the software development life cycle automatically, without any manual input from engineers, leading to fewer roadblocks, a 20% faster cycle time and greater ROI.
Despite this, only a handful of Fortune 500 companies have embraced data-driven metrics. Here are the reasons that shift should happen now.
Real-Time Insights Are Game-Changers
Until now, companies have been using inaccurate methods to track their software development teams. Project management reports on engineers’ output are reviewed every two weeks, monthly or even quarterly, which means impactful changes are made well after the fact rather than at the moment they could make the greatest difference to cycle time.
In contrast, DORA measures specific real-time performance indicators: deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR) and change failure rate (CFR). These far more data-driven metrics measure efficiency and velocity at every key point of day-to-day software development, with feedback reaching management continually.
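To make these indicators concrete, here is a minimal Python sketch of how the four metrics could be computed over a reporting window. The deployment records and field names are hypothetical; in a real setup this data would be captured automatically from CI/CD and incident-tracking systems rather than entered by hand.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records; in practice these would be pulled
# automatically from CI/CD and incident-tracking systems.
deployments = [
    {"deployed_at": datetime(2024, 5, 1, 10), "commit_time": datetime(2024, 4, 30, 15),
     "caused_failure": False, "restored_at": None},
    {"deployed_at": datetime(2024, 5, 2, 14), "commit_time": datetime(2024, 5, 1, 9),
     "caused_failure": True, "restored_at": datetime(2024, 5, 2, 16)},
    {"deployed_at": datetime(2024, 5, 3, 11), "commit_time": datetime(2024, 5, 2, 17),
     "caused_failure": False, "restored_at": None},
]

period_days = 7  # length of the reporting window

# Deployment frequency (DF): deployments per day over the window
df = len(deployments) / period_days

# Lead time for changes (LT): average time from commit to deployment
lt = sum((d["deployed_at"] - d["commit_time"] for d in deployments), timedelta()) / len(deployments)

# Change failure rate (CFR): share of deployments that caused a failure
failures = [d for d in deployments if d["caused_failure"]]
cfr = len(failures) / len(deployments)

# Mean time to recovery (MTTR): average time to restore service after a failure
mttr = (sum((d["restored_at"] - d["deployed_at"] for d in failures), timedelta()) / len(failures)
        if failures else timedelta(0))

print(f"DF:   {df:.2f} deploys/day")
print(f"LT:   {lt}")
print(f"CFR:  {cfr:.0%}")
print(f"MTTR: {mttr}")
```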
Often, individual engineers won’t have enough capacity or oversight to inform managers of specific issues during a cycle. Using DORA metrics is like being able to detect that a team’s heart rate is abnormally high even though it is in a rest phase: a signal of illness, and a sign that more or different resources are needed. That then requires you to go in and inspect the micro factors that can be adapted.
For any Fortune 500 company, this quick turnaround is vital. Any small issue in a product cycle can easily escalate and become a critical problem in a matter of days. Instead, using DORA can accelerate software development and, ultimately, time to market.
DORA Gives You Internal Benchmarks
But how can you tell if something is wrong while using DORA? In the first few months of adoption, companies are likely to compare their internal data to industry benchmarks. General performance benchmarks for each DORA metric, based on industry research, have recently been published for exactly this purpose.
But the end game is to compare your DORA metrics to your own internal benchmarks. Over time, each company will develop its own baselines that better represent its internal capabilities and limitations, shaped by innumerable variables.
The purpose of DORA benchmarks, however, is not to draw a fixed red line. Think of them more like a speed limit that keeps people working at a steady pace while allowing for flexibility.
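As a rough illustration of what an internal baseline and its “speed limit” might look like in practice, here is a small sketch. The weekly lead-time history and the tolerance band are hypothetical; each team would tune both from its own data.

```python
import statistics

# Hypothetical weekly lead-time history (hours) for one team; in practice this
# would come from the same tooling that captures the DORA metrics.
lead_time_history = [26, 30, 24, 28, 31, 27, 29, 25, 30, 28, 27, 26]
current_lead_time = 41

# Internal baseline: the team's own rolling median, not an industry number.
baseline = statistics.median(lead_time_history)

# "Speed limit": allow some drift around the baseline before flagging anything.
tolerance = 0.25  # 25% is an arbitrary choice for this sketch

if current_lead_time > baseline * (1 + tolerance):
    print(f"Lead time {current_lead_time}h is drifting above the team baseline "
          f"of {baseline}h; worth a closer look.")
else:
    print("Lead time is within the team's normal range.")
```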
A clear sign that your team may be burning out is a cycle time that drifts far from its baseline. At times the cause is a technical bottleneck, but the beauty of DORA is that it can also point you towards well-being issues.
DORA Needs to Be Coupled With Granular Insights
DORA is unparalleled at flagging to managers that something is wrong. DORA metrics can show that a team has a high cycle time: for example, an abnormally low deployment frequency suggests that engineers are working more slowly or that code is building up and not being merged into the repository quickly enough during the review process. At that point, you need to zoom in to get more information. There are multiple metrics that provide insight into the exact pain points in a team’s development, and you can build these internally or adopt tools or engineering management software that provides more holistic oversight.
In this particular case, you can use a granular metric called “time to merge from first review” to see how quickly engineers incorporate feedback from their peers during code reviews. If it is taking a week for the first review to be carried out, its feedback integrated into the code and the change finally merged into the main branch, then there is probably a bottleneck in the review process.
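As a simple illustration, here is a sketch of how “time to merge from first review” could be computed from pull-request data. The records and the one-week threshold are hypothetical; in practice the timestamps would come from the Git hosting platform’s API or an engineering management tool.

```python
from datetime import datetime, timedelta

# Hypothetical pull-request records with review and merge timestamps.
pull_requests = [
    {"id": 101, "first_review_at": datetime(2024, 5, 1, 9), "merged_at": datetime(2024, 5, 1, 17)},
    {"id": 102, "first_review_at": datetime(2024, 5, 2, 10), "merged_at": datetime(2024, 5, 6, 12)},
    {"id": 103, "first_review_at": datetime(2024, 5, 3, 14), "merged_at": datetime(2024, 5, 10, 16)},
]

# "Time to merge from first review": how long review feedback takes to be
# incorporated and the change merged.
for pr in pull_requests:
    time_to_merge = pr["merged_at"] - pr["first_review_at"]
    flag = "  <-- possible review bottleneck" if time_to_merge > timedelta(days=7) else ""
    print(f"PR #{pr['id']}: {time_to_merge} from first review to merge{flag}")
```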
Perhaps the most important benefit of Engineering Leadership 3.0 is that managing teams no longer has to be a guessing game. If managers sense something’s wrong, they can check their DORA dashboard, receive confirmation of the problem, home in with granular metrics and take corrective action immediately.
Teams that adapt their behavior based on these data-driven metrics will deliver incredible ROI for the company. But these changes can’t happen overnight. They start with getting buy-in from engineering managers and the executive team. For Fortune 500 companies, that often means drawing up action plans, integrating external tools to run the tracking process and training engineering managers (in our case, this takes up to five weeks). But once these metrics are part of the business, they will become indispensable to running it smoothly without letting issues stack up.