Undo, a provider of a software failure replay platform, this week published a report in collaboration with a Cambridge Judge Business School MBA project that estimates 620 million developer hours a year are wasted on debugging software failures, at a cost of roughly $61 billion.
The report also notes software engineers spend an average of 13 hours to fix a single software failure.
According to the report, 41% of respondents identified reproducing a bug as the biggest barrier to finding and fixing bugs faster, followed by writing tests (23%) and actually fixing the bug (23%). Well over half (56%) said they could release software one to two days faster if reproducing failures were not an issue. Just over a quarter of developer time (26%) is spent reproducing and fixing failing tests.
On the plus side, 88% of respondents said their organizations have adopted continuous integration (CI) practices, with more than 50% of businesses reporting they can deploy new code changes and updates at least daily. Over a third (35%) said they can make hourly deployments.
Undo CEO Barry Morris said the report makes it clear organizations need to be able to record software execution to reduce the amount of time it takes to find bugs. Unfortunately, even then, finding a bug is still a labor-intensive process that can involve analyzing millions of lines of code. In the future, software replay systems will be infused with machine learning algorithms to accelerate the bug discovery process, he noted. As software becomes more instrumented, the observability data and metrics that machine learning algorithms require to detect patterns are becoming more accessible.
In the meantime, while CI has been embraced, the continuous delivery (CD) side of the DevOps equation remains a challenge. Each platform software is deployed on is unique, so most organizations find automating software delivery problematic.
Regardless of DevOps approach, it’s clear there’s still lots of room for improvement when it comes to developing software. Even the most advanced practitioners of DevOps are limited by the rate at which software bugs, including security flaws, can be discovered and remediated. Of course, there may come a day when machine learning algorithms automate much of that process, including determining which types of tests to run. For now, however, many organizations could streamline the software debugging process by recording instances of applications when they are built and when they are deployed. After all, each platform that software is deployed on has attributes that inevitably impact how software on the platform runs.
As software continues to become more complex in the years ahead, the challenges associated with debugging applications will only continue to grow. In the age of microservices, the issue that software engineers are trying to troubleshoot might not have anything to do with the code they wrote; rather, it may emanate from a service they invoked via an application programming interface (API). Whatever the source of the problem, it's apparent to all that a lot of time and money is being wasted on debugging software that could be put to far better use.