Continuous Testing

The Bug in Production: What You Don’t Know Can – and Will – Harm You

Despite the risk of unplanned downtime, many organizations that develop software and services push them live without adequately testing for bugs that only manifest against production traffic. This is a huge gamble: those bugs can trigger errors that bring down the service altogether, and going live without sufficient testing can be genuinely detrimental to the business.

The Problem of Going Live With Bugs in the Code

Software downtime leads to lost revenue and lost reputation. Gartner analysts have estimated that the average cost of downtime is $5,600 a minute, which works out to well over $300,000 an hour. For a real-life example of what this looks like, consider the major Microsoft Azure outage of November 2018: caused by issues introduced as part of a code update, it lasted 14 hours and affected customers throughout Europe and beyond. As organizations migrate from legacy systems to distributed, cloud-based microservice environments, outages and downtime pose a growing and serious problem.

As companies switch to DevOps and CI/CD models to move faster and provide application updates sooner, software developers continually release new features and often push code updates as fast as they’re written. The classic six-month development timelines of dev, quality assurance (QA) and beta testing have been compressed to days and sometimes hours. Gone is the time when teams could beta test with customers for extended periods to flag real-time bugs.

With current quality testing tools, developers don’t know how a new software version will perform in production—or whether it will even work in production. The Cloudbleed bug is an example of this problem: a simple coding error in a software upgrade from security vendor Cloudflare leaked sensitive data for months before a Google researcher discovered the vulnerability in February 2017.

Flaws can lead to serious security issues later, in addition to having the immediate impacts mentioned above. Heartbleed, a vulnerability disclosed in 2014 that stemmed from a programming mistake in the OpenSSL library, exposed large numbers of private keys and other sensitive information to the internet, enabling the theft of data that SSL/TLS encryption would otherwise have protected.

Standard QA Testing Isn’t Enough: Test With Production Traffic

The way QA testing is typically done is no longer sufficient for today’s increasingly frequent and fast development cycles. Traditionally, DevOps teams haven’t been able to do side-by-side testing of the production version and an upgrade candidate. The QA testing used by many organizations consists of simulated test suites, which may not give comprehensive insight into the myriad ways customers actually use the software. Just because upgraded code works under one set of test parameters doesn’t mean it will work in the unpredictable world of production usage.

In the Cloudflare incident, the error went entirely unnoticed by end users for an extended period, and no system errors were logged as a result of the flaw. Just as QA testing alone isn’t sufficient, relying on system logs and user reports also limits what can be detected.

It is estimated that fixing flaws after a software release can be five times as expensive as fixing them during design—and it can lead to even costlier development delays. Enabling software teams to identify potential bugs and security concerns prior to release can alleviate those delays. Clearly, testing with production traffic earlier in the code development process can save time, money and pain. Software and DevOps teams need a way to test quickly and accurately how new releases will perform with real (not simulated) customer traffic while maintaining the highest standards.

By evaluating release versions side by side, teams can quickly locate any differences or defects. In addition, they can gain real insight into network performance while also verifying the stability of upgrades and patches in a working environment. Doing this efficiently significantly reduces the likelihood of releasing software that later needs to be rolled back. Rollbacks are expensive, as the Microsoft Azure incident showed.
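As a rough illustration of what side-by-side evaluation looks like in practice, the sketch below replays a single captured production request against both the current release and an upgrade candidate, then diffs the responses. It is a minimal, hypothetical example: the backend URLs, the sample request and the comparison rules are assumptions for illustration, not the API of any particular product.

```python
# Minimal sketch: replay one captured production request against two release
# versions and flag any divergence. URLs and the sample request are hypothetical.
import requests

CURRENT_BASE = "http://current.internal:8080"      # version serving production today
CANDIDATE_BASE = "http://candidate.internal:8080"  # upgrade candidate under test


def replay_and_compare(method, path, headers=None, body=None):
    """Send the same request to both versions and report any differences."""
    current = requests.request(method, CURRENT_BASE + path,
                               headers=headers, data=body, timeout=5)
    candidate = requests.request(method, CANDIDATE_BASE + path,
                                 headers=headers, data=body, timeout=5)

    differences = []
    if current.status_code != candidate.status_code:
        differences.append(
            f"status {current.status_code} != {candidate.status_code}")
    if current.content != candidate.content:
        differences.append("response bodies differ")

    if differences:
        print(f"MISMATCH on {method} {path}: {'; '.join(differences)}")
    else:
        print(f"OK on {method} {path}: versions agree")
    return differences


# Replay a request a real customer made, captured from production traffic.
replay_and_compare("GET", "/api/v1/orders?limit=10",
                   headers={"Accept": "application/json"})
```

A real deployment would mirror live traffic continuously rather than replay individual requests, normalize fields that legitimately differ between versions (timestamps, request IDs) before diffing, and aggregate mismatches across many requests; the sketch only shows the comparison step that makes side-by-side evaluation possible.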

Some organizations stage rollouts, which requires running multiple software versions in production. The software teams put a small percentage of users on the new version, while most users run the status quo. Unfortunately, this approach to testing with production traffic is cumbersome to manage and costly, and still vulnerable to rollbacks. The other problem with these kinds of rolling deployments is that while failures can be caught early in the process, they are—by design—only caught after they’ve affected end users.
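For contrast, the staged-rollout approach described above can be reduced to a short routing sketch. The names and the 10% figure are illustrative assumptions: a router deterministically assigns a small share of users to the new version, which is exactly why a regression only becomes visible after it has failed for those real users.

```python
# Minimal sketch of a staged (canary) rollout: deterministically send roughly
# 10% of users to the new version. Names and percentages are illustrative.
import hashlib

CANARY_PERCENT = 10  # share of users routed to the new version


def backend_for(user_id):
    """Assign a user to the current or the new version."""
    # Hash the user ID so the same user always lands on the same version.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "new-version" if bucket < CANARY_PERCENT else "current-version"


# About 10 in every 100 users land on the canary; a bug in the new version
# is only discovered after it has already failed for some of those users.
assignments = [backend_for(f"user-{i}") for i in range(1000)]
print(assignments.count("new-version"), "of", len(assignments), "users on the canary")
```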

This raises more questions: How do you know whether the new software is causing the failures? And how many failures will the business tolerate before recalling or rolling back the software, given that it never sees side-by-side results from the same customer? Failures at this stage disrupt the end-user experience, which ultimately affects business operations and company reputation. And staging may not provide a large enough sample to gauge the efficacy of the new release across the entire customer population.

Cost is still an issue as well. If you stage with 10% of customers on the new version and a full failure costs more than $300,000 an hour, then a failure affecting that 10% of users could still cost more than $30,000 per hour. The impact is reduced, of course, but it’s still significant—and that’s before counting the uncertainty of when to roll back.

Looking Ahead

Standard QA testing is no longer enough. To reduce the risk that today’s rapid iterations inject into the software development life cycle, DevOps teams can test with production traffic and evaluate release versions side by side. This helps avoid costly rollbacks and staged rollouts while still releasing a quality, secure product. The old way of doing things is not sufficient, but fortunately, there is a better way.

Frank Huerta

Frank Huerta is the co-founder and CEO of Curtail, Inc., a provider of redundancy-based traffic analysis and continuous network security solutions. Curtail is changing how IT is implemented for service providers, enterprise organizations, government agencies and financial institutions that are developing and launching new software and services, particularly in DevOps environments. Huerta is a seasoned CEO and founder of three other companies, including security company Recourse Technologies, which was acquired by Symantec.
