The observability market, paired with cloud security, has been predicted to reach $70B by 2026. But that prediction is a myth, and I’ll tell you why.
If observability were a grand buffet and every company needed a feast of features and capabilities on this scale, maybe the prediction would hold. But most companies don’t need an over-the-top feast. Legacy APM and observability solutions are serving up an all-you-can-eat buffet when, in reality, most companies only need the essential ingredients to satisfy their observability strategy, and the cost for that can be far more modest. The $70B observability market prediction is therefore suspect, considering the true needs of most organizations.
Complex cloud ecosystems, the rise of applications built on microservices and Kubernetes, and the sheer scale of modern applications have led to exponentially more data being created in the pursuit of observability. Storing and processing all that data is expensive, and as the volume of data balloons, so does the cost.
And while the complexity of business-critical systems and the volume of data have scaled, so has the overall cost of those legacy tools, in direct proportion. This is driven partly by added features and instrumentation that contribute only bloat and valueless telemetry.
Customers aren’t seeing value in any of this bloat. For the most part, all that’s been accomplished is greater complexity in deploying and operating the system, with a commensurately longer time to value. Pricing models, meanwhile, haven’t significantly changed despite the advancements in the market. The fact is, organizations can’t really say they have gained more insight into the overall performance of their mission-critical systems. What they are saying is that these tools are becoming overwhelming to manage and overwhelmingly expensive, and developer productivity is not increasing.
What organizations need and want are real-time insights into the health and performance of their cloud infrastructure and applications, specifically the confluence of logs, metrics and traces within the context of modern observability. There is little tolerance for software bugs, slow web experiences, crashed apps and other service interruptions, and the responsibility falls on developers, engineers and ITOps teams to resolve production issues quickly, before they affect the customer experience.
There is a call for observability, but what’s being called for is not an explosion of features and noise; it’s a narrowing of focus.
Observability Has an Efficiency Problem
Telemetry data gathered from the distributed components of modern cloud architectures needs to be centralized and correlated for engineers to gain a complete picture of their environments.
Engineers need a solution with critical capabilities such as dashboarding, querying and alerting, and AI-based analysis and response, and they need the operation and management of the solution to be streamlined.
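To make that concrete, here is a minimal sketch, assuming the OpenTelemetry Python SDK as a representative open standard, of how a service can ship its telemetry to a single collector endpoint so traces can be correlated with logs and metrics in one backend. The collector address and service name are purely illustrative.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# One provider per service, all exporting to the same (hypothetical) collector endpoint.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout")
with tracer.start_as_current_span("place-order"):
    pass  # application work; the span's trace ID ties related logs and metrics together
```

Because every span carries a trace ID, logs and metrics emitted during the same request can be stitched together downstream instead of engineers correlating timestamps by hand.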
What’s important for them to know is that spending more is not a prerequisite for peak performance and visibility as the complexity of their environments grows.
Legacy players, now maturing in the market, may have conditioned engineers to believe otherwise; they have set a standard in which costs rise with every incremental amount of data, regardless of its quality. Companies are being fed the lie that all of this data may be relevant and that they’ll be doing themselves a disservice by neglecting even one byte of it.
Data Does Not Inherently Equal Insights
No doubt, more data is being generated, but most of it is not relevant or valuable to an organization. Observability can be optimized to bring greater value to customers, and that’s where the market is headed.
Call it “essential observability.” Proposing a re-architected approach to observability is a disruptive vision, but what engineers need is exactly that: an approach that makes it easier to surface insights from their telemetry while deprioritizing low-value data. Costs fall when teams consume only the data that helps them maintain performance and drive smart business decisions.
Essential observability is cutting out the fluff and only focusing on—and paying for—the impactful data. It’s about arming engineers with meaningful insights so they can quickly resolve production issues and business disruptions. Organizational performance increases without inflating data complexity and resource consumption.
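As a sketch of what that can look like in practice, again assuming the OpenTelemetry Python SDK, the custom sampler below drops noisy health-check spans outright and keeps only a fraction of ordinary traffic. The endpoint names and the 10% ratio are illustrative assumptions, not recommendations.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import (
    Decision,
    Sampler,
    SamplingResult,
    TraceIdRatioBased,
)


class DropHealthChecks(Sampler):
    """Drop health-check spans entirely; delegate everything else to another sampler."""

    def __init__(self, delegate: Sampler):
        self._delegate = delegate

    def should_sample(self, parent_context, trace_id, name, kind=None,
                      attributes=None, links=None, trace_state=None):
        # Health-check traffic rarely carries diagnostic value, so never record it.
        if name in ("GET /healthz", "GET /readyz"):
            return SamplingResult(Decision.DROP)
        # Everything else falls through to the ratio-based sampler.
        return self._delegate.should_sample(
            parent_context, trace_id, name, kind, attributes, links, trace_state
        )

    def get_description(self) -> str:
        return "DropHealthChecks"


# Keep roughly 10% of ordinary traces; the ratio here is an illustrative assumption.
trace.set_tracer_provider(
    TracerProvider(sampler=DropHealthChecks(TraceIdRatioBased(0.1)))
)
```

The point isn’t the specific ratio; it’s that low-value telemetry is filtered out before it is ever stored or billed.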
The Essential Observability Revolution
Observability tools have evolved beyond the model of linear price increases as complexity and data scale. Companies (especially growth-stage organizations) that want a partner to help them make more informed decisions and keep costs reasonable (i.e., stop recklessly spending on useless data) should focus on tools offering them the essentials.
Solutions are available that offer targeted features and automated capabilities out of the box and deliver the insights needed to understand and fine-tune data. These solutions provide continuous support, reduce complexity and accelerate time to value.
The next revolution in observability isn’t more features and functionality; it’s providing the insights teams need to make data-driven decisions, keeping essential observability cost-effective without compromising the value derived from it.