Observability has become one of the most overused buzzwords in IT and cybersecurity. Today, vendors use the term to refer to everything from application performance to network monitoring, cybersecurity, and data analytics.
While the term’s ubiquity has created confusion for everyone from end users to journalists, startups in the space have also attracted over $2 billion in venture capital investment over the past two years. This momentum prompted TechCrunch to ask whether this area, specifically data observability, was effectively recession-proof.
This raises two questions. First, what is observability? And second, what are its different kinds or variants?
What is Observability?
Observability, as a broad practice or capability, originated in the 1960s as part of industrial control theory. The idea is that by watching the output of a system, you can figure out what’s happening inside the black box. This sounds a lot like monitoring, a popular practice for IT and security teams. However, observability takes a different approach than monitoring’s alert-based methods; it allows you to ask questions about a system that dig deeper than the pre-defined thresholds from your monitoring system.
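The contrast can be sketched in a few lines of code. This is a toy illustration, not any vendor's product: the event records, field names, and functions below are hypothetical, chosen only to show the difference between firing a fixed alert and asking a new question of the same data after the fact.

```python
# Structured events emitted by a service: one record per request.
# (Hypothetical schema for illustration.)
events = [
    {"route": "/checkout", "region": "us-east", "latency_ms": 1200, "status": 500},
    {"route": "/checkout", "region": "eu-west", "latency_ms": 90,   "status": 200},
    {"route": "/search",   "region": "us-east", "latency_ms": 40,   "status": 200},
]

# Monitoring: a single pre-defined threshold either fires or it doesn't.
def alert_on_latency(events, threshold_ms=1000):
    return any(e["latency_ms"] > threshold_ms for e in events)

# Observability: ask a question that wasn't defined up front, e.g.
# "which route/region combinations are slow?" -- grouping by any fields.
def slow_by(events, *fields, threshold_ms=1000):
    return {
        tuple(e[f] for f in fields)
        for e in events
        if e["latency_ms"] > threshold_ms
    }

print(alert_on_latency(events))            # True  -- the alert fires...
print(slow_by(events, "route", "region"))  # {('/checkout', 'us-east')} -- ...and this says where
```

The monitoring function can only confirm that *something* crossed a threshold; the ad-hoc query narrows the problem down by any dimension captured in the data, which is the deeper questioning observability promises.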
Observability requires collecting massive amounts of data from systems, networks and applications to feed its discovery process. As systems and applications become more complex, figuring out what went wrong and why becomes much more challenging. At a high level, these are the kinds of challenges many startups in this area are addressing. Each takes a different approach and solves a different part of the observability problem.
Types of Observability
The most frequently mentioned type is data observability, which is concerned with the health and quality of data passing through data pipelines for analytical use cases, such as feeding a data warehouse. Vendors like Acceldata, Bigeye or Monte Carlo Data typically target their products at data engineers tasked with building and operating analytical data pipelines.
Another group of companies addresses applications. These firms, like Honeycomb and Observe.ai, collect data from applications to help site reliability engineers (SREs) understand performance issues and aid them in debugging and troubleshooting.
There are also companies specializing in machine learning observability, and these are concerned with the performance and drift of models in production. These companies, like Arize and WhyLabs, target data scientists.
Finally, there are companies in the observability data space, which comprises the logs, events, metrics and traces essential for all other forms of observability and monitoring to work. Companies in this space, like Cribl, use specially designed pipelines to connect the sources and destinations of observability and security data. These pipelines allow companies to route data to multiple destinations, enrich data in flight and reduce data volumes before ingestion.
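Those three pipeline operations can be sketched as plain functions. This is a minimal illustration of the pattern, not any vendor's API: the function names, event fields, and destination names are all assumptions; real products express the same idea through their own configuration languages.

```python
# Toy observability pipeline: enrich in flight, reduce volume, route
# to multiple destinations. All names here are hypothetical.

def enrich(event, metadata):
    # Attach context (e.g. an environment tag) before delivery.
    return {**event, **metadata}

def reduce_volume(events, keep_levels=("warn", "error")):
    # Drop low-value events (e.g. debug logs) before ingestion,
    # shrinking what downstream tools must store and index.
    return [e for e in events if e["level"] in keep_levels]

def route(events, destinations):
    # Fan the same stream out to several backends at once.
    return {name: list(events) for name in destinations}

raw = [
    {"level": "debug", "msg": "cache hit"},
    {"level": "error", "msg": "db timeout"},
]

kept = reduce_volume(raw)
enriched = [enrich(e, {"env": "prod"}) for e in kept]
delivered = route(enriched, ["siem", "warehouse"])
print(delivered["siem"])  # only the enriched error event reaches each destination
```

The reduction step runs before routing so that every destination benefits from the smaller volume, which is the cost-saving argument these vendors typically make.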
As you can see, there are dozens of companies in the space. Each category of vendors addresses a unique challenge in today’s sprawling IT and security landscape.
Beware Observability-Washing
As the hype around this topic grows, companies hoping to differentiate themselves from their competitors may suddenly rebrand as observability companies. This is already happening. Legacy monitoring and alerting companies are calling themselves observability companies, hoping to attract the same kind of attention as others in the space that actually do have differentiated products and positioning.