It appears that, in many IT environments, observability is becoming too much of a good thing. A global survey of more than 315 IT executives, cloud application architects, DevOps professionals and site reliability engineers (SREs) conducted by Dimensional Research on behalf of log data management platform provider Era Software found 96% are witnessing an explosion of log data.
The survey also found 79% of respondents reported the overall cost of observability and log management will skyrocket in 2022 if current tools don’t evolve.
More than three-quarters of respondents (78%) also noted attempts to manage volumes of log data have had mixed or unwanted results, such as an inability to access data. Nearly two-thirds (65%) are also evaluating their observability options, while another 41% are considering doing so.
Stela Udovicic, senior vice president of marketing for Era Software, said the survey makes it clear that as organizations rely more on logs to analyze IT events, the cost of storing all that log data is becoming a significant challenge. Overall, the survey finds usage of observability tools and platforms has jumped by 180% as IT teams struggle to manage increasingly complex IT environments.
On the plus side, more business users are benefiting from log data. A full 83% of respondents report that business stakeholders outside of IT use insights from log data, with 96% reporting log data is being used to solve business problems.
Observability has always been a core tenet of DevOps best practices, but achieving it has always been a challenge. Monitoring tools are designed to consume predefined metrics to identify when a specific platform or application is performing within expectations. The metrics tracked generally focus on, for example, resource utilization. However, whenever there is an issue, it can still take days, sometimes weeks, to discover its root cause through what amounts to a process of elimination.
In contrast, observability combines metrics, logs and traces—a specialized form of logging—to instrument applications in a way that makes it simpler to troubleshoot issues without relying solely on a limited set of metrics that have been predefined to monitor a specific process or function. DevOps teams can employ queries to interrogate that data, making it easier to discover the root cause of an issue. Observability platforms also correlate events so that analytics tools can identify anomalous behavior indicative of an IT issue.
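As a minimal sketch of the idea (the field names and events here are hypothetical, not drawn from any specific platform): structured log events that share a trace ID can be queried together, so a team can pull every event for one failing request and spot the first error directly rather than eliminating possibilities host by host.

```python
# Hypothetical structured log events; each carries a trace_id that ties
# together all events for a single request as it crosses services.
events = [
    {"trace_id": "abc123", "service": "api", "level": "info", "msg": "request received"},
    {"trace_id": "abc123", "service": "db", "level": "error", "msg": "connection pool exhausted"},
    {"trace_id": "def456", "service": "api", "level": "info", "msg": "request received"},
    {"trace_id": "abc123", "service": "api", "level": "error", "msg": "request failed"},
]

def events_for_trace(log_events, trace_id):
    """Return all events sharing one trace ID, preserving their order."""
    return [e for e in log_events if e["trace_id"] == trace_id]

# Querying by trace ID surfaces the earliest error in the failing request,
# pointing at the db service without a process of elimination.
trace = events_for_trace(events, "abc123")
first_error = next(e for e in trace if e["level"] == "error")
print(first_error["service"])  # db
```

The same pattern underlies real observability queries at much larger scale; the cost problem the survey highlights comes from retaining enough of these events to make such queries answerable after the fact.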
The issue that arises is that IT teams need to find a way to store all the data generated by the logs they need to analyze. Each IT team can, of course, reduce the amount of log data it retains. However, there is always a concern that, in the event of an incident, critical log data won't be available.
Udovicic said most IT organizations have historically viewed storing log data as a necessary evil. The primary issue most of them encounter is the simple fact that log data is not easy to work with, she noted.
Regardless of how organizations view log data, the amount of it that needs to be managed will only increase as DevOps workflows become more mature. The issue IT teams need to come to terms with now is finding a way to minimize the cost of storing it all.
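One common way teams approach that cost, shown here as a rough illustration rather than anything recommended by the survey, is tiering: older log data is moved into compressed storage rather than deleted, since repetitive plain-text log lines compress to a small fraction of their raw size.

```python
import gzip

# Synthetic, highly repetitive plain-text log lines, typical of a
# high-volume service emitting one line per request.
log_lines = "".join(
    f"2022-03-01T12:00:{i:02d}Z INFO api request completed status=200\n"
    for i in range(60)
)

raw = log_lines.encode("utf-8")
compressed = gzip.compress(raw)

# Compression preserves every line (no data is dropped), while the
# stored footprint shrinks substantially for repetitive text.
assert gzip.decompress(compressed) == raw
print(len(compressed) < len(raw) // 2)  # True
```

Compression alone doesn't resolve the tension the survey describes, but it illustrates why the trade-off is between cost and query speed rather than between cost and losing the data outright.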