LogicMonitor today announced an option that allows IT organizations to retain unlimited log data on its software-as-a-service (SaaS) platform.
Tej Redkar, chief product officer at LogicMonitor, said the cost of storing log data is one of the issues holding back advances in observability. LogicMonitor has decided to remove that barrier by allowing IT teams to store log data on its LM Logs service for as long as they require, he said.
That data will always be available in hot storage, with no need to wait for data archived offline to be rehydrated, he noted. Instead, logs are immediately accessible alongside the metrics and distributed tracing data that LogicMonitor also collects, said Redkar.
In general, LogicMonitor is making a case for centralizing observability for the entire IT organization on a SaaS platform infused with machine learning algorithms that automatically surface anomalies in real time based on millions of events captured via log data. The issue many organizations encounter today is silos of decentralized observability data that are more complex to maintain yet provide less visibility at a higher total cost, said Redkar.
The LogicMonitor platform, in contrast, is designed to be as accessible to a network operations team, for example, as it is to a DevOps team, he noted.
Not every organization needs to store log data forever, but Redkar said that, as IT management continues to evolve, more organizations are moving toward storing metrics, traces and logs for longer periods. The rise of open source agent software is also making it less costly for organizations to capture that data in the first place.
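To illustrate how lightweight such agents can be, consider the Python sketch below, which tails a log file and forwards batches of events to an ingestion endpoint over HTTP. This is a minimal illustration only; the endpoint URL and token are hypothetical placeholders, not part of LogicMonitor's actual API.

```python
# Minimal log-shipping agent sketch (Python standard library only).
# INGEST_URL and API_TOKEN are hypothetical placeholders, not a real API.
import json
import time
import urllib.request

INGEST_URL = "https://example.com/logs/ingest"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                        # hypothetical credential
BATCH_SIZE = 100   # flush after this many events
FLUSH_SECS = 5.0   # ...or after this many seconds

def ship(batch):
    """POST a batch of log events as JSON to the ingestion endpoint."""
    body = json.dumps({"events": batch}).encode("utf-8")
    req = urllib.request.Request(
        INGEST_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()

def tail(path):
    """Yield new lines appended to a log file, like `tail -f`."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at end of file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(0.2)

def run(path):
    batch, last_flush = [], time.monotonic()
    for line in tail(path):
        batch.append({"ts": time.time(), "msg": line})
        if len(batch) >= BATCH_SIZE or time.monotonic() - last_flush > FLUSH_SECS:
            ship(batch)
            batch, last_flush = [], time.monotonic()

if __name__ == "__main__":
    run("/var/log/app.log")
```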
It’s not clear to what degree other providers of observability platforms will offer unlimited data retention, but as the cost of storing data in the cloud continues to drop, it’s apparent that billing IT organizations separately for storage is becoming untenable. There will always be some cost associated with storage, but it is no longer significant enough to justify a charge beyond the core cost of the service. The only thing separate storage billing accomplishes is discouraging IT teams from retaining the data required both to attain observability and to better train the machine learning algorithms that surface anomalies.
Observability, in one form or another, has been a core tenet of DevOps best practices for years. Initially, DevOps teams focused on continuous monitoring as the most effective way to proactively manage application environments. Observability platforms infused with machine learning algorithms make it possible to correlate events so that analytics tools can more easily identify anomalous behavior in real time. Armed with those insights, IT teams can resolve issues much faster.
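LogicMonitor's algorithms are proprietary, but the underlying statistical idea of flagging unusual event volumes can be sketched simply. The Python example below applies a rolling z-score to per-minute log event counts; the window size and threshold are arbitrary assumptions chosen for illustration, not the platform's actual method.

```python
# Rolling z-score anomaly detector over per-interval log event counts.
# A minimal sketch of the general technique, not LogicMonitor's algorithm.
from collections import deque
import math

class AnomalyDetector:
    def __init__(self, window=60, threshold=3.0):
        self.counts = deque(maxlen=window)  # recent per-interval counts
        self.threshold = threshold          # z-score cutoff for flagging

    def observe(self, count):
        """Return True if this interval's event count is anomalous."""
        anomalous = False
        if len(self.counts) >= 10:  # need enough history for a baseline
            mean = sum(self.counts) / len(self.counts)
            var = sum((c - mean) ** 2 for c in self.counts) / len(self.counts)
            std = math.sqrt(var)
            if std > 0 and abs(count - mean) / std > self.threshold:
                anomalous = True
        self.counts.append(count)
        return anomalous

# Example: a sudden spike in log volume stands out against the baseline.
detector = AnomalyDetector()
for minute, count in enumerate([50, 52, 48, 51, 49, 50, 53, 47, 50, 52, 500]):
    if detector.observe(count):
        print(f"minute {minute}: anomalous count {count}")
```

In practice, platforms apply this kind of baselining across many event streams at once, which is where correlating logs with metrics and traces pays off.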
In fact, there may even come a day when the so-called “war room” meetings that are convened to identify the cause of an IT issue via a painstaking process of elimination are no longer required. In the meantime, however, the total number of IT incidents that actually lead to disruption should steadily decline, even as IT environments become more complex.