Sawmills today emerged from stealth to launch a platform that uses artificial intelligence (AI) models to manage telemetry data more cost-effectively.
Fresh from raising $10 million in seed funding, Sawmills CEO Ronit Belson said that as more organizations embrace observability, they are collecting massive amounts of logs, traces and metrics that are proving costly to store.
Sawmills addresses that issue with a management platform based on the OpenTelemetry Collector tool developed under the auspices of the Cloud Native Computing Foundation (CNCF). Designed specifically for telemetry data, the Sawmills platform makes it simpler to identify, in real time, which telemetry data is worth storing in the first place, compress it, and then route it both to where it is needed and to where it will ultimately be stored.
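To make that filter-compress-route flow concrete, here is a minimal, generic OpenTelemetry Collector configuration sketch, not Sawmills' actual configuration. It drops debug-level log records, batches what remains, and forwards the rest compressed to a single backend; the `observability-backend:4317` endpoint is a placeholder.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  # Identify what is worth storing: drop anything below INFO severity.
  filter/drop-debug:
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_INFO'
  batch:

exporters:
  # Route and compress: placeholder backend address.
  otlp/backend:
    endpoint: observability-backend:4317
    compression: gzip

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/drop-debug, batch]
      exporters: [otlp/backend]
```

A platform such as Sawmills layers management and recommendations on top of pipelines like this; the sketch only shows the underlying Collector mechanics.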
Additionally, the Sawmills platform identifies duplicate data, missing data points and inconsistent formats that conspire to make root-cause analysis costly and far too time-consuming, noted Belson.
The Sawmills platform will also surface recommendations that can be applied with a single click to, for example, limit spikes in data storage costs. Those smart sampling policies provide DevOps teams with more granular control over how telemetry data is managed within the context of a larger observability initiative, said Belson.
That’s crucial because, as organizations embrace observability, they are starting to realize they might only need to store 10% to 30% of the telemetry data being collected, added Belson.
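The kind of sampling policy described above can be sketched in a few lines of Python. This is an illustrative policy, not Sawmills' algorithm: the hypothetical `keep_log` function keeps every error-level record and a deterministic fraction of everything else, hashing the trace ID so that all records belonging to one trace are kept or dropped together.

```python
import hashlib

def keep_log(record: dict, sample_rate: float = 0.2) -> bool:
    """Decide whether a log record is worth storing.

    Always keep error-level records; for everything else, keep a
    deterministic fraction chosen by hashing the trace ID, so every
    record from the same trace gets the same keep/drop decision.
    """
    if record.get("severity") == "error":
        return True
    digest = hashlib.sha256(record["trace_id"].encode()).digest()
    # Map the first 8 bytes of the hash to [0, 1) and compare to the rate.
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < sample_rate

# Errors always survive; non-errors survive at roughly the sample rate.
logs = [
    {"trace_id": "a1", "severity": "error", "body": "db timeout"},
    {"trace_id": "b2", "severity": "info", "body": "request ok"},
]
kept = [r for r in logs if keep_log(r)]
```

With a `sample_rate` of 0.1 to 0.3, a policy like this stores only the 10% to 30% of routine telemetry Belson describes while still retaining every error.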
There is, of course, no shortage of observability platforms that to varying degrees provide the ability to manage telemetry data. However, many of those platforms charge based on the amount of data an organization collects. Sawmills provides more granular control over telemetry data in a way that can be applied across multiple observability platforms, noted Belson.
DevOps teams can also use the same platform to route telemetry data to other platforms, such as an analytics tool that cybersecurity teams use to determine the root cause of a breach, added Belson.
It’s not clear at what pace DevOps teams are moving beyond simple monitoring of pre-defined metrics to embrace observability platforms that enable them to apply advanced analytics to increasingly complex IT environments, which, thanks to advances in AI, might soon be running exponentially more applications. The issue, as always, is that all these applications have dependencies that are now beyond the ability of a DevOps team to manually track and resolve.
The one clear thing is that the amount of telemetry data being generated across modern IT environments continues to increase exponentially. There are several platforms already available to manage that data, but as the volume of telemetry data continues to expand, it is already apparent that DevOps teams will need access to AI tools to make sense of it all. Otherwise, the telemetry data being collected quickly becomes too much of a good thing, resulting in investments in observability platforms that don’t yield the level of real-time, actionable intelligence that DevOps teams now sorely require.