Splunk’s data collector runs as a sandboxed program inside the Linux kernel and takes advantage of extended Berkeley Packet Filter (eBPF) technology to simplify the collection of networking telemetry.
The OpenTelemetry initiative itself spans a range of open source tools, application programming interfaces (APIs) and software development kits (SDKs) that are used to instrument applications. Previously, Splunk contributed the SignalFx Smart Agent and Smart Gateway to the OpenTelemetry project along with more than 64,000 code contributions.
Morgan McLean, a director of product management for Splunk, said that while OpenTelemetry is officially in beta, some elements of the project are more mature than others. For example, tools for capturing metrics and traces are already being employed, while another set of tools for capturing log data will be ready sometime next year, he said.
The data collector contributed by Splunk is among the first to operate at the kernel level, added McLean. That approach makes it possible for some types of data to be captured by default rather than requiring developers to instrument every application with agents to enable an observability platform. It should also contribute to the eventual convergence of network operations and DevOps processes.
Going forward, DevOps teams should expect to employ a mix of open source data collectors that operate at both the eBPF and application level, said McLean. The eventual goal is to have every application instrumented by default by making it much simpler to collect data. Today, most DevOps teams rely on a mix of proprietary and open source agent software that must be deployed and then integrated with every application they build. Given the cost and level of effort required, the percentage of applications that are actually instrumented is, not surprisingly, fairly low.
However, as agent software becomes more readily accessible, the percentage of applications that are instrumented should increase considerably in the years ahead. For DevOps teams that depend on that data to optimize application performance, that capability should prove to be a boon for improving overall observability in IT environments.
Observability, in one form or another, has always been a core tenet of any DevOps best practice. Initially, DevOps teams focused on continuous monitoring as the most effective way to proactively manage application environments. However, with monitoring alone it can still take days, sometimes weeks, to discover the root cause of an issue.
Monitoring focuses on predefined metrics to identify when a specific platform or application is performing within expectations; the metrics tracked generally measure things such as resource utilization. Observability platforms combine metrics, logs and traces—a specialized form of logging—to instrument applications in a way that makes it simpler to troubleshoot issues without relying on a limited set of predefined metrics focused on a specific process or function.
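The distinction can be sketched in a few lines of code. The span recorder below is a deliberately minimal, hypothetical illustration of what a trace adds over a single predefined metric—per-operation timing tied to one request—and is not the actual OpenTelemetry SDK, which exposes a much richer Tracer API:

```python
import time
import uuid
from contextlib import contextmanager

# Hypothetical in-memory store; a real SDK would export spans to a backend.
SPANS = []

@contextmanager
def span(name, trace_id):
    """Record how long one named operation took within a given request."""
    start = time.monotonic()
    try:
        yield
    finally:
        SPANS.append({
            "name": name,
            "trace_id": trace_id,
            "duration_s": time.monotonic() - start,
        })

# One incoming request produces a trace: a tree of timed operations
# sharing a single trace_id, rather than one aggregate utilization number.
trace_id = uuid.uuid4().hex
with span("handle_request", trace_id):
    with span("query_database", trace_id):
        time.sleep(0.01)  # stand-in for real work
```

After the request completes, `SPANS` holds a timing record for each operation, all linked by the same `trace_id`—which is what lets a troubleshooter see where the time inside one specific request went.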
Those observability platforms then make it possible to correlate events so that it is easier to identify anomalous behavior indicative of an issue’s root cause. Armed with these insights, IT teams can resolve issues much faster.
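The correlation step itself is conceptually simple: spans and log records that share a trace identifier can be joined to point at a likely root cause. The sketch below is an illustrative toy, not an actual observability-platform schema—the field names, sample data and `likely_root_cause` helper are all assumptions for the example:

```python
# Toy telemetry for one slow request; field names are illustrative only.
spans = [
    {"trace_id": "t1", "name": "handle_request", "parent": None, "duration_ms": 950},
    {"trace_id": "t1", "name": "query_database", "parent": "handle_request", "duration_ms": 900},
    {"trace_id": "t1", "name": "render_response", "parent": "handle_request", "duration_ms": 30},
]
logs = [
    {"trace_id": "t1", "level": "WARN", "message": "slow query: missing index"},
]

def likely_root_cause(trace_id, spans, logs):
    """Join spans and logs on trace_id to surface where a request's time went."""
    # The slowest child span shows which operation dominated the request...
    children = [s for s in spans if s["trace_id"] == trace_id and s["parent"]]
    slowest = max(children, key=lambda s: s["duration_ms"])
    # ...and log records sharing the trace_id supply the explanation.
    related = [l["message"] for l in logs if l["trace_id"] == trace_id]
    return slowest["name"], related

cause, evidence = likely_root_cause("t1", spans, logs)
```

Here the join immediately attributes the 950 ms request to the database query and surfaces the warning explaining why—the kind of cross-signal correlation that would otherwise take manual digging through separate monitoring dashboards and log files.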
It’s unclear when OpenTelemetry tools will become more widely employed, but as more tools for collecting data become available, the impact on DevOps will be nothing short of profound.