AppSignal has extended its application performance monitoring platform to provide native support for OpenTelemetry, an open source framework for instrumenting applications that is being advanced under the auspices of the Cloud Native Computing Foundation (CNCF).
Wes Oudshoorn, chief product officer for AppSignal, said the OpenTelemetry implementation is designed to enable DevOps teams to begin collecting telemetry data from Go, Java, PHP, Ruby, Elixir, and Node.js applications with a single line of code, yielding actionable insights in less than five minutes.
That zero-configuration approach will make it possible for a much wider range of organizations to instrument applications, he added.
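The article does not reproduce the snippet Oudshoorn describes, but as a rough illustration of what that kind of low-friction setup involves, auto-instrumenting a Node.js service with the vendor-neutral OpenTelemetry SDK typically looks something like the sketch below. The service name and collector endpoint are placeholders, and AppSignal's own integration may differ from this generic example.

```typescript
// Minimal OpenTelemetry auto-instrumentation sketch for a Node.js service.
// Assumes the standard OpenTelemetry packages; the endpoint URL is a placeholder.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";

const sdk = new NodeSDK({
  serviceName: "example-service", // placeholder name
  // Send traces to whatever backend is listening for OTLP data.
  traceExporter: new OTLPTraceExporter({
    url: "https://collector.example.com/v1/traces", // placeholder endpoint
  }),
  // Automatically instrument common libraries (HTTP, Express, database clients, etc.).
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```

Loading a file like this before the rest of the application starts is generally all that is required for traces to begin flowing, which is the kind of near-zero setup effort Oudshoorn is pointing to.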
Historically, one of the major challenges with instrumenting applications was simply adding the ability to collect telemetry data in the first place. As a result, organizations tended to limit the number of applications they instrumented. AppSignal has now streamlined the instrumentation process to the point where OpenTelemetry can be easily added to any endpoint the AppSignal platform monitors, said Oudshoorn.
It’s not clear at what rate organizations are instrumenting applications, but this capability will become even more critical as the pace at which applications are built and deployed continues to accelerate in the age of artificial intelligence (AI), noted Oudshoorn. Otherwise, IT and software engineering teams will simply be overwhelmed by the number of potential issues created by so-called citizen developers who generally lack programming expertise, he added.
In fact, much of the code being generated by AI tools is overly verbose, noted Oudshoorn. The more code there is, of course, the more likely it becomes that there will be a performance or cybersecurity issue IT teams will be called upon to address.
Each IT organization will need to determine how best to respond to the rise of AI in coding, but the number requiring more advanced monitoring and observability capabilities will undoubtedly increase. Most smaller organizations will lean toward platforms that provide both capabilities at a more accessible cost, whereas a larger enterprise might maintain separate platforms for monitoring IT environments and investigating code-related issues, noted Oudshoorn.
The issue then becomes determining how best to manage application environments in which the amount of code running will soon grow to levels that not long ago might have seemed unimaginable. Even small- to medium-sized organizations will soon be managing much larger application portfolios, said Oudshoorn.
In the meantime, IT teams, and DevOps engineers especially, should be crafting strategies to instrument all that code. Many of the existing workflows for responding to issues and incidents simply will not scale to meet the needs of organizations in the age of AI. Hopefully, as AI advances more broadly, IT teams will gain additional tools that enable them to rise to that challenge. The real issue, of course, is determining to what degree they can proactively acquire and deploy those tools today, before the now-inevitable onslaught of code becomes too overwhelming to manage effectively.