Fastly, Inc. this week added to its portfolio a set of observability tools optimized for edge computing platforms connected to its content delivery network (CDN).
Laura Thomson, senior vice president of engineering at Fastly, said the company is now providing a set of tools that enable DevOps teams to capture logging data and metrics in real time via a single console.
The first two tools available are Origin Inspector, which observes network traffic flows, including HTTPS response codes, and Domain Inspector, which provides access to metrics based on both historical and real-time data. In total, Fastly now provides access to more than 200 metrics spanning client, origin, cache, web application firewall (WAF), Compute@Edge, Image Optimization and other services, covering 100% of the data generated since the service was spun up.
There is no sampling of data, and the company also provides integrations with 33 storage and analysis services delivered by either Fastly or a third party, noted Thomson. IT teams can now log any aspect of HTTP requests and responses in a wide variety of logging formats, including the Common Log Format, JSON or key/value pairs. IT teams can also store their own logs to ensure they retain complete control over access.
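To make the format differences concrete, here is a minimal sketch of the same HTTP request/response record rendered in each of the three formats mentioned above. The field names and values are illustrative assumptions, not Fastly's actual log schema.

```python
import json
from datetime import datetime, timezone

# Illustrative request/response record (field names are assumptions).
record = {
    "client_ip": "203.0.113.7",
    "time": datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone.utc),
    "method": "GET",
    "path": "/index.html",
    "protocol": "HTTP/1.1",
    "status": 200,
    "bytes": 5120,
}

def to_clf(r):
    """Render the record in Common Log Format."""
    ts = r["time"].strftime("%d/%b/%Y:%H:%M:%S %z")
    return (f'{r["client_ip"]} - - [{ts}] '
            f'"{r["method"]} {r["path"]} {r["protocol"]}" '
            f'{r["status"]} {r["bytes"]}')

def to_json(r):
    """Render the record as a single JSON object per line."""
    return json.dumps(dict(r, time=r["time"].isoformat()))

def to_kv(r):
    """Render the record as space-separated key=value pairs."""
    out = dict(r, time=r["time"].isoformat())
    return " ".join(f"{k}={v}" for k, v in out.items())

print(to_clf(record))
print(to_json(record))
print(to_kv(record))
```

The same underlying data is preserved in each case; the choice of format mainly affects which downstream storage and analysis services can parse the stream without extra transformation.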
Over the next several months, Fastly will continue to expand this portfolio of observability tools, including an Edge Observer dashboard for customers deploying applications at the edge, added Thomson.
In general, observability tools are being used not only to improve processes and overall customer experience but also to identify and block suspicious activity. Fastly is making the case for observability tools that enable DevOps teams to achieve those goals without having to deploy a separate observability platform.
Most DevOps teams, of course, have been continuously tracking various pre-defined metrics for years. Observability tools, in contrast, are designed to make it feasible to interrogate logs, metrics and traces so that IT teams can discover issues before there is a disruption to services. The challenge is that not every IT team necessarily knows how to shape the queries required to discover the root cause of a potential issue.
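Such a query need not be elaborate. The sketch below, which assumes log entries have already been parsed into dictionaries with hypothetical `path` and `status` fields, shows one of the simplest interrogations a team might run: aggregating 5xx responses per URL path to surface where errors are concentrating before they become an outage.

```python
from collections import Counter

# Hypothetical parsed log entries; in practice these would stream in
# from a logging endpoint. Field names here are illustrative assumptions.
logs = [
    {"path": "/api/cart", "status": 200},
    {"path": "/api/cart", "status": 503},
    {"path": "/api/cart", "status": 503},
    {"path": "/index.html", "status": 200},
    {"path": "/api/cart", "status": 503},
]

def top_error_paths(entries, threshold=500):
    """Count server-error (5xx) responses per path, most affected first."""
    errors = Counter(e["path"] for e in entries if e["status"] >= threshold)
    return errors.most_common()

print(top_error_paths(logs))  # /api/cart stands out with three 503s
```

A team that knows how to iterate on queries like this, narrowing by path, status code or time window, can often reach a root cause long before a dashboard threshold fires.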
One way or another, DevOps teams will need to be able to remotely diagnose edge computing issues as the number of workloads deployed on these platforms expands. IT teams are not going to be able to travel to the physical location of every edge computing platform every time an issue arises. In many ways, the rise of edge computing will force a transition to observability tools that will enable IT teams to more proactively manage IT environments rather than respond after a metric has already exceeded its threshold.
It’s not clear to what degree that approach will prevent IT issues from occurring, but the time required to detect and remediate issues should decline dramatically as IT teams gain more visibility into IT environments, including edge computing platforms.