The rise of edge computing is about to drive a long-overdue convergence of DevOps, data engineering, security, networking, operational technology (OT) and machine learning operations (MLOps) best practices, which will ultimately make IT teams more responsive than ever to the needs of the business.
Historically, each of these IT disciplines has tended to work in isolation, with deliberate handoffs made between the teams employed to provision and maintain different services. The challenge is that as more applications are deployed at the network edge, these teams need to work hand in glove. Applications not only need to be deployed across a much more distributed computing environment, but they also need to be regularly updated and patched.
At the same time, increasing amounts of data are being processed and analyzed at the point where they are collected and consumed. More stateful applications running at the network edge are passing aggregated analytics data back to applications running in the cloud or in on-premises IT environments. That shift requires a fundamentally different approach to how storage is managed across multiple edge computing environments and the various federated backend systems spanning the enterprise.
Organizations will typically need to store files locally on edge computing platforms while also keeping copies of those files in the cloud, using an S3-compatible storage service that provides inexpensive object-based storage for everything from data protection to driving artificial intelligence (AI) applications. That multi-protocol capability creates an opportunity not just to build new stateful distributed applications but also to modernize existing applications, regardless of which protocols are used to store data.
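To make that concrete, the sketch below shows one way the dual-write pattern could look in Python, assuming a hypothetical S3-compatible endpoint, bucket and local directory; all of the names are placeholders rather than references to any particular product.

```python
import shutil
from pathlib import Path

import boto3  # AWS SDK for Python; works against any S3-compatible endpoint

# Hypothetical settings; substitute your own endpoint, bucket and paths.
LOCAL_STORE = Path("/var/edge/data")
S3_ENDPOINT = "https://objects.example.com"  # S3-compatible object storage service
BUCKET = "edge-site-42"

s3 = boto3.client("s3", endpoint_url=S3_ENDPOINT)


def persist(file_path: str) -> None:
    """Keep a local copy for low-latency access and mirror it to object storage."""
    src = Path(file_path)
    local_copy = LOCAL_STORE / src.name
    shutil.copy2(src, local_copy)                        # local file protocol
    s3.upload_file(str(local_copy), BUCKET, src.name)    # S3 object protocol


if __name__ == "__main__":
    persist("/tmp/sensor-batch-0001.parquet")
```

The local copy serves low-latency access at the edge, while the object-store copy underpins data protection and downstream AI workloads.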
Most edge computing platforms today are managed by OT teams that report directly to the leaders of specific business units. However, as more of these platforms are connected to networks, the imperative to manage them centrally as part of any effort to lower the total cost of IT only grows. After all, the most expensive element of IT remains the cost of the labor still required to manage it.
Naturally, sharing data in near real-time across a distributed computing environment puts more pressure on network operations (NetOps) teams. Instead of applications being updated in batch mode once a day, data streaming continuously from thousands of edge computing platforms needs to be synchronized with multiple applications to surface the latest, most accurate insights.
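As a rough illustration of that shift from batch updates to continuous synchronization, the following standard-library Python sketch fans each simulated edge reading out to multiple downstream applications as it arrives; the sites, queues and application names are hypothetical.

```python
import asyncio
import random


async def edge_site(site_id: int, queues: list[asyncio.Queue]) -> None:
    """Simulate one edge platform continuously emitting readings."""
    while True:
        reading = {"site": site_id, "value": random.random()}
        for q in queues:                 # synchronize every downstream application
            await q.put(reading)
        await asyncio.sleep(1.0)


async def application(name: str, queue: asyncio.Queue) -> None:
    """A downstream application consuming readings as they stream in."""
    while True:
        reading = await queue.get()
        print(f"{name} updated with {reading}")


async def main() -> None:
    queues = [asyncio.Queue() for _ in range(2)]   # e.g., analytics and monitoring
    consumers = [application("analytics", queues[0]),
                 application("monitoring", queues[1])]
    producers = [edge_site(i, queues) for i in range(3)]
    await asyncio.gather(*producers, *consumers)


if __name__ == "__main__":
    asyncio.run(main())
```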
More challenging still, much of that data is being used to train AI models whose associated inference engines ultimately need to be deployed at the edge. Because those inference engines process terabytes of data, the underlying platforms that support them will need capacity measured in terabytes as well. Over time, AI models tend to drift or, in the case of generative AI, outright hallucinate. Data scientists will need to work closely with DevOps teams to replace those AI models because, unlike a traditional application, they can’t be updated via a patch. The entire AI model needs to be replaced with one that is not just more reliable but also, just as importantly, safer.
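A minimal sketch of what that whole-model replacement might look like on an edge device follows, assuming a hypothetical model registry URL and a simple accuracy-based drift signal; none of the names below come from any specific MLOps product.

```python
import json
import urllib.request
from pathlib import Path

# Hypothetical edge agent: when monitored accuracy drifts past a threshold,
# the entire model artifact is swapped out rather than patched in place.
MODEL_DIR = Path("/var/edge/models")
REGISTRY_URL = "https://models.example.com"   # hypothetical model registry
DRIFT_THRESHOLD = 0.05


def measure_drift(live_accuracy: float, baseline_accuracy: float) -> float:
    """Simple drift signal: drop in accuracy versus the validated baseline."""
    return baseline_accuracy - live_accuracy


def replace_model() -> str:
    """Download the full replacement model and atomically switch the active link."""
    meta = json.loads(urllib.request.urlopen(f"{REGISTRY_URL}/latest.json").read())
    artifact = MODEL_DIR / f"model-{meta['version']}.onnx"
    urllib.request.urlretrieve(f"{REGISTRY_URL}/{artifact.name}", str(artifact))
    current = MODEL_DIR / "current"
    current.unlink(missing_ok=True)
    current.symlink_to(artifact)   # whole-model swap, not an in-place patch
    return meta["version"]


if __name__ == "__main__":
    if measure_drift(live_accuracy=0.81, baseline_accuracy=0.90) > DRIFT_THRESHOLD:
        print("Drift detected; replaced model with version", replace_model())
```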
Finally, the need to secure edge computing platforms is nothing short of critical. Cybercriminals tend to view these platforms as gateways to the rest of the enterprise, and each new edge computing platform deployed expands the attack surface that needs to be defended. Unless cybersecurity teams are deeply involved in building and deploying edge computing applications and platforms, it’s only a question of when, rather than if, a breach will occur.
It may be a while before these various IT fiefdoms converge, but at this point, it is all but inevitable. In fact, it will soon be difficult to distinguish between edge computing and the rest of the IT environment as event-driven applications spanning multiple platforms become more the norm than the exception.
DevOps teams, as part of their relentless commitment to automation, should naturally be at the forefront of these efforts. Rather than confining DevOps principles to how applications are built and deployed, the time has clearly come to automate IT workflows on an end-to-end basis. The only way to programmatically manage edge computing at scale is to apply DevOps principles just as broadly. Also known as platform engineering, that approach enables organizations to manage distributed IT environments without hiring a small army of IT specialists to do it all. There will always be a need for specialized expertise, but as IT continues to evolve, the silos that slow down the pace of innovation will need to come down.
Not every member of an IT organization needs to know how to expertly manage every task, but they should be familiar with the inherent dependencies that exist between applications, networking, storage and cybersecurity to advance the goals of businesses that have never been more dependent on IT.
Each organization needs to determine the pace of convergence that best suits its needs, but the faster it occurs, the greater the return on the investment in edge computing will become. The challenge and the opportunity, as always, will be managing yet another major IT transition, where the issues that need to be resolved are as much cultural as they are technical.