Under an early access program, ngrok today made available an application programming interface (API) gateway that can be consumed as a service.
Rather than having to deploy, maintain, and secure API gateways themselves, application development teams can now, via a single command or function call, invoke a service through which they can route API traffic, authenticate and authorize access, and apply rate-limiting policies using an ngrok policy engine.
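For illustration, that single function call might look something like the following minimal sketch, which uses ngrok's published Python SDK. The traffic_policy parameter, the policy schema and the rate-limit action shown here are assumptions drawn from ngrok's documented traffic policy engine and SDK, not the exact early-access gateway API.

```python
# Minimal sketch, not the early-access API itself: function and parameter
# names follow ngrok's published Python SDK and documented traffic policy
# format; treat them as assumptions where the new gateway service is concerned.
import ngrok

# Hypothetical traffic policy: rate-limit API callers by client IP before
# requests reach the upstream service. Authentication and authorization
# actions could be chained here in the same way.
POLICY = """
on_http_request:
  - actions:
      - type: rate-limit
        config:
          name: api-default
          algorithm: sliding_window
          capacity: 100
          rate: 60s
          bucket_key:
            - conn.client_ip
"""

# A single function call routes API traffic through the managed gateway.
# The auth token is read from the NGROK_AUTHTOKEN environment variable.
listener = ngrok.forward(8080, authtoken_from_env=True, traffic_policy=POLICY)
print(f"API traffic now routed through {listener.url()}")
```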
ngrok CEO Alan Shreve said this approach also makes it simpler for IT teams to unify ingress management at a time when application networking is becoming more challenging to manage. That approach also reduces friction because, instead of being dependent on an IT operations team, it gives developers more direct control over APIs, he added. Today, most access to APIs is granted only after developers first create a ticket request in an IT services management (ITSM) platform, noted Shreve.
In addition, the ngrok gateway streamlines access to APIs spanning multiple clouds and on-premises IT environments in a way that enables organizations to implement a multi-cloud computing strategy, he added. Today, many organizations are locked into a single platform because they are dependent on the API gateway services provided by their cloud service provider, noted Shreve.
Finally, organizations will only need to pay for the API gateway service based on actual usage versus incurring the cost of acquiring a gateway upfront, said Shreve.
ngrok has previously made software development kits (SDKs) available to provide programmable access to its API gateway. The as-a-service option being offered now is the next logical extension of that effort, noted Shreve.
In general, more infrastructure than ever is being consumed as a service, so making available an API gateway service is only the latest example of that larger trend, he added.
Each organization, based on total cost, will naturally need to decide when it makes the most economic sense to consume IT infrastructure as a service. However, as application networking becomes more challenging to manage and the number of IT professionals with that expertise remains limited, relying on a service provides a viable alternative for managing what are often hundreds of APIs spanning a distributed computing environment.
It’s not clear yet precisely which teams within an IT organization are assuming responsibility for application networking, but as layer 4 through layer 7 of the networking stack become more programmable, more responsibility for networking can be assumed by DevOps and platform engineering teams. Existing networking teams, meanwhile, continue to manage the physical networking underlays used to route network traffic. Regardless of how application networking evolves, the rigidity that has characterized the delivery of network services is starting to fade away in a way that makes it possible to dynamically invoke networking services as needed.
IT teams must decide how best to approach application networking across legacy monolithic applications, microservices and event-driven applications. The goal, however, should be to eliminate as much friction as possible at a time when applications have become more dependent on networking services than ever.