APIs satisfy a long-standing and deep need: the ability to create consistent, reliable integrations between disparate systems, operating systems and datasets. As we started to use REST-based APIs, we also realized they filled a previously little-addressed gap in automation. Frankly, they are a force multiplier in DevOps delivery.
Early efforts at system-independent APIs were not great, but we learned lessons and are steadily getting better. While REST is the default solution for APIs, it probably isn’t the end of the road, as we continue to learn and improve. That’s what we in tech do – get excited about an idea, drive it to broad use and then iteratively improve it.
And that’s the root of the problem. We have been, and are, saturating our applications with APIs. Internal, external, external-that-rely-on-another-external. The thing that comes to mind every single time I look at an API dependency map for even a medium-sized application is “spaghetti code.” Those lines go everywhere, and quite frequently loop back on themselves.
And we’re going to have to maintain this creation. Maybe not us, individually, but us as in IT and DevOps teams. The only constant is change, and far too many of us are not only failing to plan for API changes, but failing to even think about them – because things are generally going swimmingly, and most IT managers and DevOps teams have more urgent things to worry about.
But risk management – particularly broad risk – is an essential part of future planning. If you lose access to external API X, what will you do to keep your application running? If you lose an entire internal system that hosts API Y, how can you recover in a timely manner? Most organizations will identify and repair single points of failure, but the usual fix is a pool, and the impact of losing Z servers from that pool still needs to be understood. At a minimum, questions like “At what point would user experience be impacted?” need to be asked, and a plan formulated to keep that from happening.
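That question doesn’t need a capacity-planning tool to get started – a back-of-the-envelope check is enough to make it concrete. Here is a minimal sketch; the pool size, per-server throughput and peak demand figures are made-up examples, not numbers from any real environment:

```python
# Hypothetical back-of-the-envelope check: how many servers can a pool lose
# before peak demand exceeds the remaining capacity? All numbers are examples.

import math

def servers_we_can_lose(pool_size: int,
                        requests_per_server: float,
                        peak_demand: float) -> int:
    """Return how many servers the pool can lose before peak demand
    exceeds remaining capacity (0 means no headroom at all)."""
    # Minimum servers needed to carry peak demand.
    needed = math.ceil(peak_demand / requests_per_server)
    return max(pool_size - needed, 0)

# Example: 10 servers, each comfortably handling 500 req/s, peaks of 3,200 req/s.
print(servers_we_can_lose(pool_size=10,
                          requests_per_server=500,
                          peak_demand=3200))  # -> 3; lose a fourth and users feel it
```

Even a rough answer like that tells you whether “lose Z servers” is an inconvenience or an outage, and that’s the number the plan should be built around.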
We are already seeing issues created by using external APIs. All of them are predictable: an API going offline, or even disappearing completely; an API interface or attendant data format change; a change to the algorithms underlying the API that alters the data it returns; and so on.
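None of these failure modes is exotic, which means none of them is hard to plan for. As a rough sketch – the endpoint, field names and fallback value below are hypothetical, purely for illustration – even a simple timeout-plus-validation wrapper forces you to decide up front what the application does when an external API goes away or quietly changes shape:

```python
# Hypothetical sketch: wrap an external API call with a timeout, basic
# response validation, and an explicit fallback. URL and fields are made up.

import requests

FALLBACK_RATE = {"currency": None, "rate": None, "stale": True}

def get_exchange_rate(currency: str) -> dict:
    try:
        resp = requests.get(
            "https://api.example.com/v1/rates",   # hypothetical external API
            params={"currency": currency},
            timeout=2,                            # fail fast if the API is down
        )
        resp.raise_for_status()
        data = resp.json()
        # Guard against a silent data-format change: validate what we rely on.
        if "rate" not in data:
            raise ValueError("unexpected response shape")
        return data
    except (requests.RequestException, ValueError):
        # The fallback is a business decision made now, in code review,
        # rather than discovered at 2 a.m. when the vendor has an outage.
        return dict(FALLBACK_RATE, currency=currency)
```

The point isn’t this particular pattern; it’s that the fallback behavior exists at all, and was chosen deliberately rather than inherited from a stack trace.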
API usage is far easier than it has historically been – and companies are working hard to make APIs easier to access, both yours and others’. This is a good thing, as it means less time spent figuring out and tweaking API calls and request data. But it also means our API dependence will only grow, and with it the number of lines in our dependency graphs – each representing some amount of risk.
“But we’re doing things in the cloud, so we can auto-scale, and failures are no longer an issue for us!” some shout. While that can help in some scenarios, it doesn’t help in others – if a vendor goes under and takes an API you use with it, or a preferred vendor (or internal component) becomes a security risk, you’ll still need to take steps. Knowing what and where your weaknesses are before they become issues is important, and worth your time.
Most of you are killing it, cranking out solutions faster, iterating to make them better, using APIs to extend functionality and reduce delivery times. I’m just saying, stay aware of the technical debt you’re creating, because debts always come due, and risks eventually become issues. So know what the risks in your infrastructure are, and have a plan to react when one of them does cause problems.