That automation is one of the critical components of DevOps is apparent in numerous surveys and studies focused on the benefits organizations have realized from implementing the approach across operational groups. Automation itself starts with (or should start with) a comprehensive API. That means infrastructure – both network- and application-focused – needs to expose, via an API, the myriad configuration options it requires as well as the optional features it offers.
It is through these APIs, after all, that integration between devices and systems and across operational groups occurs. The much-vaunted SDN or cloud “ecosystem” is really a euphemism for pre-packaged and validated integration. Integration that occurs via an API.
These same APIs are available to everyone, of course, and lay a foundation to operationalize app deployments by automating the various provisioning and configuration tasks required to move an app from dev to production.
APIs allow for fine-grained automation and orchestration, with each and every knob, button and slider option available to be flipped, switched or turned. The more features and functions possible, the more buttons and knobs and sliders there are to manipulate, figuratively speaking.
Each and every one of those API calls needs to be carefully managed. Exceptions happen, errors occur, and any script or tool ought to carefully validate that each and every one of those calls has been successful. Which is going to take a lot of code. Code that itself will introduce the possibility of errors and require maintenance and reviews and all that goes along with good development practices. Code that will also impact the time required to execute. Given that most APIs are RESTful and invoked via standard HTTP exchanges, each and every API call incurs time to send, time to execute, and time to receive a response. The overhead is basically an API tax; for every API call you invoke, you’re paying with time… and more time.
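To make that cost concrete, here is a minimal sketch of fine-grained provisioning. The endpoint, option names, and the stubbed transport are all illustrative assumptions; a real script would use an HTTP library and pay network latency (plus its own error handling) on every one of these calls:

```python
# A sketch of fine-grained automation: one hypothetical REST call per setting.
# The transport is stubbed so the pattern -- one round trip plus one
# validation per knob -- is visible without a real device.

SETTINGS = {"lb_method": "round-robin", "idle_timeout": 300, "snat": "automap"}

def http_post(url, payload):
    """Stand-in for a real HTTP POST; each invocation of a real one
    incurs time to send, time to execute, and time to receive."""
    return {"status": 200, "url": url, "payload": payload}

def provision_fine_grained(base_url, settings):
    calls = 0
    for option, value in settings.items():
        resp = http_post(f"{base_url}/config/{option}", {"value": value})
        calls += 1
        # Every call needs its own validation -- exceptions happen, errors occur.
        if resp["status"] != 200:
            raise RuntimeError(f"failed to set {option}")
    return calls

print(provision_fine_grained("https://device.example/api", SETTINGS))
```

Three settings already mean three round trips and three validation branches; a realistic configuration multiplies that by every knob, button and slider in play.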
Martin Fowler touches on this in his post “Microservices and the First Law of Distributed Objects”:
The consequence of this difference is that your guidelines for APIs are different. In process calls can be fine-grained, if you want 100 product prices and availabilities, you can happily make 100 calls to your product price function and another 100 for the availabilities. But if that function is a remote call, you’re usually better off to batch all that into a single call that asks for all 100 prices and availabilities in one go.
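Fowler’s point can be illustrated with a stubbed remote service that simply counts round trips; the class, method names, and price data here are hypothetical:

```python
# Fine-grained vs. coarse-grained calls against a stubbed remote service.
# 100 per-product lookups cost 100 round trips; one batched call costs one.

PRICES = {pid: pid * 1.5 for pid in range(100)}  # invented sample data

class RemoteService:
    def __init__(self):
        self.round_trips = 0

    def get_price(self, pid):
        """Fine-grained: one round trip per product."""
        self.round_trips += 1
        return PRICES[pid]

    def get_prices(self, pids):
        """Coarse-grained: one round trip for the whole batch."""
        self.round_trips += 1
        return {pid: PRICES[pid] for pid in pids}

fine = RemoteService()
prices = [fine.get_price(p) for p in range(100)]   # 100 round trips

batched = RemoteService()
batch = batched.get_prices(range(100))             # 1 round trip

print(fine.round_trips, batched.round_trips)       # 100 1
```

In-process, the two approaches are equivalent; across a network, the difference is a hundredfold API tax.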
Thus, for sufficiently complex systems, another option is needed: templates. Templates do not replace APIs, but rather provide an additional and far more compact method of accomplishing the same tasks automatically, without the overhead imposed by highly granular sets of API calls. Templates encapsulate a configuration, codifying the settings for a variety of knobs, buttons and sliders into a single configuration artifact. In a fine-grained, API-based integration, each of those settings must be maintained somewhere external to the automation script or system. Each API call will include one (or maybe two) of those variables as read from some file or system that maintains the application-specific set of data. This introduces many potential points of failure. Whether file or database, each variable must be retrieved before being associated with an API call and then transmitted to the receiving device. Errors may occur on read, on format, or on transmission.
In contrast, if all the variables exist as a single, validated configuration template artifact, that artifact can be sent, as a single entity, via one API call to the device or service in question. The number of potential points of failure is effectively reduced to two: retrieval of the artifact and its transmission. This greatly simplifies the process and reduces the time it takes to actually execute the configuration of the target device or service.
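A rough sketch of the coarse-grained alternative, with the endpoint, artifact schema, and stubbed transport invented for illustration:

```python
# The whole configuration travels as one validated artifact in a single call.
# The failure surface shrinks to two points: reading/parsing the artifact
# and transmitting it.

import json

ARTIFACT = '{"virtual_server": {"port": 443}, "pool": {"lb_method": "round-robin"}}'

def post(url, body):
    """Stand-in for the single HTTP POST carrying the entire template."""
    return {"status": 200, "applied": len(body)}

def deploy(artifact_text):
    config = json.loads(artifact_text)                        # failure point 1: read/parse
    resp = post("https://device.example/api/deploy", config)  # failure point 2: transmission
    if resp["status"] != 200:                                 # one check, not hundreds
        raise RuntimeError("deployment failed")
    return resp

print(deploy(ARTIFACT))
```

One call, one validation branch, regardless of how many settings the artifact encapsulates.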
In addition to simplifying integration by reducing the provisioning and configuration tasks from potentially hundreds of API calls down to a small set of calls, this approach has the benefit of fitting much more easily into an infrastructure as code strategy, where common configurations are treated as artifacts and stored in repositories much like their application counterparts.
There is an inflection point at which it becomes advantageous to use a template-based approach over individual API calls. That point occurs before the scripts driving the API-based automation become as complex as an application and thus bring with them the associated issues: error rates, time spent troubleshooting, and maintainability.
A template-based approach simplifies upstream workloads and integration as well as discrete automation scripts by reducing the set of variables required to just a few – usually those dependent on network-specific values such as IP addresses and network segmentation membership (tags or ids representing VLAN, VXLAN, NVGRE, etc…).
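A sketch of such a template, using Python’s `string.Template` for substitution; the variable names, values, and template shape are illustrative assumptions, not a real device schema:

```python
# A reusable configuration template in which only the network-specific
# values vary -- here a virtual-server IP and a segment (VLAN) id.

import json
from string import Template

APP_TEMPLATE = """
{
  "virtual_server": {"address": "$VS_IP", "port": 443},
  "segment": {"type": "vlan", "id": $SEGMENT_ID}
}
"""

def render(vs_ip, segment_id):
    """Fill in the few deployment-specific variables and validate the result."""
    return json.loads(Template(APP_TEMPLATE).substitute(
        VS_IP=vs_ip, SEGMENT_ID=segment_id))

cfg = render("10.1.10.20", 1120)
print(cfg["virtual_server"]["address"])  # 10.1.10.20
```

Every other setting is baked into the artifact once, validated once, and reused across deployments.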
This further ensures consistency in app deployments, which in turn results in greater stability of the underlying infrastructure.
APIs are a good thing. They’re a key enabler of software-defined architectures like SDDC, cloud and SDN. But API-enabling infrastructure doesn’t have to mean exposing only checkbox- and radio-button-level granularity. That can be valuable, but it can also lead to integration efforts that are just as complex as (or more so than) their manual counterparts. A template- or policy-based (application-driven) approach, coupled with an API through which to deliver and execute such constructs, results in a much cleaner, more consistent and stable means of integrating provisioning processes into the greater software-defined architecture.