A significant amount of bandwidth is spent discussing automation and praising the value of APIs within the broader DevOps umbrella. These concepts are particularly important in the context of CI/CD and the evolution of application architectures toward decoupled, service-oriented systems. Automation is used for build and test, driving the workflow that moves an app from a disconnected set of services to a working release candidate ready to breach the wall of production and be put into the hands of impatient consumers.
But in between the consumers and your app stands the production pipeline: a process-filled wasteland through which an app must travel so that the services required to deliver it can be provisioned, configured, and put into place. It is this pipeline that will meet DevOps head on next, whether we call it DevOps or SDN or SDDC. It stands in the way of an increasingly critical "time to market" metric by which all of IT is being measured. Research shows that the production pipeline is a major pain point for IT, taking weeks or even months to traverse. The network is still in the way, and that must be addressed if C-level concerns about the increasing pressure to get apps to market are to be met.
Applying DevOps principles to the infrastructure and network makes sense because DevOps is as much about optimizing processes (workflow) as it is about writing scripts, and it is the production pipeline process that would benefit. Automation and orchestration, of course, will help drive a more consistent, efficient, and repeatable traversal of the production pipeline. Identifying where inefficiencies (latency) lie in the process and eliminating them by establishing more efficient (and perhaps automated) handoffs can go a long way toward shortening the time between handoff to production and delivery to consumers.
APIs are our "go to" technology today for driving automation, orchestration, and integration with the frameworks and systems we rely on to scale operations. Nearly every infrastructure and network device today has an API and can be controlled via frameworks like Puppet and Chef as well as directly through that API.
But most of these are imperative methods of control, requiring processes to be strictly defined as a sequence of API calls. Unlike simpler resources such as virtual machines, network services need more than "start, stop, or pause." Configuration that was once (and still is, let's be honest) accomplished via the CLI (command line interface) over SSH is simply migrating to an API over HTTP.
But otherwise, not much has changed. The same commands used on the CLI are now sent via an API. We're still in an imperative mode, and if we need to tweak or adjust the configuration, we need to tweak and adjust the script that drives the configuration.
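The imperative mode described above can be sketched roughly as follows. This is a hypothetical illustration, not a real device API: the endpoint paths, payloads, and the `api_call` helper are invented, and the calls are recorded rather than sent so the sketch is self-contained.

```python
import json

# Hypothetical sketch of imperative control: the operator's script spells
# out every API call, in order. Endpoints and payloads are illustrative.
API_CALLS = []  # calls are captured instead of sent over HTTP


def api_call(method, path, payload=None):
    """Record what would be an HTTP request to a device's REST API."""
    API_CALLS.append((method, path, payload))


# Each step of the configuration is an explicit command, just as it would
# have been on the CLI -- only the transport has changed.
api_call("POST", "/pools", {"name": "app_pool"})
api_call("POST", "/pools/app_pool/members", {"address": "10.0.0.10:80"})
api_call("POST", "/pools/app_pool/members", {"address": "10.0.0.11:80"})
api_call("POST", "/virtuals", {"name": "app_vs",
                               "destination": "203.0.113.5:443",
                               "pool": "app_pool"})

print(json.dumps([path for _, path, _ in API_CALLS], indent=2))
```

Note that any change to the desired configuration (a new pool member, a renamed virtual server) means editing and re-sequencing this script, which is exactly the churn the paragraph above describes.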
Compare that now with a different approach: a declarative approach. Using templates to describe what needs to be configured or accomplished (data) as opposed to how it needs to be accomplished (commands) makes a significant difference in not only the process but in the long-term technical debt acquired (or not acquired, as the case may be). APIs change; methods deprecate, versions matter. Treating application services (load balancing, caching, optimization and acceleration, security) as “code” (templates) instead of a series of API calls can dramatically change the velocity with which such services can be provisioned and configured. This has the added effect of reducing the amount of churn in the automation code which should reduce technical debt as well. Such methods are also more portable, enabling easier migration into cloud environments for applications that might be (or will be) moved into or around different environments.
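The declarative alternative can be sketched as follows. The template schema and the `apply` function here are invented for illustration: the point is that the desired state is expressed as data, and a single reconciliation step, rather than the operator's script, owns the translation into API calls.

```python
# Hedged sketch of declarative control: desired state as data (a template),
# with one generic "apply" step. The schema is hypothetical.
desired_state = {
    "virtual_server": {"name": "app_vs", "destination": "203.0.113.5:443"},
    "pool": {
        "name": "app_pool",
        "members": ["10.0.0.10:80", "10.0.0.11:80"],
    },
}


def apply(template):
    """Translate the template into whatever calls the current API version
    requires. This is the only place API churn must be absorbed; the
    template itself never changes when methods deprecate."""
    plan = []
    pool = template["pool"]
    plan.append(("create_pool", pool["name"]))
    for member in pool["members"]:
        plan.append(("add_member", member))
    vs = template["virtual_server"]
    plan.append(("create_virtual", vs["name"]))
    return plan


plan = apply(desired_state)
```

Adding a pool member is now a one-line change to the data, and the same template can be handed to a different `apply` implementation in another environment, which is the portability argument made above.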
Ultimately, the question is whether or not you want to tie your processes – and the interfaces between workflow steps in that process – to many API calls or a standardized set of API calls that leverage declarative policies (templates) to provision and configure the services that make up the production pipeline.