The DevOps Revolution started with a handful of simple principles. A key one was that systems should be managed by code, not by humans, to reduce complexity and ensure better security, compliance and scalability. To manage compute infrastructure at any scale, the process should be automated, with the logic captured in code. And, ideally, automation would also apply as you move up the stack to the middleware and application layers.
Unfortunately, it’s not that simple. In fact, automating applications turns out to be far more complex than automating server deployments. For this reason, we need a new approach to managing applications and infrastructure. Managing servers and lower-level environments could certainly be simpler, but the existing bottom-up methods work well enough. Managing applications, however, must happen from the top down. That is, applications should come with everything they need to be successful: configuration, packaging, discovery, scaling requirements and more. And that just isn’t how we have historically built software. Applications are built on layers and layers of dependencies that reach all the way down to the system level. So how do we break the mold and make an application “self-contained”?
Thousands of Apps with Millions of Dependency Permutations
By automating the Linux and Windows server ecosystems and a small number of derivative operating systems, we can cover roughly 95 percent of the operating systems in use, along with the set of functions common across them. The complexity arises when we lay that matrix over the array of applications that exist in the real world of deployment and production. Smaller companies usually run at least a dozen applications, often more. Larger companies deal with hundreds or thousands of applications.
This melange spans bespoke apps written in legacy languages, proprietary third-party software and more modern open source software. There are literally millions of permutations that must be automated across dependencies, security and compliance regimes. In the world of Kubernetes, for example, each application often requires its own sidecar container carrying configuration and management information. Kubernetes sidecars, however, don’t have robust automation capabilities, and they quickly sprawl into a mass of hand-written management instructions that can easily conflict or break down under stress.
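To make the combinatorics concrete, here is a back-of-envelope sketch in Python. Every number in it is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: why application automation explodes combinatorially
# while OS automation stays tractable. All counts are hypothetical.
os_variants = 4            # e.g. two Linux families, Windows, one derivative
apps = 1000                # a large-enterprise portfolio
deps_per_app = 20          # direct runtime dependencies per application
versions_per_dep = 5       # dependency versions in active use
compliance_profiles = 3    # e.g. PCI, HIPAA, internal baseline

# The OS side is a small, fixed surface; the application side multiplies out.
permutations = os_variants * apps * deps_per_app * versions_per_dep * compliance_profiles
print(f"{permutations:,}")  # 1,200,000
```

Even with these modest assumptions, the application side crosses a million combinations, while the operating-system side stays at a handful of variants.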
The upshot of this? Today we can write a small amount of code that effectively manages hundreds of thousands of servers for a company like Facebook, and that approach holds at essentially any scale. But automating the management of Facebook’s software injects orders of magnitude more complexity.
Fear of Application Complexity Killing Cloud Migrations
This dichotomy has pushed infrastructure automation sideways: organizations have adopted automation in name only, coupled with departures from the real ideals of DevOps and agile methodologies. Many companies just aren’t realizing the true benefits of infrastructure automation because what they are doing is needlessly complicated. It may have the trappings of automation, but in practice it merely exchanges one form of complexity for another, and the word itself loses its meaning.
Similarly, many application development teams are doing what they call “ad hoc” infrastructure automation as DevOps teams avert their eyes and pray. Translation? A roll-your-own mishmash of tools to manage containers or VMs, applications and all the dependencies from the top to the bottom of the application runtime environment. What this really means is a human executing an ever-changing set of narrow paths against individual servers: a recipe for chaos. This chaos is why the majority of our customers believe infrastructure automation is a critical step toward agility, yet only a smaller subset has fully embraced it.
It’s also a major reason why many companies that would love to migrate to the cloud have not yet done so. Managing the repackaging and movement—the “lift-and-shift”—of all their applications is a daunting prospect that they rightly believe will be far more difficult than simply switching from bare-metal or VMs to instances in AWS or Azure or Google Compute Engine.
Now, we don’t need to throw the baby out with the bathwater. The automation of operating systems, VMs and containers works just fine and is a tremendous improvement on past efforts. In fact, we could easily operate there with a smaller set of features and capabilities than we have built to date. But applications must be managed from the top down to accommodate the inherent complexity of managing so many more moving parts.
Enter The New Paradigm: Top-Down Application Management
Top-down application management works in a fundamentally different way. Rather than imposing rigid configurations and choices about what an application should do, modern application management tools for DevOps teams should ask what an application needs to succeed and do its job well across the entire life cycle, from development to production to retirement. These tools should automatically discover changes in the dependencies of an application stack, from front end to middleware to database, load balancing and security.
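As a sketch of what “comes with everything it needs” could look like, here is a hypothetical top-down application manifest in Python. The field names and values are illustrative assumptions, not any real tool’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class AppSpec:
    """Hypothetical top-down manifest: everything the application
    declares it needs to succeed, carried alongside the app itself."""
    name: str
    runtime: str                                        # e.g. "node-18"
    dependencies: dict = field(default_factory=dict)    # name -> version constraint
    config: dict = field(default_factory=dict)          # runtime configuration
    min_replicas: int = 1                               # scaling requirements
    max_replicas: int = 10
    health_check: str = "/healthz"                      # how success is observed

# Example: an app declaring its dependencies, config and scaling needs up front.
spec = AppSpec(
    name="orders-api",
    runtime="node-18",
    dependencies={"openssl": ">=3.0.13", "postgres-driver": "~8.11"},
    config={"DB_HOST": "orders-db.internal"},
    min_replicas=3,
)
print(spec.dependencies["openssl"])  # >=3.0.13
```

The point of the sketch is the direction of authority: the application states its needs, and the tooling below is responsible for meeting them.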
If these dependencies or other critical factors change, those tools need to automatically rebuild and test the applications, and have them ready to deploy either as a canary or, if the changes are minor, directly into production. For example, if a new SSL patch is published and a DevOps team updates its underlying infrastructure, the application management toolchain should recognize which applications need which changes, and how those changes might affect both upstream and downstream dependencies. Ultimately, the toolchain should quietly and seamlessly apply the patch to all affected applications while avoiding service interruption.
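One way such a toolchain might decide between a canary and a direct deploy is to fingerprint the dependency set and classify the change. This Python sketch assumes a simple semver-style rule (patch-level bumps deploy directly, anything bigger goes through a canary); it is an illustration, not any particular product’s logic:

```python
import hashlib
import json

def fingerprint(deps: dict) -> str:
    """Stable hash of a resolved dependency set (name -> exact version)."""
    return hashlib.sha256(json.dumps(deps, sort_keys=True).encode()).hexdigest()

def plan(app: str, old_deps: dict, new_deps: dict) -> str:
    """Decide what to do when an app's dependencies change.
    Patch-level bumps only -> rebuild and deploy directly;
    anything larger -> rebuild and route through a canary."""
    if fingerprint(old_deps) == fingerprint(new_deps):
        return "no-op"
    changed = {k for k in new_deps if old_deps.get(k) != new_deps[k]}
    patch_only = all(
        old_deps.get(k, "").rsplit(".", 1)[0] == new_deps[k].rsplit(".", 1)[0]
        for k in changed
    )
    return "rebuild+deploy" if patch_only else "rebuild+canary"

# An SSL patch bump (1.1.1 -> 1.1.2) is patch-level: deploy directly.
print(plan("orders-api", {"openssl": "1.1.1"}, {"openssl": "1.1.2"}))
# A major bump (1.1.1 -> 3.0.0) goes through a canary first.
print(plan("orders-api", {"openssl": "1.1.1"}, {"openssl": "3.0.0"}))
```

A real toolchain would also trace upstream and downstream dependents of each changed package, but the core decision, detect, classify, then stage the rollout, is the same shape.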
Once a human on the DevOps team defines the success criteria for an application, the job of the application configuration and management tools is to ensure those criteria are always met. Collectively, this is a new type of packaging that captures the best of all possible worlds, merging configuration and packaging into intelligent application management that can scale and flex as needed.
By capturing the dependencies of configuration and the strictures of traditional packaging, top-down application management methodologies and tools let a CIO move an aged C or C++ back-office application into a cloud-native environment just as easily as a passel of Node.js microservices backstopping a mobile application, and run both with the same level of continuity and flexibility. Which is really what everyone wants: a safer, easier and better way to run applications without having to worry about the underlying infrastructure.
In a sense, then, the DevOps Revolution is entering a new phase. As microservices continue to slice and dice the old functionality of monolithic apps into numerous smaller apps, the challenge of managing this app proliferation will only grow—and become more complicated.
The way we thought about DevOps tools when cloud computing was relatively young was understandably naïve. It was not yet informed by the vast cornucopia of applications we would face in subsequent years, magnified by the reality that decades-old legacy applications will keep running in corporate data centers until their code keels over. Now we know better, and we can design automation methodologies and practices that map to our modern realities. We can do this by giving infrastructure the simple rules and tools it needs, while giving apps more leeway to be themselves and succeed regardless of what is happening at lower levels of the stack.