Hardware vendors face a quandary. How do you distinguish yourself from your competitors if you can no longer grow a distinct architecture as your competitive advantage? But this dilemma does not begin and end with hardware vendors; operating system vendors and virtual machine vendors are on the same horizon. In an effort to “uncomplicate” hardware idiosyncrasies, our industry has evolved from open standards toward least-common-denominator functionality, a “common” language or set of features, and over time technologies such as containers emerge. If I can abstract my application from the “uniqueness” of hardware vendors and from the complexity of configuration settings, and make assumptions only about basics such as CPU, disk and memory, then I am free to focus on the application and allow my “hardware mitigation technology” to perform its function.
The Relationship Between App and Storage
But to accomplish this feat, I must agree to a basic set of features or functions I intend to call, such as making “x” amount of machine memory available to my application. Whether disk or RAM, I am looking to define what my application needs and then expect the container (or the hardware mitigation/abstraction manager of my choice) to manage the actual allocation. Then the quandary begins. Asking for 1TB of disk space, for example, is generic enough. But does my request include the ability to specify that I am looking only for disks that spin at 7,200 revolutions per minute or faster, or will I be saddled with 5,400 rpm disks? There is a clear implication for my app’s speed.
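To make the tension concrete, here is a minimal sketch in Python of a hypothetical resource request. The `StorageRequest` type and the `provision` scheduler are invented for illustration; no real orchestrator exposes this exact API. The point it demonstrates: a generic capacity field is easy to standardize, while a performance qualifier such as a spindle-speed floor may be quietly unenforceable.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageRequest:
    """Hypothetical storage request an app might hand to its container runtime."""
    capacity_gb: int                # generic: every provider understands "how much"
    min_rpm: Optional[int] = None   # specific: the platform may not be able to honor it

def provision(request: StorageRequest, available_rpm: int = 5400) -> str:
    """Toy scheduler: always grants capacity, but honors min_rpm only if it can."""
    if request.min_rpm is not None and available_rpm < request.min_rpm:
        return f"degraded: got {available_rpm} rpm, wanted >= {request.min_rpm}"
    return f"ok: {request.capacity_gb} GB at {available_rpm} rpm"

# A 1TB request with no qualifier always succeeds; one with a spindle-speed
# floor can quietly land on slower disks if the provider cannot honor it.
print(provision(StorageRequest(capacity_gb=1000)))
print(provision(StorageRequest(capacity_gb=1000, min_rpm=7200)))
```

In a real platform the second case is worse: the request usually succeeds silently, and the slowdown surfaces only later as the mystery described below.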
Ultimately, as I sit and tune my code because my app has become “slow,” it could be something as innocuous as its actual execution occurring at a cloud provider that uses the slower-spinning disks rather than the faster ones. Worse, it could simply be that the same cloud provider has a batch of the slower ones in one of the data centers where I wind up executing (only on a temporary basis), and I am never fully able to figure out why my app got “slow” for a while. That kind of random bad ju-ju requires the execution of a chicken to fix (I prefer Popeye’s, myself).
Different storage vendors might have additional peculiarities that make one superior to another in speed or reliability. But the container’s main function is to provide “disk” to my app. The specifics of particular disk vendors or providers are, in theory, the problem of the cloud provider, not the app developer. But hardware refreshes can be pricey, and a claim that “all our disks” spin above 10,000 rpm will be meaningful only for about 10 minutes once the 15,000 rpm models come out. And of course, there are many more architectural designs in the engineering that are intended to boost speed and reliability. If these are not to be ignored, a decision must be made as to “who” has the responsibility to manage their usage (usually the cloud provider) and “who” will complain when things get slow (usually you, the customer).
And what happens when “breakthroughs” occur in a specific technology? Solid-state drives, for example, are not exactly new now, but they once were. When you move your app from a high-speed spinning hard drive to a chip-based drive that operates far closer to the speed of RAM, the difference is an order of magnitude. The inevitable question emerges: Will my container have the ability to specify solid-state disks instead of traditional spinning disks as part of the disk commodity it is providing to my app?
Since the difference in speed is so great, you would think the container platform would offer a way to ask for it. You would equally expect offline storage such as tape (from the age of the dinosaurs) to have its own designation in my container structure, not be lumped in with offline spinning drives, given the speed differential. When you make a backup or a restore, the difference between tape and hard drive becomes as apparent as the difference between solid-state and spinning drives during app execution. So when a platform evolves, my container mitigation must evolve with it. But offering the uniqueness of vendor superiority requires my programmers to learn what to request (or what level of hardware specs they must know), which defeats the whole point of wanting a commodity in the first place: I don’t have to know.
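One way an evolving platform could keep pace is to expose storage as named tiers rather than a single undifferentiated “disk.” A minimal sketch, with invented tier names and deliberately rough relative latencies (the orders of magnitude are the point, not the exact numbers):

```python
from enum import Enum

class StorageTier(Enum):
    """Hypothetical tiers a container platform might expose (illustrative only)."""
    SSD = "ssd"     # chip-based, low latency
    HDD = "hdd"     # spinning disk
    TAPE = "tape"   # offline / archival

# Rough relative access latencies, normalized to SSD = 1.
RELATIVE_LATENCY = {
    StorageTier.SSD: 1,
    StorageTier.HDD: 100,
    StorageTier.TAPE: 1_000_000,
}

def pick_tier(archival: bool, latency_sensitive: bool) -> StorageTier:
    """Toy policy: the app states intent; the platform maps intent to media."""
    if archival:
        return StorageTier.TAPE
    return StorageTier.SSD if latency_sensitive else StorageTier.HDD
```

The design choice worth noting: the app declares intent (archival, latency-sensitive) rather than naming hardware, so the tier list can grow with new media without the programmer learning vendor specs.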
Impact of Commodity on Other Components
This same dilemma shows up with CPU vendors. Intel touts its superiority, AMD would disagree, and there may always be an upstart who wants to rival the current leader in any space. Lest we think Intel-based computing is the undisputed champion, quantum computing may upset that assumption (unless, of course, Intel corners that market as well). The point here is simply to ask: how many features, how many specifications, how much does the programming team need to know about its platform provider to understand why and where our application runs better or worse? In the days before container-style ideology, hardware vendors touted their differences and their uniqueness as the reason they were better. Now what?
Then come our virtual machine providers, and eventually our operating systems. If I want to truly abstract from my hardware layer, I must not call an OS-unique command that only Linux or only Windows can perform. I must stick to functionality that both OS instances can reasonably understand and accommodate. While this least common denominator is great for extending where my app can run (on multiple types of hardware, in multiple types of OS installations), it has the same effect on uniqueness and, potentially, on competitive advantage. If the intent is to come up with a one-world OS, I am generally in favor, but our journey there is sure to have casualties in innovation that only a single OS is suited for and its competitors are not.
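In code, the least-common-denominator discipline looks like preferring portable standard-library calls over OS-unique ones. A small Python illustration using the standard `pathlib` module (the `.myapp` path is made up for the example):

```python
import sys
from pathlib import Path

# Portable: pathlib composes paths correctly on both Windows and Linux,
# using the right separator and home-directory convention for each.
config_path = Path.home() / ".myapp" / "config.toml"

# Non-portable temptations that tie the app to one platform:
#   config_path = "C:\\Users\\me\\.myapp\\config.toml"  # Windows-only layout
#   os.system("ls -la")                                 # Linux/macOS-only command

print(f"running on {sys.platform}, config at {config_path}")
```

The trade-off is exactly the one described above: sticking to what every OS can do widens where the app runs, at the cost of never touching the unique capabilities any one OS offers.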
The inevitable question becomes: How far down, or how far up, the stack do we want to push commodity provisioning of computing capabilities? One could argue that the simplicity of the iPhone platform is a user-centric expression of commodity provisioning at the top of the stack. Drag-and-drop programming languages are yet another expression of bucketing computing capabilities into simpler forms (that more people can understand and, therefore, use). Commonality and simplicity do increase audience. In many segments of our industry, this is the nirvana of sales to the public.
Impacts to the Bottom Line
DevOps focuses on enabling innovation, and at its core that means software innovation. Let’s face it: changes to hardware platforms are generally less frequent. If a hardware platform is married to a particular app (not serving a large number of apps on the same box, the same virtual instance, etc.), then at least provisioning can be tuned one to one with the app. But with the desire to turn infrastructure into a generic commodity that application programmers don’t have to think about comes the fact that they stop thinking about it. Over time they will understand less and less about why things become slow, or work better, as the knowledge of uniqueness goes away.
Depending on your market focus, ubiquity is either your friend (as it touches consumers) or your enemy (if you are the hardware vendor attempting to maintain relevance over your competitors in an increasingly commoditized market). The questions I raise in this article are designed to focus on how to keep our trend toward hardware mitigation from stifling innovation in the very hardware platforms we depend upon. In any case, I welcome your thoughts on the topic.
To continue the conversation, feel free to contact me.