Infrastructure is changing as more services replace the need for hardware-based architectures
There has been a lot of change in the last few years, and that change shows no sign of slowing down. I’m one of those people who likes to know they could work offline if needed, yet in this day and age if you want me to install a fat client on my laptop, I go looking at your competition.
Granted, for me the selection of such apps is small (I like having Office on my desktop so I can write or build presentations at will, regardless of whether I'm connected), but for most people that selection is growing quickly. The idea of buying a monolithic system that you have to install and maintain is becoming less and less appealing.
That same syndrome is occurring in the data center, too, but with a twist. It is largely infrastructure that is feeling this pinch. Just prior to cloud/container mania, the idea was that each non-compute box in your infrastructure was a platform. It offered you a ton of functionality, all designed to simultaneously lock you in and help you get your job done. All in all, it wasn’t a bad trade-off, and we can see the same idea in the large public cloud vendors.
The difference is, with an infrastructure box, you were making a serious investment upfront that offered you other functionality to sweeten the deal. Once that investment was made, it was natural and realistic to want to take advantage of the bundled platform functionality rather than search for yet another box to drop in your network.
Containers and what survived of software-defined networking (SDN) have changed all of that. The ability to quickly spin up something in software that fits your needs perfectly and can be automated with your DevOps toolchain makes the idea of a platform box—with its high cost of entry and bells and whistles you will never use but might have to lock down—seem quaint.
In the agile/DevOps world, boxes are still necessary for low-level infrastructure. Don't get me wrong on that. If you are a large org, you are using one of a few vendors for core switching and routing. But higher-level networking (layers 4 through 7, and the proverbial layer 8) is not something you would use those static boxes for.
It used to be that all networking functionality was treated the same: it needed to be rock-solid and slow to change to guarantee that apps kept working. Over the last few years we have seen a move from "all networking is networking" to "this part is low-level plumbing that must be rock-solid, while that part sits closer to the application layer, can change rapidly, and doesn't need to be in the rock-solid category." Load balancing is a great example (and yeah, I used to work for F5, the king of load balancing): load balancing was a network function that you went to a hardware vendor for once you were beyond a certain size. The resulting physical box might serve one app or a hundred, and it required specialized training or experience to manage.
Today, load balancing is a systems function: you go to a software vendor and work as many copies of the software load balancer into your network as you need. Or you turn on load balancing within your container management system and manage it there. Either way, it fits your agile/DevOps environment better, and it is easier to get budget for your project.
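To make that shift concrete, here is a minimal sketch of a round-robin HTTP load balancer in Go. The backend addresses and listen port are made-up placeholders; a real deployment would pull backends from service discovery or the container platform rather than a hard-coded list, and would add health checks.

```go
// A minimal round-robin software load balancer sketch.
// Backend addresses below are illustrative assumptions, not a real topology.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	backends := []string{
		"http://127.0.0.1:9001",
		"http://127.0.0.1:9002",
		"http://127.0.0.1:9003",
	}

	// Pre-build one reverse proxy per backend.
	proxies := make([]*httputil.ReverseProxy, len(backends))
	for i, b := range backends {
		u, err := url.Parse(b)
		if err != nil {
			log.Fatal(err)
		}
		proxies[i] = httputil.NewSingleHostReverseProxy(u)
	}

	// Rotate through the backends, one per incoming request.
	var counter uint64
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		i := atomic.AddUint64(&counter, 1) % uint64(len(proxies))
		proxies[i].ServeHTTP(w, r)
	})

	log.Println("load balancer listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The point isn't that you should write your own load balancer; it's that the whole function now fits in a small piece of software you can version, script, and drop anywhere your DevOps toolchain reaches.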
I expect this change to intensify. There is power in physical boxes for some things, but there is also power in portability and packaged deployment scripts. And a physical box is not portable to a public cloud, nor is it readily dropped into container management tools (though container management integration is getting better in most network security/performance appliances). Portability and ease of use will win out over technical edge, in my opinion. Not because people are cheap and lazy, but because portability is key to agility and they are too busy.
There are still things that just aren't that portable, and I have yet to see convincing tools to make them so. Data is my go-to example because it is easy. If you have 2TB of data, your portability options are limited. For those with 2PB of data, there is no real option; the app is going wherever the data is, and that data is unlikely to move, as the back-of-envelope numbers below show. Oh, if you are motivated, there are ways to move that much data, but then you've just shifted the problem to the new platform, because the data is still largely non-portable, and apps will have to go where the data is. Again.
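Here is a rough sketch of why the big numbers are so stubborn. The 10Gbps link speed is an assumption, and the math ignores protocol overhead and retransmits, so real transfers only get slower than this.

```go
// Back-of-envelope: how long does it take to move 2TB vs 2PB over the wire?
// Sizes and link speed are illustrative assumptions, not measurements.
package main

import "fmt"

func main() {
	const (
		terabyte = 1e12 // bytes
		petabyte = 1e15 // bytes
		gigabit  = 1e9  // bits per second
	)

	// Hours to push a given number of bytes down a link at full line rate.
	transferHours := func(bytes, bitsPerSec float64) float64 {
		return (bytes * 8) / bitsPerSec / 3600
	}

	link := 10 * gigabit // assume a dedicated 10 Gbps pipe, no overhead

	fmt.Printf("2TB at 10Gbps: %.1f hours\n", transferHours(2*terabyte, link))
	fmt.Printf("2PB at 10Gbps: %.0f hours (~%.0f days)\n",
		transferHours(2*petabyte, link),
		transferHours(2*petabyte, link)/24)
}
```

2TB is an afternoon; 2PB is the better part of three weeks on a dedicated pipe you probably don't have to yourself. That is why the app goes to the data and not the other way around.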
But for most functionality? Anything that can be made into highly portable software will be. And that software will run wherever it is most needed or most beneficial to the user.
All in all, this is a good result. A multi-core PC with multiple network interfaces and terabytes of memory can rival many appliances for speed and functionality, and portability plus ease of integration makes software a winner. Just stay aware of where and when a physical box still makes sense, because the world doesn't change as fast as high-tech marketing wants you to believe. A working appliance that does what is needed quickly should not be ripped out just because a software solution is available.
But you knew that. Because you all rock. Keep kicking it, and use whatever solution serves the business best—which is a different equation for every business.