There’s a joke making its way around the internet that goes like this:
Q: How do DevOps engineers change a lightbulb?
A: They don’t. It’s a hardware problem.
The trend of abstracting hardware away from the day-to-day work of most IT personnel continues. We in DevOps have given it a name: infrastructure as code. Hardware has been pushed so far back into the recesses of IT that I'll wager most folks working in information technology today have never seen the inside of a data center, and fewer still have done something as labor-intensive as running cable over a rack of servers. For many, hardware is but a concept, made real every month by a bill from AWS, Azure or Google Cloud.
But hardware is real, and it matters a lot. If you don't think so, look at your cellphone. That little piece of hardware has changed just about everything about how the world works today. The cellphone has not only made possible Tinder, Twitter, Instagram, Uber, Lyft and a multitude of startups that will never see an IPO or private equity buyout; it has also brought earth-shaking events such as the Arab Spring, Brexit and SpaceX landings into the palms of viewers worldwide.
Hardware counts, and it's going to count even more in the coming years. Those working at the forefront of innovation understand that specialized hardware is, and will continue to be, a critical factor in the development of increasingly complex software. Cloud providers such as AWS already let you choose specific hardware configurations. For example, the g3.16xlarge EC2 instance type is backed by GPUs as well as CPUs. This is only the tip of the iceberg.
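Concretely, picking that special hardware can be as simple as an API parameter. Here's a minimal sketch of how one might target a GPU-backed instance type through EC2's RunInstances API; the helper function and AMI id are hypothetical placeholders, not a real deployment:

```python
# Sketch: build the parameters for EC2's RunInstances call so that the
# resulting machine is backed by GPU hardware. The helper name and the
# AMI id are illustrative only.
def gpu_instance_request(ami_id, instance_type="g3.16xlarge"):
    """Return RunInstances parameters targeting a GPU-backed type."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,  # CPU- and GPU-backed instance type
        "MinCount": 1,
        "MaxCount": 1,
    }

# With AWS credentials configured, this could be submitted via boto3:
#   import boto3
#   boto3.client("ec2").run_instances(**gpu_instance_request("ami-12345678"))

print(gpu_instance_request("ami-12345678")["InstanceType"])
```

The point is less the snippet itself than the fact that "what chips back my workload" is now a line of code rather than a purchase order.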
Packet's Smith puts it this way:

“People ask what’s next. Well, this stuff [GPUs] is an example of what’s next. It’s not just really expensive stuff, but really specialized stuff. Whenever you start to do something that’s really big and really important, which are a lot of the things we’re doing on the internet and in the world, whether it’s bioinformatics or cars driving themselves, those require really big workloads. There’s a lot of data, a lot of processing and you just can’t do it with generic hardware. The reason that hardware is the next innovation layer is because of all the software you guys have been writing to make software portable and deployable around the world in minutes, now the software wants to touch on the new kinds of hardware. And I think that embracing this is the next wave.”
Packet, and companies like it, are looking forward to a time when computing platforms move beyond the generic to the specialized. We're already seeing the trend play out: the stock of Nvidia, a major GPU manufacturer, has risen 10x in the last three years. (See Figure 1.)
The projected demand for specialized chips is also attracting a lot of venture capital (VC) money, funding startups such as Cerebras, Wave Computing and Graphcore. Even Google has gotten into the specialized hardware business with its Tensor Processing Units (TPUs). Hardware is indeed becoming very cool again.
So, what does all of this have to do with automation?
Automation is a processor-intensive undertaking. And, as artificial intelligence (AI) weaves itself further into the fabric of day-to-day data processing and machine learning, we're going to see more demand for specialized hardware to handle the activity. In a way, that's not new; we've had specialized hardware for a while now. TV remote controls were around well before Bluetooth came on the scene. What is new is the degree of intelligence at play in these devices. All my old-style TV remote control had to do was change the channel when I clicked a button. Today, my remote control recognizes my voice and helps me find a show of interest. It's a big leap from an operation that responds to a button click to one that uses voice recognition and inferential lookups.
As Packet's Smith pointed out above, as the complexity of software grows, so, too, will the need for specialized hardware to drive it. The new trend will be to wrap hardware around software. Thus, I have no trouble imagining a new type of hardware designed to handle one very particular process, such as gene editing. Imagine ingesting a pill that is really a nanodevice with the intelligence to travel your body and physically alter your genetic makeup. It's not that far-fetched an idea. Making it a reality isn't so much about the software as about the hardware. We don't have the nanotechnology yet. Hence, the opportunity.
Right now, most automation runs on generic chips housed in remote data centers around the world. A lot of the work we're doing is still commodity: spinning up Kubernetes clusters, running facial recognition under TensorFlow or discovering security vulnerabilities with Amazon Macie, for example. Yet, as AI becomes more commonplace, the use cases for which it is appropriate will grow. This growth will require new types of specialized hardware. Engineers will imagine the software and then design the hardware to make it run. We'll reach a point, not that far away, where innovation works as Smith describes: wrapping hardware around software. As a result, we'll go from the "Internet of Things" to the "Internet of Really, Really Smart Things, from the Microscopic to the Gargantuan." The implications will be profound.
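Even the commodity work hints at where this is headed: Kubernetes already treats GPUs as schedulable resources that a workload can request explicitly. Here's a minimal sketch of a pod manifest, assuming a cluster whose GPU nodes run the NVIDIA device plugin; the pod and container names are illustrative:

```yaml
# Illustrative pod spec: requesting one GPU through the Kubernetes
# extended-resource mechanism (requires the NVIDIA device plugin on
# the cluster's GPU nodes).
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference          # hypothetical name
spec:
  containers:
  - name: model-server         # hypothetical container
    image: tensorflow/tensorflow:latest-gpu
    resources:
      limits:
        nvidia.com/gpu: 1      # schedule only onto a GPU-backed node
```

The scheduler will then place the pod only on a node that actually has a GPU to offer, a small-scale preview of software declaring the hardware it needs.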