“As with any new technology, it’s really important that we be thinking now about how to do [it] ethically and responsibly.” —Robert High, Vice President and Chief Technology Officer, IBM Watson
Here’s something to think about: According to an article in The Wall Street Journal, more than 30 countries have defensive weapons run by automation. South Korea has robotic sentries along the DMZ, and Israeli robots patrol the Gaza border. Russia has robotic vehicles, armed with 30mm cannons and anti-tank guided missiles, that are well equipped to battle it out with NATO forces should the need arise.
Got your attention? Wait, there’s more.
The SeaHunter is a U.S. military vessel that can travel the high seas on its own for months without a crew to feed or protect. It’s completely unmanned. And, it costs only $20 million per boat, less than half the $49 million it takes to fire 59 Tomahawk missiles. At that price, the SeaHunter is practically disposable.

The implications are profound. The value proposition is hard to resist: Why spend billions on building and manning traditional warships when, for the same expense or less, you can have a navy of predominantly robotic ships patrolling the oceans of the world? And, you can have robotic aircraft, capable of automated takeoff and landing, using those ships as mobile bases. An air force untethered to any land mass is a game changer, particularly when non-state actors can get their hands on a few of them. It sort of brings new meaning to the word “pirate”: no need to board a vessel by force, just get the access credentials.
And, at the core of all this modern weaponry are automation and growing capabilities in AI. That should give you something to think about.
We in DevOps are all about automation. It’s our raison d’être. Our fondness for, and dedication to, automation is admirable. We really have changed the way tech works. Yet, many of us think of our activities as mundane. It’s akin to the nonchalance of Major League Baseball players. They’ve hit the ball so many times that many of them have forgotten how much skill it takes just to foul off a bad pitch. Most people on the planet can’t hit a big-league pitch, let alone foul one off intentionally. But for the pro, it’s no big deal; it’s just part of a day’s work.
It’s the same for us. We’re so accustomed to automating our world that it’s become no big deal. But, it is a big deal. It’s a very big deal.
The easiest thing in the world is to trivialize the incredible impact our work has on others. After all, we’re “just working on” an ecommerce site, a dating app, or an IoT endpoint for wired bicycles, right? Those of us working in the defense industry might grasp the significance of the work, but for most of us, it’s business as usual.
Here’s the raw truth: Those of us doing the automation are changing the world. Each line of code we write is intelligence that alters the digital infrastructure of some part of the planet. The code we write will live on well after we’re gone, whether we’ve left the company or left the planet. Are we really clear about the impact we have? Yes, some of our tech superstars are asking us to pay attention, but how many of us really are?
Every day that passes we give automation more power. It might be a little piece of power, such as allowing only authorized users to access an application. It might be significant power, such as determining whether a person qualifies for a loan to buy a house. It might be enormous power, such as manipulating political forces toward a desired behavior. Or, it might be the ultimate power: Deciding who gets to live and who gets to die. But, regardless of degree, it is power and we have it. And, we continue to make automation more powerful. As surprising as it may sound, despite appearances to the contrary, we’re the ones the world trusts implicitly to do our work safely and responsibly. A lot of the world’s safety depends on how we do our work. Political leaders don’t write the code. We do.
Originally, I was going to call this piece “Death and Destruction in the Age of Automation” and write about how we’ve taken automation way beyond commercial IT into the realm of life-and-death decision-making. I decided otherwise. The world does not need another gloom-and-doom piece, nor does it need another piece of sensational journalism. We’d do better to focus more on the positive things we’ve achieved. Technology is doing a whole lot more good than bad.
But the fact remains that technology has accompanied death and destruction since Roman catapults assaulted the frontiers of ancient Europe. What’s really different now is that the technology thinks, or at least does a very good job of emulating thinking. And, we’re the ones making it think better. Thus, we should ask: Are we making technology think in ways that are safe and trustworthy? Are we acting in ways that are safe and trustworthy?
These are big questions. Do most of us really care about them? Or, is it easier to accept our work as mundane, with little consequence in the Big Picture? The nice thing about mundane is that it doesn’t require a lot of ethical contemplation. But, when you consider that the script that deploys an updated container to a Kubernetes cluster is not that far from the one that deploys an unmanned warship to a regional hotspot, things look different. And, contemplating the implications thereof becomes more than an academic exercise in a computer science ethics class.
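To make that comparison concrete, here is a minimal sketch of what “the script that deploys an updated container” can look like, assuming the official kubernetes Python client; the deployment name, namespace, and image tag are hypothetical, not taken from any real system.

```python
# A minimal sketch of a container deployment script, assuming the official
# `kubernetes` Python client. The deployment name, namespace, and image tag
# below are hypothetical examples.
from kubernetes import client, config


def deploy(image: str, name: str = "shop-web", namespace: str = "prod") -> None:
    """Point an existing Deployment at a new container image and let the
    cluster roll it out."""
    config.load_kube_config()  # reads cluster credentials from ~/.kube/config
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {"containers": [{"name": name, "image": image}]}
            }
        }
    }
    apps.patch_namespaced_deployment(name=name, namespace=namespace, body=patch)


if __name__ == "__main__":
    deploy("registry.example.com/shop-web:1.4.2")
```

A couple of dozen lines of routine automation, and whatever sits behind that image tag is now running somewhere in the world. What it does once it is there is the part we rarely stop to examine.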
More than 2,000 years ago, the philosopher Socrates said, “The unexamined life is not worth living.” Given the immense role that technology plays in today’s world and the power it wields, power that continues to move further away from our direct control, we might do well to say, “The unexamined technology is not worth doing.” It’s something to think about, while we still can.