As software eats the world, it’s hard to have a conversation about anything that isn’t directly related to or driven by modern mobile, web and IoT applications. DevOps describes the modern approach to building these applications. But like all new movements, it comes at a cost. And while some of the costs associated with modern development are not new, it’s becoming increasingly clear that DevOps organizations have to pay their tech tax or prepare for trouble.
This post was inspired by a Twitter conversation with a longtime technical peer, Nick Kellett (@NickKellett).
Nick knows a little about the evolution of modern development: he has been a part of the transition from enterprise line-of-business (LOB) application development, such as SharePoint, toward more Agile adoption of APIs within those heavy systems, and he is now working on bringing LOB app development into DevOps. No small feat. As an artifact of working with enterprises, he also knows that some of the effort in deploying anything new is making sure that it sustains over time.
Our conversation centered on the fact that eventually, you have to get real about your DevOps implementation. And the sooner you get over your Docker, microservices and culture honeymoon, the sooner you can focus on something that is sustainable.
I have a former co-worker who was fond of saying, “You either sell candy (something people are excited about), or medicine (something people have to take but don’t want to).” In the DevOps world we are very much stuck in the massive ingestion of candy, without much consideration of the long-term impacts. But ignoring the medicine aspect results in technical debt.
We most often refer to technical debt as the compounding of application issues that are ignored over time: small bugs dismissed at the beginning of the year that compound into more serious problems by its end. But technical debt is not limited to the application. It also applies to the pipeline itself.
As implementations mature, many organizations start to see debt surface in a very clear way. Components of the pipeline regress from automated to manual, integration between tools becomes a library of scripts, and team members get sucked into the care and feeding of one problematic process instead of the more beneficial strategic aspects of their jobs. But some organizations have already felt the pain, stepped up and paid the tax early on.
What are those things you are not considering that will cost you with interest later?
- No Pipeline Oversight: Without visibility into the pipeline itself, and not just the application within it, the pipeline won’t last long enough to support the speed and robustness of the application it delivers. The same powerful logging tools used for APM and server monitoring can be used to create this oversight. You just need to be deliberate and plan for it.
- Node/Server/Container Sprawl: It’s so easy to spin up infrastructure. And with microservices, infrastructure will expand rapidly. But few organizations can say with confidence that they can point to any instance and know what it is for, what workload it carries, what version of the application it runs, and when it should be de-provisioned. This is server sprawl. It causes issues and poses a risk, but it also hits the bottom line when running instances are no longer used but are still being paid for. Monitoring helps address this, as does building better consistency across the entire stack.
- Wild Wild West Component Adoption and Management: It takes a developer about five minutes to decide to use MongoDB, download it and wire it in. There doesn’t need to be a lot of justification or reasoning. But open source components (OSS/frameworks/databases/packages/artifacts) come with baggage. There are outdated components with known vulnerabilities, and in some cases, as with public containers, code that could be malicious. While many would say this risk is isolated to the developer, that frequently is not true. It’s very easy for a container image or a particular component to be introduced into production based on faith. It’s easy to leverage component monitoring tools to avoid this, and they can slipstream into continuous deployment chains.
- No/Minimal/Poor Functional Quality Assurance: Even though quality is part of the definition of DevOps, teams hesitate to embrace QA beyond the minimal effort needed to make sure things are not broken. QA can be so much more, and everyone can be accountable for it. It should take place regularly and early on in Continuous Integration environments. And it should be a strategy, not just a thing that’s done.
- Minimal or No Performance Testing: Similar to QA, you might think you can monitor server loads over time and be confident you have scaled for peak loads. As long as you keep 30 percent free resources on top of average peak times, you are good, right? Well, maybe. If your application takes off suddenly, as happened with Stance, then you probably are not ready for a 5x burst. And performance testing is not just for the entire application; it also applies to components and individual services.
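The sprawl point above can be made concrete with a small audit step in the pipeline. The sketch below is a minimal, hypothetical example: it assumes a tag schema (`owner`, `purpose`, `app_version`, `expires`) that your organization would define itself, and it operates on instance metadata that in practice would come from your cloud provider’s API. The names and fields here are illustrative, not any vendor’s actual interface.

```python
from datetime import date, datetime

# Hypothetical tag schema -- substitute your organization's own conventions.
REQUIRED_TAGS = {"owner", "purpose", "app_version", "expires"}

def audit_instances(instances, today=None):
    """Flag instances nobody can account for, or that have outlived their purpose.

    Each record is a dict with an 'id' and a 'tags' dict. An instance missing
    any required tag goes in 'untagged'; one past its 'expires' date (YYYY-MM-DD)
    goes in 'expired'.
    """
    today = today or date.today()
    untagged, expired = [], []
    for inst in instances:
        tags = inst.get("tags", {})
        if not REQUIRED_TAGS.issubset(tags):
            untagged.append(inst["id"])
        elif datetime.strptime(tags["expires"], "%Y-%m-%d").date() < today:
            expired.append(inst["id"])
    return untagged, expired

# Illustrative fleet -- in a real pipeline this comes from your cloud inventory.
fleet = [
    {"id": "i-001", "tags": {"owner": "team-a", "purpose": "api",
                             "app_version": "2.3.1", "expires": "2999-01-01"}},
    {"id": "i-002", "tags": {"owner": "team-b"}},  # missing tags: sprawl risk
    {"id": "i-003", "tags": {"owner": "team-a", "purpose": "batch",
                             "app_version": "1.0.0", "expires": "2016-01-01"}},
]

untagged, expired = audit_instances(fleet)
print(untagged)  # prints ['i-002']
print(expired)   # prints ['i-003']
```

Run on a schedule, a check like this turns “we think that instance is still needed” into a deliberate de-provisioning decision, which is exactly the kind of low-cost medicine the bullets above describe.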
What is interesting about DevOps is that the same tools designed to build speed (the candy) also can be used to build in sustainability (the medicine), which means what I’m proposing does not create gates or overhead. So there is no reasonable excuse to avoid it.
When development teams prepare for their rapid release pace, that pace creates some new challenges along the way. They can address those challenges with the same automation they already love, and make sure the delivery chain sustains and doesn’t break apart over time. There might be a tax you have to pay for modern development, but if you pay it on time, it will not compound into DevOps technical debt. And it is still far less than the tax paid in the days of Waterfall development.