
Containers: All Your Base are Belong to Us

The world is still changing. No surprise there, but the direction and velocity have been modified by the growth of container usage. Back in 2015, we at DevOps.com showcased a survey we performed that said container usage was poised to take off. That it did, with a vengeance. No matter the category—pilot programs, test, production—container usage penetrates deeper each year.

And why not? There are a whole collection of benefits to containers—smaller footprint than VMs, more portable than cloud instances, more agile than … well, any of the alternatives. And the weaknesses—the use (and abuse) of IPTables and persistent problems with persistent storage—are being addressed at a rapid clip.

But It’s Not All Roses

Sorry to say that there is, as always, a fly in the ointment. We have to identify weaknesses in new architectures to plan for them and overcome them. And container infrastructure is not immune to this process. Nothing is perfect.

The first problem is overall security. Your container management system is now the center of the world: a single point of penetration that could yield dozens to thousands of compromised servers. That makes it a magnet for ne’er-do-wells. Protect it, lock it down, and follow the advice on Container Journal and Security Boulevard about keeping your system as protected as possible.

That was the easy one. We have a ton of experience protecting important central systems.

The more difficult problems all revolve around your physical infrastructure and where it meets your virtual container infrastructure. Look at all the boxes in your data center and the services they provide, then map out how those services integrate with your containers. Once you’ve mapped them, work the mappings into your DevOps projects. Load balancing and storage provisioning are two areas where what a traditional data center does is duplicated inside the container environment, and where that processing occurs should be a conscious choice about what is best for the application, not a decision based on whatever is easiest or closest at hand.

It is an (arguably) bigger issue with storage. Just because a container can grow or shrink its internal storage does not make it the right place to keep a large portion of your data. Think about what will be needed in the future—with or without the application that this container or container set represents—and make a conscious choice about which data is ephemeral and can go away should an instance die versus which data is needed long-term. Work those choices into your DevOps processes. It can be a pain to access persistent data that is remote to a container, so automate the painful parts via your DevOps processes.
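One way to make that ephemeral-versus-persistent choice explicit is to encode it in the deployment definition itself. Here is a minimal Docker Compose sketch, with hypothetical service, image, and path names, that keeps long-lived data on a named volume while cache data lives and dies with the instance:

```yaml
# Sketch only: names and paths are placeholders, not from the article.
services:
  app:
    image: myorg/myapp:latest
    volumes:
      - app-data:/var/lib/app/data   # persistent: survives container removal
    tmpfs:
      - /var/lib/app/cache           # ephemeral: discarded when the instance dies

volumes:
  app-data:
    driver: local                    # swap the driver to reach remote/shared storage
```

Because the split is declared in the file, the DevOps toolchain can apply it automatically on every spin-up rather than relying on someone remembering which data matters.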

Both of these problems are compounded in most (but not all) cloud environments. Too much cloud functionality is designed to lock users in. While lock-in has been standard practice in IT for decades, the problem with containers is that they are inherently portable—build them to stay that way. Yes, lock-in functionality always brings benefits, but mobility lets a team choose where best to deploy a container system at each inflection point (purposely being vague here—a highly agile DevOps shop could have daily inflection points, while a single team just piloting containers in its DevOps environment may hit one only when something large changes in the system). Should something change at a preferred cloud vendor that makes it no longer preferred, the fewer modifications required to move to another platform, the better. So building default container and container-management tooling into the DevOps toolchain is preferable to building in vendor-specific options when a choice is available. Storage is an easy one to point at and say “choice might not be available.” Allocating disk is not very portable between platforms, but if that’s one of only a handful of things that must change, portability is still an option.
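The disk-allocation caveat can be contained in practice by isolating the vendor-specific setting to a single line. As a sketch—assuming a Kubernetes environment, with placeholder names and sizes—a PersistentVolumeClaim can keep everything portable except the storage class:

```yaml
# Sketch only: a hypothetical claim where storageClassName is the one
# platform-specific value; the rest moves between clusters unchanged.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3   # the one line that changes per platform (AWS-style here)
```

When the non-portable parts are confined to a handful of known lines like this, switching platforms becomes a small, auditable diff instead of a rewrite.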

In short, containers are coming, and they’re coming fast. Build them into your DevOps toolchain as portable solutions that automate the worst of the speed bumps. Make conscious choices about where infrastructure-based functionality should reside, and implement based upon those choices.

And keep rocking it. If you are moving from almost any other environment (with the arguable exception of cloud), containers will be faster to spin up and spin down. Just address the issues before they hit you, and all will be well.

Don Macvittie

20 year veteran leading a new technology consulting firm focused on the dev side of DevOps, Cloud, Security, and Application Development.
