The evolution of platforms is an interesting study. From physical to virtual to cloud to containers, the progression has been steady, and each successive platform offers benefits for both Dev and Ops that make it appealing. In the case of containers, portability is the word.
As easy to fold into your DevOps toolchain as cloud, yet able to be deployed wherever you need, containers have absolutely taken the industry by storm. While all cloud vendors offer container options these days, their “value add” capabilities are thinly veiled attempts at vendor lock-in, so most organizations have avoided those add-ons.
Development languages are as diverse as they are for physical or cloud; you can pretty much find containers hosting every language imaginable. That makes perfect sense, since one use of containers is to keep an otherwise-obsolete environment alive. That’s certainly not the primary use of containers, but it is an important one. The catch with containers is that the primary languages depend greatly on usage. A container that is persistent and intended to last months or even years will tend toward a different set of languages than one that is ephemeral and intended to last only minutes.
The big weaknesses of containers – security, persistent storage, and management – have been addressed more and more as container usage has grown. Complexity of environments is now the big issue, but that is not inherent to containers; it is merely enabled by the ease with which new containers can be spun up.
For longer-term containers, Java and C# are still the languages of choice, though others are certainly making headway. The combination of how new container technology is and the extra layer between applications and hardware in the current popular container solutions makes C/C++ less popular than on physical machines. They are still used, though, because like other targets, containers expose an operating system that these languages link to easily.
Hooking into DevOps toolchains is at least as easy as it is in a cloud environment. There are even a fair number of companies that have moved to spinning up a container to build in, destroying it afterward, and doing it all again on the next build. This is not mainstream yet, but the ability to completely configure the container and its loaded toolset makes it an appealing option.
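The spin-up, build, tear-down cycle described above can be sketched with a throwaway build container. This is a minimal illustration, not a prescription: the Maven image tag and the assumption of a Maven project in the current directory are both hypothetical, so adapt image and build command to your own toolchain.

```shell
# Hypothetical sketch: run the build inside a disposable container.
# Assumes a Maven project in the current directory; swap the image
# and command for your own build tool.
docker run --rm \
  -v "$(pwd)":/src -w /src \
  maven:3-eclipse-temurin-17 \
  mvn package
# --rm removes the container the moment the build exits, so every
# build starts from a fresh, fully configured environment.
```

Because the entire toolset lives in the image, upgrading the build environment is just a matter of changing the image tag.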
The kicker – one that cannot be overstated – is that it is simple and easy to get developers running with a copy of a complex system that once would have required several physical boxes. I, personally, have worked on a client system of seven-plus servers, a complete copy of which ran on my (admittedly beefy) development laptop. It wasn’t the most performant, but it was enough for unit testing.
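A whole-system copy like that is typically wired together with a Compose file. The sketch below is purely illustrative – the service names, images, and the local `./api` build context are all assumptions, not a real system – but it shows how a multi-server topology collapses onto one laptop.

```yaml
# Hypothetical docker-compose.yml: a multi-server system on one machine.
# Service names and images are illustrative only.
services:
  web:
    image: nginx:alpine
    ports: ["8080:80"]
    depends_on: [api]
  api:
    build: ./api            # assumes a local Dockerfile for the app tier
    depends_on: [db, cache]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
  cache:
    image: redis:7
```

A single `docker compose up` starts the stack, and `docker compose down` removes it – which is what makes carrying a copy of production topology on a dev laptop practical at all.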
The security weaknesses of cloud come along with containers too – there is a lot more to directing traffic than just configuring an IP, for example – but mitigating tools and processes are available.
This pundit truly believes that containers are the future of DevOps. They have the best of previous iterations, while offering mobility that just didn’t exist in other solutions. The ability to move an entire infrastructure from one deployment target to another (data center to cloud, for example), or from one cloud vendor to another makes this option the king for organizations that require (for business objectives, compliance or finance reasons) the ability to move their applications quickly.
Meanwhile, rock on.
Containers are the cutting edge of wherever it is we are headed. They are more agile than any other solution, they do not limit programming or networking options, they can be run wherever the organization needs them and they can be modified or redeployed easily. There is more complexity, but most of it has been abstracted away; only when things go terribly wrong is the complexity really an issue. Development for containers, with a few exceptions, is development for the target OS. That makes it easy to keep the organization’s systems preferences while enabling developers. All good. And many (I daresay most) of you are already taking advantage of them. Keep rocking it. Use what the org needs, and don’t forget to check every once in a while to make certain what is in use is what needs to be in use.