
The Role of Containers in Business Transformation

Containers hold new promise for business and digital transformation. Yet many differences exist between monitoring containers and monitoring other elements of infrastructure, such as VMs, storage, memory and compute. Understanding those differences is vital for ITOps managers as they navigate their businesses’ digital transformation.

Interestingly, the differences are as much technical as they are organizational. Companies need to explore them and consider questions such as: How do we ensure that a container-based project performs once it goes into production? What changes must we make to infrastructure to ensure there is enough compute and storage to support the containerized applications the development team is going to launch?

The essence of container management is being prepared for the changes coming from application development, which means understanding resources and infrastructure as they are today in order to estimate the effect of those changes tomorrow.

Here are four differences ITOps managers need to consider with container management as they move their digital transformation journeys forward.

Containers Versus VMs

The signature feature of containers is that they can hold and run an entire application in isolation. VMs offer operating system-level isolation too, but at the cost of a much larger footprint; to begin with, each one carries its own copy of an entire operating system. VMs are also far less portable than containers, especially when moving among private clouds, public clouds and data centers. As a step toward digital business transformation, most companies are looking for DevOps tools and for ways to deploy their applications to the cloud, whether for QA or production. Organizations rolling out new applications are also gravitating to a container-based architecture from the start.

Advantages of adopting containers include:

  • Agility: Instantiating a new container can be as much as 20 times faster than instantiating a new VM. Startup is measured in seconds rather than minutes, and once deployed, containers scale up and down rapidly in response to load.
  • Density: Most containers are megabytes in size rather than gigabytes. With their smaller footprint, containers allow for as much as five times greater density than VMs.
  • Cloud-readiness: Containers can be deployed to any cloud (private, public, virtual or physical), which opens up the possibility of running the same containerized application in hybrid and multi-cloud environments.
  • Blast radius: As software developers incorporate and release changes more quickly, they look for ways to limit the potential damage from bugs. A problem in a container is less likely to cause widespread damage, and is easier to isolate and repair, than a problem in an entire VM or physical server.
  • Cost savings: Every VM requires a hypervisor and an operating system (OS), which can involve licensing fees. Containers run side by side, sharing the OS kernel of a single physical machine while holding isolated copies of the specific library versions they depend on. Architecturally, VMs tend to contain more components and require a greater allocation of resources; containers tend to hold a specific component or a single microservice, minimizing resource overhead.
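To make the density argument concrete, here is a back-of-the-envelope sketch. The per-instance footprints are illustrative assumptions, not measurements; real numbers depend entirely on the guest OS, the application and its libraries:

```python
# Back-of-the-envelope density comparison between VMs and containers.
# All footprints below are illustrative assumptions, not benchmarks.

HOST_MEMORY_MB = 65536        # a 64 GB host

VM_FOOTPRINT_MB = 4096        # guest OS + app per VM (assumed)
CONTAINER_FOOTPRINT_MB = 512  # app + libraries per container (assumed;
                              # containers share the host OS kernel)

vms_per_host = HOST_MEMORY_MB // VM_FOOTPRINT_MB
containers_per_host = HOST_MEMORY_MB // CONTAINER_FOOTPRINT_MB

print(f"VMs per host:        {vms_per_host}")         # 16
print(f"Containers per host: {containers_per_host}")  # 128
print(f"Density advantage:   {containers_per_host // vms_per_host}x")
```

With these assumed footprints the advantage comes out at 8x; the five-times figure cited above is in the same ballpark, and the exact ratio depends on the workloads involved.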

Containers, DevOps and Real-time Monitoring

A company adopting DevOps to speed up deployment of its software products automates as many development processes as possible and relies heavily on automation scripts to move code along, from continuous integration through continuous delivery and continuous deployment (CI/CD/CD) to production. Along the way, multiple developers check in new changes. Automated processes, such as health checks and unit tests, run, and then the build is deployed to the next environment, such as integration or QA. With each set of changes, ITOps needs answers to several questions:

  • How will this affect performance?
  • What will be the impact of the changes when we deploy?
  • Do we have enough compute, storage, memory and networking to support the changes?
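One way to answer the capacity question before a deployment is a simple pre-flight check. The sketch below is a minimal, hypothetical example: the `Resources` type, the numbers and the 20% safety headroom are all assumptions for illustration, not any real scheduler's API.

```python
# A minimal, hypothetical capacity check: given the resources a planned set of
# container deployments will request, verify the cluster can absorb them.
# All names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Resources:
    cpu_cores: float
    memory_gb: float
    storage_gb: float

def can_absorb(free: Resources, requested: list[Resources],
               headroom: float = 0.2) -> bool:
    """Return True if the requested deployments fit within free capacity,
    keeping a safety headroom (default 20%) for load spikes."""
    need_cpu = sum(r.cpu_cores for r in requested)
    need_mem = sum(r.memory_gb for r in requested)
    need_sto = sum(r.storage_gb for r in requested)
    budget = 1.0 - headroom
    return (need_cpu <= free.cpu_cores * budget
            and need_mem <= free.memory_gb * budget
            and need_sto <= free.storage_gb * budget)

free = Resources(cpu_cores=16, memory_gb=64, storage_gb=500)
planned = [Resources(0.5, 1.0, 5.0) for _ in range(10)]  # ten small services
print(can_absorb(free, planned))  # True: fits comfortably within headroom
```

A real environment would pull the "free" numbers from monitoring data rather than constants, which is exactly the visibility the next section argues for.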

In a traditional environment, developers have plenty of time to answer those questions. But if they’re deploying modern, containerized, cloud-based applications, that window tends to be much smaller, which can lead to the kind of guesswork that makes ITOps uneasy. That’s where container monitoring comes in. When ITOps can see real-time and historical analytics for containers and their hosts, across physical, virtual and cloud environments, they can create performance benchmarks that help them make informed choices about infrastructure.
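As a minimal illustration of how such benchmarks might work, the sketch below treats past per-container CPU samples (made-up data, in a `docker stats`-style percentage form) as a historical baseline and flags the latest reading when it deviates sharply:

```python
# A sketch of turning raw container stats samples into a performance baseline.
# The sample data, field names and thresholds are assumptions for illustration.

from statistics import mean, stdev

# (container name -> cpu_percent samples over time) — illustrative data.
samples = {
    "web-frontend": [12.1, 11.8, 13.0, 12.5, 12.2, 30.4],
    "payments-svc": [5.2, 5.0, 5.5, 5.1, 5.3, 5.2],
}

def baseline(history):
    """Historical benchmark: mean and standard deviation of past samples."""
    return mean(history), stdev(history)

def is_anomalous(history, latest, n_sigmas=3.0):
    """Flag the latest reading if it deviates more than n_sigmas from baseline."""
    mu, sigma = baseline(history)
    return abs(latest - mu) > n_sigmas * max(sigma, 0.1)  # floor damps noise

for name, series in samples.items():
    *history, latest = series
    flag = "ANOMALY" if is_anomalous(history, latest) else "ok"
    print(f"{name}: latest={latest:.1f}% {flag}")
```

In practice the samples would stream in from an agent or an API such as `docker stats`, and the baseline would feed the capacity questions listed above instead of printing to a console.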

Microservices Get Added into the Mix

Microservices are loosely coupled, independent services, each with its own release schedule and lifecycle. Developers can make small changes to individual elements of these applications, which mitigates the risk of negative business impact by reducing the blast radius around those changes. If a single change fails, there is less to repair and less impact on surrounding, dependent services. We’re seeing developers move away from packaging a single application per container toward packaging a single service of an application per container. This allows them to make changes more frequently and keep up with business demands.

Multiple Silos

The goal of DevOps is to bring application development and traditional ITOps closer together. However, container monitoring and management changes things. Traditionally, the areas that require monitoring, such as storage, network bandwidth, VMs and physical hosts, have all sat in the ITOps silo, an area developers don’t typically own. Developers write code and build software in the application development silo and then hand the software off to the ITOps silo to run. Things have changed, though: now developers aren’t only writing code, but also owning and using containers in the application development silo. Once the software and its containers leave that silo, their consumption of resources like compute, storage, network bandwidth and memory also affects the ITOps silo. That’s why monitoring and management are as necessary for containers as they are for storage, VMs and physical hosts.

While container technology may be new for the traditional enterprise, the impact it will have on business transformation cannot be ignored. Containerization brings increased simplicity, consistency and portability to production environments, enabling faster IT delivery and increased performance, arming businesses with a new agility and allowing them to rapidly respond to changing customer needs. In fact, according to 451 Research, application containers will be a $2.7bn market by 2020.

The businesses that embrace this opportunity will be taking a vital step towards transforming into bustling digital enterprises that can deliver innovative products and top-notch customer experiences. To learn more about containerized infrastructure and cloud native technologies, consider coming to KubeCon + CloudNativeCon NA, November 18-21 in San Diego.

Yinghua Qin
