Growing Pains for Containers: Data Centers Call for Better Management

Containers have taken the tech community by storm with their ability to drive agile and effective innovation. Compared to the rigid nature of virtual machines, it’s no wonder developers and IT teams have flocked to containers and their flexibility. By virtualizing at the operating-system level rather than emulating hardware, containers give teams a lightweight, portable way to package and move code.

As container-driven operations continue to grow, the industry’s excitement for innovation is overshadowing a problem growing at the same scale: over-containerization. Whereas virtual machines traditionally provide a predictable understanding of hardware performance, containers have little visibility into the hardware beneath their operating system. As a result, containers on the same host are often left to compete for its resources. In these instances, data centers distribute energy inefficiently as they experience drastic power pulls that drain both energy and dollars.

While this might sound like another bout of industry growing pains, without a proper response it can cost companies millions of dollars lost to inefficiency. Just as teams are investing in the latest orchestration features to manage their virtual environments, it’s time to consider technology’s hardware roots and monitor containers’ effect on data center functionality. Data center management tools provide a straightforward solution to this problem by monitoring power usage, gauging utilization across data centers and presenting the data needed to build a winning strategy.

Monitoring Power

Data center management tools provide granular insight into power consumption, with options to monitor per server, rack, workload and application. Given the hyperfocused, often greedy nature of containers, power monitoring allows data center managers to identify how much energy containers need and how to best allocate it across multiple containers. This gives managers the information they need to create a centralized management policy for server energy consumption and avoid power pulls from hungry containers in the future.
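The kind of per-server power monitoring described above can be sketched in a few lines. This is a minimal illustration using hypothetical telemetry samples (server names, wattage readings and the 300 W threshold are all invented for the example), not a real DCIM API:

```python
# Flag servers whose average power draw suggests a container-driven spike.
# All readings here are hypothetical samples in watts.
from statistics import mean

# Hypothetical telemetry: server name -> recent power draw samples (watts)
samples = {
    "rack1-srv01": [180, 195, 410, 420],   # spiking under container load
    "rack1-srv02": [150, 155, 148, 152],
    "rack2-srv01": [200, 210, 205, 198],
}

def flag_power_spikes(samples, threshold_watts=300):
    """Return servers whose average draw exceeds the threshold."""
    return sorted(
        name for name, watts in samples.items()
        if mean(watts) > threshold_watts
    )

print(flag_power_spikes(samples))  # ['rack1-srv01']
```

In practice the readings would come from a management tool’s API rather than a hard-coded dictionary, but the aggregation-and-threshold pattern is the same.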

Gauging Utilization

While the industry’s initial concern is containers’ tendency to pull power away from the rest of an environment, there is an opportunity within this dynamic. Data center management tools offer visibility into uptime and cross-platform consumption levels, which presents a chance to identify underutilized servers that could support more containers. Given that containers are far easier to migrate than virtual machines, server migration is likely to occur more often, and data center managers will need to provide a recommendation quickly. With enough insight into long-term, low-utilization trends, teams will be able to consolidate and balance workloads across existing devices.
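Spotting consolidation candidates from long-term utilization data reduces to a simple filter. The sketch below uses hypothetical servers and utilization percentages (the 20% ceiling is an invented planning threshold, not an industry standard):

```python
# Identify servers whose long-term utilization is low enough that they
# could absorb migrated containers. All figures are hypothetical.
from statistics import mean

# Hypothetical long-term average CPU utilization samples (percent) per server
utilization = {
    "rack1-srv01": [78, 82, 75],
    "rack1-srv02": [12, 9, 14],
    "rack2-srv01": [18, 11, 15],
}

def consolidation_candidates(utilization, ceiling_pct=20):
    """Servers averaging under the ceiling are candidates for more work."""
    return sorted(
        name for name, pct in utilization.items()
        if mean(pct) < ceiling_pct
    )

print(consolidation_candidates(utilization))  # ['rack1-srv02', 'rack2-srv01']
```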

Adjusting the Data Center Strategy

Data center management solutions solve an immediate power problem, but what are they doing to keep up with the ever-evolving enterprise space?

Some data center management tools offer features that visualize power consumption and predictive modeling for efficient planning. With these insights, not only will IT teams be able to give power to containers across their environment, but they also will be able to save resources and lower operational costs for years to come. This presents a win-win situation for developers and managers, in which containers are given the energy to power innovation and data centers are operating at maximum efficiency.
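One simple form the predictive modeling mentioned above can take is a linear trend fit over historical power consumption, projected forward for capacity planning. The monthly figures below are hypothetical, and real tools use far richer models; this only illustrates the idea:

```python
# Fit a least-squares linear trend to hypothetical monthly power totals
# (kWh) and project the next quarter for capacity planning.
def linear_trend(history):
    """Return (slope, intercept) for evenly spaced samples."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return slope, y_mean - slope * x_mean

monthly_kwh = [900, 940, 1010, 1050, 1120, 1180]  # hypothetical history
slope, intercept = linear_trend(monthly_kwh)
projection = [slope * x + intercept for x in range(6, 9)]  # next 3 months
print([round(p) for p in projection])  # [1231, 1288, 1344]
```

A steadily rising projection like this one is the kind of signal that would prompt a manager to consolidate workloads or plan additional capacity before the power budget is exceeded.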

As we look to the future of containers and data center management, it’s in the best interest of enterprise companies to invest in their hardware just as they have invested in containers. As the industry continues to focus on virtualization, a solid, foundational understanding of data center performance and capacity will be key to long-term success.

George Clement

George Clement is an Intel data center manager and software application engineer. As an Intel veteran, George’s specialty has always been the data center—from IT and commercial implementations to efficient utilization and capacity management to business continuity processes. He currently works on the company’s data center manager team, which focuses on extracting power and thermal data from servers and supporting infrastructure so that data center managers can assess real-time utilization from a single console or API. George earned a Master’s degree in Information Systems from Strayer University, Washington D.C.
