The IT industry has historically been a leader in the ongoing race to reduce CO2 emissions. Reducing emissions is good for the environment and for the corporate bottom line. The data center is well-known as a voracious energy consumer. Yet even in the face of remarkable growth in data center capacity and performance, energy consumption has increased only a few percentage points since the 2010s. This almost-flat rate of energy consumption is primarily the result of the shift to cloud computing, along with the shift to solar and wind energy where it makes sense.
Investments in renewable energy sources have helped tamp down criticism of IT's overall carbon footprint. Because it takes a tremendous amount of energy to power a data center, owners and operators are always looking for ways to improve efficiency and reduce carbon emissions. Today, there are newer and better ways for organizations to deliver the performance modern applications demand while achieving the energy efficiency and carbon reduction the world requires.
Minimizing Energy Use and CO2 Production
How did we get here? Decades of semiconductor process miniaturization, tracking Moore's Law, have delivered ongoing improvements in performance, storage capacity and device density. Today's mobile phones are a great example of that progress. A secondary factor in the mobile phone revolution is a processor architecture optimized for efficiency and flexibility. This ARM architecture has powered almost all mobile devices since the first iPhone. Over the years, ARM has also become a realistic platform for servers, often a better choice than an x86 system.
Energy efficiency is one area where ARM systems win out over conventional server architectures. But it was not until recently that ARM processors could deliver the performance the data center requires. Today's systems, based on Armv8 designs, deliver x86-class capabilities in a more energy-efficient way. The advent of the ARM-based AWS Graviton processors, which by some estimates have already captured 20% of the instance share on AWS, demonstrates that ARM can run production workloads while delivering what AWS calls "up to 40% better price performance over comparable current-generation x86-based instances for a wide variety of workloads."
To illustrate the impact on carbon production when a data center shifts from x86-based servers to ARM servers, an analysis of the top elements contributing to data center energy use was undertaken. The energy savings were calculated and translated into reduced CO2 emissions based on the fuel mix used to power the electric grid in various locations. The results show that an ARM-powered data center reduces CO2 production by 74%, equivalent to almost half a million barrels of oil per year.
Power Consumption Reduced by Efficient Servers
The premise is simple—innovative server design significantly reduces data center energy use. In this model, a server’s power draw and physical size are key factors. The energy needed for other data center subsystems was extrapolated by using ratios derived from experience and validated by estimates from other data center operators.
Data center energy use can be divided into five categories of equipment and infrastructure: servers; network communications; power supply, conversion and UPS; cooling systems; and storage. Servers and cooling comprise about 75% of the total.
The calculations were based on the energy used by a medium-sized data center with 750 racks of conventional 1U servers (42 servers per rack; 31,500 in total). The model assumes that servers occupy two-thirds of the equipment floor space and that equipment space accounts for 65% of the total, translating to a 62,000-square-foot data center. For comparison, facilities operated by the major cloud vendors can be a million square feet or more. Given server power consumption and the ratios for the equipment categories, simple algebra yields the energy consumption of each category.
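The sizing assumptions can be checked with a little arithmetic. The sketch below works backwards from the stated 62,000-square-foot facility; the per-rack footprint it derives is an inference from the article's figures, not a number given in the text:

```python
racks = 750
servers_per_rack = 42
servers = racks * servers_per_rack        # 31,500 servers in total

total_sqft = 62_000                       # stated facility size
equipment_sqft = 0.65 * total_sqft        # equipment space is 65% of the total
server_sqft = (2 / 3) * equipment_sqft    # servers take two-thirds of that
sqft_per_rack = server_sqft / racks       # ~36 sq ft per rack, aisles included
```

At roughly 36 square feet per rack, including its share of aisle space, the assumptions hang together for a conventional raised-floor layout.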
The data center in this hypothetical example has, as noted, 31,500 servers. Using the 912.5W power draw of a Dell R640 as typical of a 1U x86 server, total server energy use in the x86 data center would be 251,795 MWh/year. From this baseline, the energy saved by switching to ARM servers can be calculated. The model assumes the ARM installation would require half the equipment racks (375 instead of 750) and 52% of the power per system, yielding a data center that uses only about a quarter as much electricity (26%, or 152,409 MWh/year) as one based on x86 systems.
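The energy math can be reproduced in a few lines. This is a minimal sketch using only the figures stated above; the one added constant is 8,760 hours per year:

```python
servers = 31_500
x86_watts = 912.5           # Dell R640 draw, taken as typical for a 1U x86 server
hours_per_year = 8_760

x86_server_mwh = servers * x86_watts * hours_per_year / 1e6   # W·h -> MWh
# ~251,795 MWh/year for the x86 servers alone

# ARM scenario: half the racks, each system drawing 52% of the power
arm_fraction = 0.5 * 0.52   # = 0.26, i.e. ~26% of the electricity
```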
The math shows that switching one medium-sized data center to ARM servers can produce annual savings equivalent to 45,459 fewer cars on the road or 486,749 fewer barrels of oil used.
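As a sanity check, the two equivalency figures can be compared against each other. The conversion factors below are EPA-style equivalencies (roughly 0.43 metric tons of CO2 per barrel of oil and 4.6 metric tons per passenger car per year); they are assumptions for illustration, not numbers from the analysis:

```python
T_CO2_PER_BARREL = 0.43    # assumed: metric tons CO2 per barrel of oil
T_CO2_PER_CAR_YEAR = 4.6   # assumed: metric tons CO2 per car per year

barrels_saved = 486_749
cars_equivalent = 45_459

co2_from_barrels = barrels_saved * T_CO2_PER_BARREL   # ~209,300 t CO2/year
co2_from_cars = cars_equivalent * T_CO2_PER_CAR_YEAR  # ~209,100 t CO2/year
```

The two routes land within about 0.1% of each other, at roughly 209,000 metric tons of CO2 per year, so the article's equivalents are internally consistent.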
The value of CO2 reduction in preserving the environment is widely understood. It is also advantageous from an economic standpoint. In fact, there are many technical and economic reasons to adopt energy-efficient ARM-powered servers in the next-generation cloud data center.
To hear more about cloud-native topics, join the Cloud Native Computing Foundation and the cloud-native community at KubeCon+CloudNativeCon North America 2021, October 11-15, 2021.