How Growing Data Center Energy Consumption Has Changed Cooling Efficiency

by Bill Kleyman | Oct 19, 2016 | Blog

Data centers have grown from virtually nothing 10-15 years ago to consuming about 3% of the global electricity supply and accounting for about 2% of total greenhouse gas emissions. That gives them roughly the same carbon footprint as the airline industry. As more organizations move their environments into the data center, server cooling efficiency and data center management have become extremely important for multiple reasons. Not only are data center administrators working hard to cut costs; they're also working to minimize management overhead and improve infrastructure agility.

Consider this: a recent Data Center Knowledge article pointed out that demand for data center capacity in the US has grown tremendously over the last five years:

US data centers consumed about 70 billion kilowatt-hours of electricity in 2014, the most recent year examined, representing 2 percent of the country’s total energy consumption, according to the study. That’s equivalent to the amount consumed by about 6.4 million average American homes that year. This is a 4 percent increase in total data center energy consumption from 2010 to 2014, and a huge change from the preceding five years, during which total US data center energy consumption grew by 24 percent, and an even bigger change from the first half of last decade, when their energy consumption grew nearly 90 percent.

Efficiency improvements have played an enormous role in taming the growth rate of the data center industry's energy consumption. Had data centers stayed at 2010 efficiency levels, they would have consumed close to 40 billion kWh more than they did in 2014 to do the same amount of work, according to the study, which was conducted by the US Department of Energy in collaboration with researchers from Stanford University, Northwestern University, and Carnegie Mellon University.
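
As a quick back-of-the-envelope check, the figures above hold together. This is a rough sketch, assuming about 10,900 kWh per average American home per year (roughly the EIA residential average for that period, and an assumption here rather than a figure taken from the study):

```python
# Rough sanity check of the consumption figures quoted above.
# The per-home figure (~10,900 kWh/year) is an assumption, approximately
# the average US household consumption around 2014.

US_DC_CONSUMPTION_KWH_2014 = 70e9   # US data center consumption, 2014
AVG_HOME_KWH_PER_YEAR = 10_900      # assumed average US home consumption
EFFICIENCY_SAVINGS_KWH = 40e9       # extra kWh implied by 2010 efficiency levels

homes_equivalent = US_DC_CONSUMPTION_KWH_2014 / AVG_HOME_KWH_PER_YEAR
counterfactual_2014 = US_DC_CONSUMPTION_KWH_2014 + EFFICIENCY_SAVINGS_KWH

print(f"Equivalent homes: {homes_equivalent / 1e6:.1f} million")            # ~6.4 million
print(f"At 2010 efficiency: {counterfactual_2014 / 1e9:.0f} billion kWh")   # ~110
```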

Is it Cool Enough for Your Data Center?

Many data center operators have created a science out of maximizing server utilization and data center efficiency, contributing in a big way to the slow-down of the industry's overall energy use. Today, data center providers are making investments in improvements that increase the efficiency of their facilities infrastructure, as well as the power and data center cooling equipment that supports their clients' IT gear.

And it's not just server cooling: providers are deploying next-generation cooling systems designed to increase efficiency, reduce costs, and improve overall uptime. New liquid cooling platforms and systems are being designed for advanced processing and high-density computing.

With the growth of cloud come new requirements around rack and even server density, placing even more demands on advanced cooling and data center management systems. A 2015 NRDC report indicates that data center electricity consumption is projected to increase to roughly 140 billion kilowatt-hours annually by 2020. This is the equivalent annual output of 50 power plants, costing U.S. businesses $13 billion annually in electricity bills. Furthermore, in recent Green Grid research into European data center usage, energy efficiency and operating costs were the most common areas of the data center reported as requiring improvement. Difficulty in predicting future costs (43 percent) and the cost of refreshing hardware (37 percent) were cited as top challenges of developing resource-efficient data centers, along with difficulty meeting environmental targets (33 percent).

The latest AFCOM State of the Data Center report showed that 70% of respondents indicated that power density (per rack) has increased over the past three years, and 26% indicated that the increase was significant.

Cooling must be a big consideration in the digital-era data center. Data centers are increasing density, and cooling is critical to keeping operations running efficiently. As the AFCOM State of the Data Center report discussed, more than 58% of respondents indicated that they currently run, and will continue to run, at least N+1 redundant cooling systems, while 18% will operate an N+2 cooling redundancy architecture both today and three years from now.
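
For readers less familiar with the notation: N is the number of cooling units needed to carry the design heat load on their own, N+1 adds one spare unit, and N+2 adds two, so a room can ride through a unit failure or concurrent maintenance. A minimal sizing sketch, using hypothetical load and unit-capacity numbers, might look like this:

```python
import math

def cooling_units_required(it_load_kw: float, unit_capacity_kw: float,
                           redundancy: int = 1) -> int:
    """Number of cooling units for an N+redundancy design.

    N is the minimum count of units able to absorb the full heat load;
    'redundancy' spare units are added on top. All figures used below
    are hypothetical.
    """
    n = math.ceil(it_load_kw / unit_capacity_kw)
    return n + redundancy

# Hypothetical room: 600 kW of IT load served by 100 kW CRAC units.
print(cooling_units_required(600, 100, redundancy=1))  # N+1 -> 7 units
print(cooling_units_required(600, 100, redundancy=2))  # N+2 -> 8 units
```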

Cooling as a Science

We're seeing organizations evolve their cooling strategies to align with quickly changing business needs. The days of a static data center environment are over. Organizations must create agile cooling ecosystems capable of keeping up with business demands. A great way to approach this is to look at cooling as a science. One way to accomplish this is by conducting regular cooling assessments of your environment. However, this doesn't just mean asking your data center admins what they need. Your approach must involve business leaders and stakeholders. A good cooling assessment session would include:

  • Cooling Capacity Factor (CCF) Reviews, Forecasts, and Analysis (see the sketch after this list)
  • Stranded Capacity Analysis and Optimization Recommendations
  • Airflow Management (AFM) Review
  • Data Center ‘Best Practices’ Review and Scale Capabilities Review
  • Thermographic Analysis

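The CCF review in particular lends itself to a simple calculation. The sketch below assumes a commonly used formulation of CCF, the rated capacity of the running cooling units divided by 110% of the IT critical load, with the 10% uplift standing in for non-IT heat sources such as lighting and people; treat the uplift, the interpretation, and the example numbers as illustrative assumptions.

```python
def cooling_capacity_factor(running_cooling_kw: float, it_load_kw: float,
                            uplift: float = 1.10) -> float:
    """Estimate the Cooling Capacity Factor (CCF).

    Assumes CCF = rated capacity of running cooling units / (IT critical
    load * uplift), where the 10% uplift approximates non-IT heat sources.
    Both the uplift and the example numbers below are assumptions.
    """
    return running_cooling_kw / (it_load_kw * uplift)

# Hypothetical room: 1,000 kW of running cooling capacity, 400 kW IT load.
ccf = cooling_capacity_factor(1000, 400)
print(f"CCF: {ccf:.2f}")  # ~2.27, far more cooling than the load appears to need
```

A CCF far above what redundancy alone would explain generally points to stranded or poorly utilized cooling capacity, which is exactly what the stranded capacity and airflow management items in the list are meant to uncover.
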
The goal here is to find problems that could potentially impact your business. Moreover, these types of analyses also look for ways to create cooling efficiencies that better support technology initiatives. In general, approaching data center cooling design from a holistic perspective will create a more agile environment. Most of all, you'll allow your servers to breathe easier and support new business goals.

Bill Kleyman

CTO, MTM Technologies
