The Science of Keeping a Data Center Cool

by Bill Kleyman | Apr 26, 2017 | Blog

Your data center continues to change and evolve with current market trends and demands. Throughout this evolution, keeping the facility cool and operating efficiently remains a constant priority. Cooling must be a major consideration in the digital-era data center: as densities increase, cooling is critical to keeping operations running efficiently. In the AFCOM State of the Data Center report, more than 58% of data centers indicated that they currently run, and will continue to run, at least an N+1 redundant cooling architecture. Both today and three years from now, 18% will operate an N+2 cooling redundancy architecture.
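To make the N+1 and N+2 terminology concrete, here is a minimal sketch of how redundant cooling-unit counts work out: N units sized to carry the load, plus R spares. The room load and unit capacity below are purely illustrative assumptions, not figures from the report.

```python
import math

def required_units(load_kw, unit_capacity_kw, redundancy=1):
    """Cooling units needed for an N+R architecture:
    N units to meet the load, plus R redundant spares."""
    n = math.ceil(load_kw / unit_capacity_kw)
    return n + redundancy

# Hypothetical example: a 300 kW room served by 100 kW cooling units.
print(required_units(300, 100, redundancy=1))  # N+1 -> 4 units
print(required_units(300, 100, redundancy=2))  # N+2 -> 5 units
```

The extra units mean any single (or double) unit failure, or scheduled maintenance, does not leave the room under-cooled.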

Consider this: a 2015 NRDC report projects that data center electricity consumption will increase to roughly 140 billion kilowatt-hours annually by 2020. That is the equivalent annual output of 50 power plants, and it would cost U.S. businesses $13 billion annually in electricity bills. Furthermore, recent Green Grid research into European data center usage found that energy efficiency and operating costs are the areas of the data center most commonly reported as requiring improvement.
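A quick back-of-the-envelope check shows what those NRDC figures imply per kilowatt-hour and per plant (the derived rates below are my arithmetic, not numbers from the report):

```python
# Figures cited from the 2015 NRDC projection.
consumption_kwh = 140e9   # projected annual consumption by 2020
annual_cost_usd = 13e9    # projected annual U.S. electricity bill

# Implied average electricity rate.
implied_rate = annual_cost_usd / consumption_kwh
print(f"Implied rate: ${implied_rate:.3f}/kWh")   # about $0.093/kWh

# Equivalent annual output per power plant, given "50 power plants".
per_plant_kwh = consumption_kwh / 50
print(f"{per_plant_kwh / 1e9:.1f} billion kWh per plant")  # 2.8 billion kWh
```

The roughly nine-cent implied rate lines up with typical U.S. commercial electricity prices, which is a useful sanity check on the headline numbers.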

Today, there is a digital revolution happening within the modern data center. In fact, data centers have grown from virtually nothing 10 years ago to consuming about 3% of the global electricity supply and accounting for about 2% of total greenhouse gas emissions. To that end, data center managers are always looking for better ways to keep their data centers cool and, most of all, efficient.

With that in mind, a new concept for data center cooling is to approach it from a practical, scientific perspective. That is, treat your data center as an ever-changing entity whose environmental variables require a scientific approach to be fully understood. Let’s examine some options:

  • Applying airflow management (AFM) science to cooling and efficiency. It’s critical to remember that there are services that can help you analyze your cooling ecosystem on an ongoing basis. These approaches are specifically designed with optimal airflow management as the foundational science, and they can be custom-designed to generate energy savings, release stranded cooling capacity, and improve system reliability. They also bring a global perspective on the importance of energy conservation while contributing to reduced operating expenses, increased density capabilities, and improved data center reliability. This brings us to the next point.
  • Leveraging data center studies and evaluations to create actionable solutions. When you run cooling and environmental studies against your data center, you gain direct visibility into the most critical components that keep it healthy. Approached as a science, this gives you ongoing metrics and variables that you can control and optimize. For example, an evaluation of your data center ecosystem could show you how to dramatically reduce costly bypass airflow, the conditioned air that returns to the cooling units without ever passing through IT equipment.

“Data center managers are always looking for better ways to keep their data centers cool and, most of all, efficient.”

  • Eliminating inefficiency via precision cooling and best practices. How clear is your visibility into the data center environment? Are you testing it continuously? A scientific approach to data center variables also lets you benchmark your facility against other locations and against industry-wide optimizations, which helps administrators support data center best practices and ASHRAE compliance as required by the organization.
  • Evaluating constantly changing capacity requirements. Keeping up with industry demand is a great way to create competitive advantages for the business. However, you can’t evolve what you don’t measure. Approaching cooling as a science allows the organization to optimize its current airflow infrastructure to increase cooling capacity and IT density. Remember, cloud, virtualization, and high-density computing are all impacting the architecture of the data center; together they create a platform that must support denser workloads and more users. A good partner who can guide you through the “cooling-as-a-science” approach can drastically improve your understanding of your data center and of where you can create optimizations.
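Two of the measurements above can be sketched in a few lines. The Cooling Capacity Factor (CCF) is commonly defined (per Upsite’s methodology, as I understand it) as running rated cooling capacity divided by 110% of the critical IT load, and bypass airflow is the share of supplied air that never passes through IT equipment. The formulas and the example room figures are assumptions for illustration, not measurements from the article.

```python
def cooling_capacity_factor(rated_cooling_kw, it_load_kw):
    """CCF: running rated cooling capacity over 110% of the IT load.
    Values well above 1.0 suggest stranded capacity that better
    airflow management could release."""
    return rated_cooling_kw / (it_load_kw * 1.1)

def bypass_airflow_fraction(supply_cfm, equipment_intake_cfm):
    """Fraction of conditioned supply air that returns to the
    cooling units without passing through IT equipment."""
    return (supply_cfm - equipment_intake_cfm) / supply_cfm

# Hypothetical room: 500 kW of rated cooling for a 200 kW IT load,
# with units supplying 80,000 CFM while servers draw 50,000 CFM.
print(round(cooling_capacity_factor(500, 200), 2))        # 2.27
print(bypass_airflow_fraction(80_000, 50_000))            # 0.375
```

In this hypothetical room, a CCF well above 1 and nearly 38% bypass airflow would both point to sealing and airflow-management work before buying more cooling.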

The capability of your data center revolves directly around your ability to keep it running efficiently. A lot has changed in the data center ecosystem. During the ’90s and mid-2000s, designers and operators worried about whether air-cooling technologies could cool increasingly power-hungry servers. Today, there are even more density and efficiency challenges to think about. This is why it’s so critical to look at cooling and data center environmental management from a scientific perspective. Again, this is an ongoing, ever-changing concept. By approaching cooling as a science, you’ll be able to address the transformational power-consumption needs of the data center.

Bill Kleyman

CTO, MTM Technologies

