Data Center Cooling Algorithms: Consolidation – Part 3

by Ian Seaton | Jul 31, 2019 | Blog

Ten months ago, when I started this project of producing a series on algorithms for managing or planning a data center's mechanical infrastructure, my thinking was that I could demystify the mechanical plant by breaking it down into discrete elements and making the Excel spreadsheet math transparent. In fact, I approached this project with a slight twinge of trepidation that my friends in the DCIM industry and in the architectural engineering consulting world would somehow feel their black box magic was under attack and respond by taking shots, poking holes, and otherwise protesting the over-simplification of their necessarily nuanced expertise. It turns out I was concerned for naught. I have heard from a couple of readers with suggestions on considerations I may have omitted, and from a couple of other readers with plans to apply my spreadsheets to their spaces and use them to promote and assess their continuous improvement efforts. (I have not received any progress reports yet.)

By the time I got to this point of the project, where I am set to exercise all the algorithms against some case studies, I think I may have actually achieved the opposite effect. Looking at over 100 spreadsheets (some with up to 104 data columns and 8,760 lines of data spread across forty-eight different worksheets) and equations with over 100 terms, I may instead have provided evidence for the value of DCIM investments and consultants' fees. Nevertheless, as promised, I will wrap up this series by exercising all the elements of all the algorithms against some test cases.

For a refresher on any of the details or background for any aspect of these calculations, I refer my readers back to:

The Shifting Conversation on Managing Airflow Management: A Mini Case Study

Cooling Efficiency Algorithms: Coil Performance and Temperature Differentials

Cooling Efficiency Algorithms: Chiller Performance and Temperature Differentials

Cooling Efficiency Algorithms: Economizers and Temperature Differentials (Water-Side Economizers – Series)

Cooling Efficiency Algorithms: Economizers and Temperature Differentials (Water-Side Economizers – Parallel)

Cooling Efficiency Algorithms: Air-Side Economizers and Temperature Differentials

Cooling Efficiency Algorithms: Condensers and Temperature Differentials

Cooling Efficiency Algorithms: Heat Exchangers and Temperature Differentials

Cooling Efficiency Algorithms: Encroaching into the Allowable Data Center Temperature Envelope

For the sake of this exercise, let us assume a one megawatt (1 MW) IT load with eight 50 ton air handlers providing us N+1 cooling redundancy. These units are each delivering 23,000 CFM, and their cooling capacity is based on a 22˚F ΔT between return air and supply air. I should note here the importance of good airflow management: if bypass airflow were reducing that ΔT to 18˚F, we would need nine cooling units to cool the same load with the same level of redundancy, and if that ΔT were only 15˚F, we would need ten CRAH units. It seems to me that buying decisions for blanking panels and grommets for sealing all manner of holes make a little more sense than purchasing and operating extra cooling units. We will operate this data center within the ASHRAE recommended environmental limits with a maximum 80˚F server inlet temperature and, unless otherwise noted, we will assume excellent airflow management exemplified by a 2˚F ΔT between supply air temperature and maximum server inlet temperature.
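To make that unit-count arithmetic concrete, here is a minimal sketch using the standard sensible heat relationship (Q in BTU/hr ≈ 1.08 × CFM × ΔT). The 1.08 constant, the 3,412 BTU/hr-per-kW conversion, and the rounding to whole units plus one redundant unit are my assumptions for illustration, not figures pulled from the spreadsheets themselves.

```python
# Rough sketch: how return-to-supply delta-T drives CRAH unit count for a 1 MW load.
# Assumes the standard sensible heat relation Q (BTU/hr) ~= 1.08 * CFM * delta_T,
# 23,000 CFM per unit, and N+1 redundancy; constants and rounding are illustrative.
import math

IT_LOAD_KW = 1000          # 1 MW IT load
CFM_PER_UNIT = 23_000      # airflow per CRAH, from the example above
BTU_PER_KW = 3412          # 1 kW = 3,412 BTU/hr

def crah_units_required(delta_t_f: float) -> int:
    """Units needed to carry the load at a given return-to-supply delta-T, plus one redundant."""
    capacity_kw = 1.08 * CFM_PER_UNIT * delta_t_f / BTU_PER_KW   # sensible capacity per unit
    return math.ceil(IT_LOAD_KW / capacity_kw) + 1               # N + 1

for dt in (22, 18):
    print(f"{dt} F delta-T -> {crah_units_required(dt)} CRAH units")
# 22 F delta-T -> 8 CRAH units; at 18 F the same load and redundancy level needs 9.
```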

COOLING ENERGY FOR ONE MEGAWATT DATA CENTER

| Location | Annual Cooling kWh (10˚F CRAH Coil Approach) | MLC (10˚F Approach) | Annual Cooling kWh (6˚F CRAH Coil Approach) | MLC (6˚F Approach) |
|---|---|---|---|---|
| Denver | 3,498,255 | 1.40 | 3,049,356 | 1.35 |
| Des Moines | 3,653,670 | 1.42 | 3,158,856 | 1.36 |
| Phoenix | 3,766,853 | 1.43 | 3,245,630 | 1.37 |
| San Jose | 3,613,825 | 1.41 | 3,127,570 | 1.36 |
| San Antonio | 4,369,897 | 1.50 | 3,661,268 | 1.42 |

Table 1: Impact of Variables Affected by CRAH Coil Approach Temperature

Table 1 compiles the results of applying the previously developed algorithms to the sample data center in five different geographic locations, looking at the impact of varying the approach temperature across our cooling coils. We know that a higher approach temperature is associated with a lower pressure drop across the coils and therefore provides an opportunity for fan energy efficiency optimization. (The math of those relationships was discussed in the September 12, 2018 blog cited above.)
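As a quick cross-check on how the MLC column relates to the kWh column, the following sketch assumes the MLC reported in Table 1 is simply one plus annual cooling energy divided by annual IT energy (1 MW × 8,760 hours). That derivation is my inference from the table values, not a definition stated explicitly in this series.

```python
# Sketch: back out the MLC figures in Table 1, assuming MLC = 1 + cooling kWh / IT kWh.
# The 1 MW x 8,760 h IT-energy denominator and the MLC definition are assumptions
# inferred from the table, not taken verbatim from the source spreadsheets.
IT_KWH_PER_YEAR = 1000 * 8760   # 1 MW IT load running all year

table_1 = {                      # annual cooling kWh at (10 F approach, 6 F approach)
    "Denver":      (3_498_255, 3_049_356),
    "Des Moines":  (3_653_670, 3_158_856),
    "Phoenix":     (3_766_853, 3_245_630),
    "San Jose":    (3_613_825, 3_127_570),
    "San Antonio": (4_369_897, 3_661_268),
}

for site, (kwh_10f, kwh_6f) in table_1.items():
    mlc_10f = 1 + kwh_10f / IT_KWH_PER_YEAR
    mlc_6f = 1 + kwh_6f / IT_KWH_PER_YEAR
    print(f"{site:12s} MLC at 10 F approach: {mlc_10f:.2f}, at 6 F approach: {mlc_6f:.2f}")
```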

In the conditions studied in this example, we saw about an 8% CRAH fan energy savings in the data centers with a 10˚F approach temperature versus a 6˚F approach temperature. However, a lower approach temperature is associated with higher chilled water supply temperatures, which allow us to harvest some efficiencies with both our chillers and towers. In these sample data centers, we saw a 21-22% reduction in chiller energy by increasing our leaving water temperature 4˚F to maintain the same supply air temperature at the lower approach. In addition, the lower coil approach allows the tower to expand its efficient operating ambient wet bulb envelope. While our tower is going to be more efficient in a cool, dry environment like Denver than in a warmer, more humid environment like San Antonio, both sample data centers benefited from the reduced CRAH approach temperature. In fact, we saw a 16.2% increase in tower efficiency in San Antonio compared to a 12.8% improvement in Denver, because the change improves performance in the higher wet bulb bins that figure more prominently in the south Texas histogram.
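To show how the coil approach sets the chilled water temperature behind those chiller savings, here is a minimal sketch built from the numbers already given above (80˚F maximum server inlet, a 2˚F supply-to-inlet rise, and 10˚F versus 6˚F approaches). Treating the approach as supply air temperature minus entering chilled water temperature is my working assumption for the illustration.

```python
# Sketch: how the coil approach temperature sets the chilled water setpoint for a fixed
# supply air temperature. Defining approach as (supply air temp - entering water temp)
# is an assumption for illustration; the figures come from the example above.
MAX_SERVER_INLET_F = 80     # ASHRAE recommended maximum inlet used in this example
SUPPLY_TO_INLET_DT_F = 2    # excellent airflow management: 2 F rise from supply to inlet

supply_air_f = MAX_SERVER_INLET_F - SUPPLY_TO_INLET_DT_F   # 78 F supply air target

for approach_f in (10, 6):
    chilled_water_f = supply_air_f - approach_f            # entering water temperature
    print(f"{approach_f} F coil approach -> {chilled_water_f} F chilled water")
# 10 F approach -> 68 F water; 6 F approach -> 72 F water: the 4 F warmer leaving
# water temperature that drives the 21-22% chiller energy reduction cited above.
```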

The purpose of this discussion has nothing to do with the relative merits of the site locations studied, as there are obviously many other considerations involved in site selection. Rather, I just wanted to take the opportunity to provide one simple example of the value of applying concept and design analysis tools that take into account multiple variables and “what if” perspectives. And finally, remember that the cases I have discussed here assume relatively pristine airflow management practices. Just as we saw how a change of a few degrees can deliver double-digit efficiency improvements in our data centers, poor airflow management can deliver even larger results moving in the opposite direction.

Ian Seaton

Data Center Consultant
