Case in Point: Sample Applications of Data Center Economizer Algorithms

by Ian Seaton | Feb 6, 2019 | Blog

I am told there comes a time in all discussions, even academic discussions on building algorithms to forecast comparative energy use for different types of data center economization architectures, where a body finally just has to get to the point. That time would be now. In previous discussions, I have shared algorithms I have developed for making such comparisons, how to collect and organize reference look-up data and how it can all be done on relatively simple Excel worksheets. In this piece, I will apply these methodologies to assessments of series waterside economization, parallel waterside economization and airside economization in four different climate geographies.

At the beginning of this series on algorithms for predicting and measuring the energy content of different aspects of the mechanical plant, I established some baseline conditions to maintain some consistency across different scenarios to make for clear comparisons. These assumptions are:

  1. One Megawatt (1 MW) IT load
  2. 400 ton water-cooled chiller
  3. Eight 50 ton CRAH units (or comparable capacity if no CRAH deployment)
  4. 184,000 CFM cooling air supplied into space at N+1 redundancy
  5. Supply air temperature = 78˚F
  6. Maximum supply ΔT = 2˚F (therefore maximum server inlet = 80˚F)
  7. Cooling coil (or economizer) RAT:SAT ΔT = 22˚F
  8. Excellent airflow management (per 5, 6, and 7 above)
  9. Non-economizer cooling energy for each scenario is calculated by:

                        CP   =   (1 – (LWT – 45) × 0.024) × (BP × CT)

                                    Where: CP   =   Chiller plant power

                                                LWT=   Chiller leaving water temperature ˚F

                                                BP   =    Base power (kW per ton @ 45˚F LWT)

                                                CT   =    Chiller tons
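Since these calculations live in simple Excel worksheets in the original series, the same chiller plant power formula can be sketched in Python for readers who want to verify their own spreadsheets. The function name and the example inputs (0.6 kW/ton base power, 55˚F leaving water) are illustrative assumptions, not figures from the article:

```python
def chiller_plant_power(lwt_f, base_kw_per_ton, chiller_tons):
    """Chiller plant power (kW) per the baseline formula:
    CP = (1 - (LWT - 45) * 0.024) * (BP * CT).
    Each degree F of leaving water temperature above 45F is credited
    with a 2.4% reduction in chiller power."""
    return (1 - (lwt_f - 45) * 0.024) * (base_kw_per_ton * chiller_tons)

# Illustrative example: 400-ton chiller, 0.6 kW/ton base, 55F leaving water
cp = chiller_plant_power(55, 0.6, 400)  # (1 - 0.24) * 240 = 182.4 kW
```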

I invite my regular readers now to skip ahead to the end to see Table 1 for the results of running these algorithms for the sample data center in Chicago, Denver, Seattle and Phoenix. For those lucky readers who just discovered this little island in the storm, I recommend you hunker down and see if you find any value for your own projects in the methodologies I have developed and summarized in the following paragraphs.

For the series waterside economizer, the special challenge was to account for partial economization during those hours when ambient conditions produced tower water too warm to meet supply temperature requirements, but cooler than the return from the data center. The impact of accounting for these hours is particularly significant in situations such as Seattle and Denver, which I will address in my conclusion. The basic series waterside economizer energy forecast algorithm follows below, but a quick look at Cooling Efficiency Algorithms: Economizers and Temperature Differentials (Water-Side Economizers – Series) will reveal the reference graphics and explanations of the various look-up tables that drive some of the calculations.

            (SAT-CA) > Look up calculation for H

                  Q1 = H (CFP + PP + TFP)

            Q2 = (8760 – H) (CP + PP + CFP + TFP)

 

            Where:

SAT=     Supply air temperature in the data center (See “The Shifting Conversation on Managing Airflow Management: A Mini Case Study,” Upsite blog)

CA=      Cumulative approach temperatures (tower + heat exchanger or chiller + CRAH coils). SAT-CA will populate the cell at the head of the wet bulb column in the look up table.

H =       Hours below the necessary wet bulb temperature to utilize free cooling. This number will be the sum (∑) of the hours under X˚F WB column in the look up table.

Q1 =      Energy (kW Hours) annually to operate 100% free cooling

CFP =    CRAH fan power, total in the data center

PP =     Pump power, chilled water loop and condenser loop

TFP =    Tower fan power

CP =     Chiller power with no free cooling

Q2 =      Energy (kW Hours) to operate mechanical plant with no free cooling
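The Q1/Q2 roll-up above can be sketched as a small Python function; the kW figures in the example call are illustrative placeholders, not the article's values:

```python
def economizer_annual_energy(h, cfp, pp, tfp, cp):
    """Annual cooling energy (kWh) split between free-cooling and
    mechanical hours, per the series waterside algorithm:
      Q1 = H * (CFP + PP + TFP)                 # free-cooling hours
      Q2 = (8760 - H) * (CP + PP + CFP + TFP)   # mechanical hours
    """
    q1 = h * (cfp + pp + tfp)
    q2 = (8760 - h) * (cp + pp + cfp + tfp)
    return q1, q2

# Illustrative kW values: CRAH fans 40, pumps 15, tower fans 10, chiller 180
q1, q2 = economizer_annual_energy(h=3000, cfp=40.0, pp=15.0, tfp=10.0, cp=180.0)
```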

 

That roll-up might look like we are done, but not quite. One of the real benefits of series waterside economization is that even when our heat exchanger entering water temperature (HxEWT) is too high to give us 100% free cooling, we can still get partial free cooling whenever the HxEWT is lower than the data center leaving water temperature (DCLWT) minus the cumulative heat exchanger and chiller approach temperatures, thereby reducing the load on the chiller. There are a variety of ways to sneak up on a partial free cooling estimate; one way would be to take advantage of the look-up table of bin data for the data center site.

 

            IF DCLWT – (HxEWT + CA1) > 0˚F, then that difference = Value (= ΔT1)

            PL1 = (DCCFM x ΔT1) ÷ 3140

            PL2 = (DCCFM x ΔT2) ÷ 3140

            PL3 = (DCCFM x ΔT3) ÷ 3140

            PL4 = (DCCFM x ΔT4) ÷ 3140

            Until every whole ˚F increment from 0 to Value has been accounted for

            PLS1 = (PL1 ÷ Q) x CP

            PLS2 = (PL2 ÷ Q) x CP

            PLS3 = (PL3 ÷ Q) x CP

            PLS4 = (PL4 ÷ Q) x CP

            Then

            Budget = Q1 + (Q2 – ((H1 × PLS1) + (H2 × PLS2) + (H3 × PLS3) + (H4 × PLS4)))

 

            Where:

                        CA1 =    Cumulative approach temperatures for heat exchanger and chiller

DCΔT = Difference between ELWT and DCLWT, i.e., the temperature rise of the data center (across CRAH coils)

Value = ˚F for calculating partial loads

ΔT1 =    (i.e., Value) Difference between data center return water temperature and condenser loop temperature plus approach temperatures. Any positive value here represents an opportunity to reduce chiller load and obtain partial free cooling

PL1 =    Partial load value #1 in kW

DCCFM = Data center CFM (airflow demand)

PL2 =    Partial load value #2 in kW (etc.)

ΔT2 =    ΔT1 – 1˚F (and continue until 0)

PLS1 =   Partial load savings for partial load value #1, etc.

Q =       Data center total IT heat/power load

H1 =      From the bin data look-up table: Total hours available for that one degree bucket, i.e. H+1

H2 =      Total hours available from (H+1) to H+1+1, etc.

Budget = Total mechanical plant cooling energy budget with free cooling, partial free cooling and no free cooling, with series waterside economization at a particular supply air temperature, particular approach temperatures and particular IT load.
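The partial free cooling loop above (PL1…PLn, PLS1…PLSn, then the hour-weighted budget) can be collapsed into one Python function. The parameter names and the example values below are my own illustrative stand-ins for the bin-data look-up table, not figures from the article:

```python
def partial_free_cooling_budget(q1, q2, dc_cfm, value, q_it_kw, cp_kw,
                                hours_per_deg):
    """Series waterside energy budget with partial free cooling.
    For each whole degree F of usable delta-T (from `value` down to 1):
      PLn  = (DCCFM * dT) / 3140    # partial load met by free cooling, kW
      PLSn = (PLn / Q) * CP         # chiller power avoided, kW
    Savings are weighted by the bin hours available for each degree;
    hours_per_deg[i] holds the hours for delta-T = value - i.
    Budget = Q1 + (Q2 - sum(Hn * PLSn))."""
    savings = 0.0
    for i, hours in enumerate(hours_per_deg):
        dt = value - i                   # dT1 = Value, dT2 = Value - 1, ...
        pl = (dc_cfm * dt) / 3140        # kW shed to partial free cooling
        pls = (pl / q_it_kw) * cp_kw     # chiller kW saved at that bin
        savings += hours * pls
    return q1 + (q2 - savings)

# Illustrative: 184,000 CFM, 2F of usable delta-T, 1,000 kW IT load,
# 180 kW chiller power, 10 and 20 bin hours for the two degree buckets
budget = partial_free_cooling_budget(100.0, 1000.0, 184000, 2, 1000.0,
                                     180.0, [10, 20])
```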

 

For parallel waterside economization, the special challenge was to develop an algorithm that would prevent us from putting extra wear and tear on our mechanical plant by constantly cycling the chiller on and off for short periods. The sample equation in the original piece assumed we would want to avoid going into economizer mode for increments of less than six hours. In the actual hypothetical case studies reported at the end of this piece, I am assuming we want a minimum of four hours of economization before going through the chiller shutdown protocol. The obvious trade-off is that longer increments will reduce economization savings and shorter increments may reduce the life of key components of the mechanical plant. That, my friends, is a business decision. Again, for diagrams illustrating the different elements of the algorithms and for explanations of the look-up tables, I refer my readers back to Cooling Efficiency Algorithms: Economizers and Temperature Differentials (Water-Side Economizers – Parallel).

            (SAT-CA) > Look up calculation for H

                  Q1 = H (CFP + PP + TFP)

            Q2 =  (8760 – H) (CP + PP + CFP + TFP)

 

            Where:

SAT=     Supply air temperature in the data center (See “The Shifting Conversation on Managing Airflow Management: A Mini Case Study,” Upsite blog)

CA=      Cumulative approach temperatures (tower + heat exchanger or chiller + CRAH coils). SAT-CA will populate the cell at the head of the wet bulb column in the look up table.

H =       Hours below the necessary wet bulb temperature to utilize free cooling. This number will be the sum (∑) of the hours under the X˚F WB column in the look up table. The one complication for parallel versus series is that we may not want to count one or two isolated hours that dip below our trigger temperature if we don’t want to exercise our chiller with a lot of off/on cycling. If we say the wet bulb 1/0 column in Table 1 is column F and we want to have at least five hours of free cooling before we power down chillers, then we could add an additional column (H) to our look-up table and tabulate our free cooling hours with an equation like =+IF(OR(AND(H3=1,F4=1),AND(F4=1,F5=1,F6=1,F7=1,F8=1)),1,0), copied down the 8760 rows. The five elements in the second OR proposition could vary higher or lower depending on how many hours of free cooling you prefer to get when you cycle chillers on and off.

Q1 =      Energy (kW Hours) annually to operate 100% free cooling

CFP =    CRAH fan power, total in the data center

PP =     Pump power, chilled water loop and condenser loop

TFP =    Tower fan power

CP =     Chiller power with no free cooling

Q2 =      Energy (kW Hours) to operate mechanical plant with no free cooling
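The chiller-cycling hour count described under H above, which the spreadsheet formula handles with nested IF/AND/OR logic, can be sketched in Python. The function name is mine, and `free` stands in for the 8760-row 1/0 wet bulb column; an hour counts only if it continues an already-counted run or starts a run of at least `min_run` consecutive free-cooling hours:

```python
def countable_free_hours(free, min_run=5):
    """Count free-cooling hours while screening out short dips below
    the trigger temperature, mirroring the spreadsheet logic:
    hour i counts if the previous hour was counted (run continues),
    or if hours i..i+min_run-1 are all below the trigger (run starts)."""
    counted = [0] * len(free)
    for i, f in enumerate(free):
        if not f:
            continue  # above the wet bulb trigger; chiller must run
        continues_run = i > 0 and counted[i - 1] == 1
        starts_long_run = (i + min_run <= len(free)
                           and all(free[i:i + min_run]))
        if continues_run or starts_long_run:
            counted[i] = 1
    return sum(counted)

# A lone 2-hour dip is ignored; a 5-hour run is fully counted
hours = countable_free_hours([1, 1, 0, 1, 1, 1, 1, 1], min_run=5)  # 5
```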

 

 

For airside economization, the challenge was to account for adding water to and removing water from the data center to stay within the ASHRAE recommended minimum 41˚F dew point and maximum 59˚F dew point. Given that a one megawatt data center will move around 100 billion cubic feet of air in one year, and our sample cities used outside air from as little as 48% of the year up to 98% of the year, the effect of outside air humidity can be profound. For example, in our Denver case study we needed to add over 10 million pounds of water and in Phoenix we needed to add around 9 million pounds of water, while in Chicago we had to remove 1.6 million pounds of water. For exercising the algorithms, I used ultrasonic humidifiers, but high pressure atomizing would get much better results and any of the steam-generating humidifiers would net much poorer results. For dehumidification, I used condensing technology, but something like a desiccant heat wheel might reap better performance in some situations. I will offer a couple of example comparisons in my conclusions. As with the previous two sections, I refer my readers to Cooling Efficiency Algorithms: Airside Economizers and Temperature Differentials for graphic explanations of the different elements in the algorithm equations and how the look-up tables are organized.

(SAT-CA) > Look up table for calculation of H

            Q1   =   (H × FE) + (HuL × E1) + (DHuL × E2)

            Q2   =   MC × (8760 – H)

            Where:

                        SAT   =   Supply air temperature

                        CA    =    Cumulative approach temperature (could be 0; should be no more than 2˚F)

                        H      =   SAT-CA will establish DB column head in Table 1 and H=column∑

                        Q1    =   Cooling energy in economizer mode

FE    =   Total fan energy (0.76 × horsepower × number of fans; be sure to cube the percent utilization to capture affinity law economies)

HuL = Total humidification load. Pounds of water that must be added from Table 1 worksheet

DHuL = Total dehumidification load. Pounds of water that must be removed from Table 1 worksheet.

E1     = Humidification energy. Use values from Table 3 if vendor information has not been identified

E2     = Dehumidification energy. Use 2.4 kW per 13.75 pounds for condensing or 4kW per 250 pounds for desiccant wheel if vendor information has not been identified.

Q2   = Total cooling energy without economization

MC   = Mechanical cooling – If provided by a traditional chiller mechanical plant, use:

                                    CP   =   (1 – (LWT – 45) × 0.024) × (BP × CT)

                                                Where: CP   =   Chiller plant power

                                                            LWT=   Chiller leaving water temperature ˚F

                                                            BP   =    Base power (kW per ton @ 45˚F LWT)

                                                            CT   =    Chiller tons

                                    If provided by integrated DX system, use vendor data.
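The airside algorithm above can likewise be sketched in Python. The function name is mine; the dehumidification energy constants come straight from the E2 defaults above (2.4 kW per 13.75 pounds condensing, 4 kW per 250 pounds desiccant wheel), while the fan and mechanical cooling kW in the example are illustrative placeholders:

```python
# Dehumidification energy per pound of water, per the article's defaults
E2_CONDENSING = 2.4 / 13.75   # kW per pound, condensing technology
E2_DESICCANT = 4.0 / 250.0    # kW per pound, desiccant heat wheel

def airside_annual_energy(h, fe, hul, dhul, e1, e2, mc):
    """Airside economizer annual energy (kWh):
      Q1 = (H * FE) + (HuL * E1) + (DHuL * E2)   # economizer-mode energy
      Q2 = MC * (8760 - H)                       # mechanical-cooling energy
    """
    q1 = (h * fe) + (hul * e1) + (dhul * e2)
    q2 = mc * (8760 - h)
    return q1, q2

# Illustrative: 7,000 economizer hours, 35 kW of fans, 2M lb humidification
# at 0.1 kW/lb, 0.5M lb dehumidification (condensing), 200 kW mechanical
q1, q2 = airside_annual_energy(7000, 35.0, 2e6, 5e5, 0.1, E2_CONDENSING, 200.0)
```

Swapping E2_CONDENSING for E2_DESICCANT in a call like this is the one-cell change behind the Chicago desiccant comparison in the conclusions.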

 

COOLING ENERGY REQUIREMENTS

                 Waterside (Series)    Waterside (Parallel)    Airside

    Chicago      725,135 kW Hours      923,179 kW Hours        822,156 kW Hours

    Denver       662,800 kW Hours      669,970 kW Hours        683,253 kW Hours

    Phoenix      840,107 kW Hours      1,036,971 kW Hours      1,476,448 kW Hours

    Seattle      662,712 kW Hours      666,932 kW Hours        256,641 kW Hours

Table 1: Cooling Energy Use for 1MW Data Center with Different Economizers

Are the algorithm outputs reported in Table 1 accurate to the watt? Hardly. However, they are more accurate than many bar napkin calculations, particularly those of relative newcomers to the industry or of those who may be working on their tenth napkin well beyond happy hour. More importantly, they communicate relative merits in clear terms. For example, it looks like you cannot go wrong in the Denver area, airside economization appears less attractive than waterside economization out in the desert, and the opposite appears to be the case up in the Pacific Northwest. More subtly, the large differences between parallel waterside and series waterside energy use in both Chicago and Phoenix suggest that a good bit of the year in those locations experiences wet bulb temperatures that hover right around our trigger temperature for going into economizer mode, and we are therefore losing a lot of hours by not cycling on and off for one or two hours at a time. I would stress here that these results could vary widely with different data center supply air temperature set points.

The algorithms are also useful in assessing the impact of different supporting technologies. For example, if we deployed desiccant wheel dehumidification in Chicago instead of condensing dehumidification, we would reduce our cooling energy use to 614,125 kW hours, or a 25% decrease in the mechanical load component (MLC, or “cooling only” part of PUE calculation). Conversely, since Denver has zero de-humidification load, it would be difficult to justify a capital expenditure for anything more than one of those little desiccant packets that come in your vitamin bottle. Likewise, changes to the humidification technology in Phoenix could produce dramatic results. Changing from ultrasonic to either electrode or resistive would increase the total cooling energy budget to 4,265,048 kW hours, or a 236% increase to the MLC; whereas a change to high pressure atomizing humidification would reduce the MLC 16% to 1,269,344. The beauty of the algorithms is that each of these calculations took only the second or two required to make a single data entry.

Finally, the algorithms clearly reveal the importance of effective airflow management. Our base assumption for all the scenarios reported in Table 1 included excellent airflow management, with no more than a 2˚F difference between supply air entering the data center and the highest temperature at any server inlet. For perspective, the MLCs in Table 1 range from a low of .076 up to .168, with most within the .07 to .09 range, meaning PUEs ranging from 1.1 up to 1.25 are achievable, depending on electrical conversion efficiency. Without optimized airflow management, in a data center with only a properly arranged hot aisle/cold aisle cabinet deployment and a return air set point, our energy use for cooling a Seattle data center with series waterside economization would increase to 1,554,948 kW hours, a 136% increase to the MLC, and our energy use for cooling a data center in Denver with airside economization would increase to 1,364,372 kW hours, a 99% increase to the MLC. These algorithms not only provide tools for assessing alternative design scenarios, they also provide excellent evidence for justifying investments in airflow management.

Ian Seaton

Data Center Consultant
