Airflow Management Considerations for a New Data Center – Part 1: Server Power versus Inlet Temperature

by Ian Seaton | May 3, 2017 | Blog

One major benefit of good airflow management is the option of running the data center at a higher temperature, perhaps even exploiting some degree of the allowable temperature maximums for different classes of servers. With good airflow management, a 75˚F supply temperature can result in a maximum server inlet temperature somewhere in the data center of 77-79˚F, whereas with poor airflow management, a 55˚F supply temperature can easily result in server inlet temperatures ranging anywhere from 77˚F up to over 90˚F. Such common extremes notwithstanding, let’s assume that poor airflow management had been compensated for with a massive oversupply of over-cooled air, and that with good airflow management now implemented, one is ready to consider letting temperatures start to creep up, per the recommendations of most data center thermal management experts. What can we expect to see happen?

Some of the earlier empirical answers to that question suggested that PUE might go down while total data center energy use increased, due to server fans working harder to cool the servers with higher temperature inlet air. One such study was reported by Hewlett-Packard at the Uptime Symposium in 2011, in which they attempted to capture all the costs associated with performing a specified task, including the power for the computers, power for the mechanical plant, power conversion losses, lights, etc. They tested at four different server inlet temperatures and measured both PUE and the actual cost of doing the work. As one might expect, as they raised the temperatures they gained mechanical plant efficiencies and the PUE went down; however, operating energy costs did not track parallel to PUE. In fact, as illustrated in Table 1, at some point PUE and the cost of doing the work actually diverge.

Hewlett-Packard Report on Server Inlet Temperature Effects

| Server Inlet Temperature | Resultant PUE | Cost to Perform Task |
|---|---|---|
| 68˚F | 1.27 | $510 |
| 77˚F | 1.20 | $501 |
| 86˚F | 1.19 | $565 |
| 95˚F | 1.18 | $601 |

Table 1: Temperature Test Data Reported at 2011 Uptime Symposium

Hewlett-Packard dollarized and corroborated the conclusions of a 2009 Dell and APC study1: at some elevated temperature, server fan speeds would increase to such an extent that they would consume more energy than would be saved by the mechanical plant operating at a higher temperature. In large part, the industry responded with a smug “I told you so” about the evils of raising the temperature in the data center.
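The divergence in Table 1 follows directly from how PUE is defined. A minimal sketch (Python; the IT-load figures are my own illustrative assumptions, not HP's measured data) shows how total facility energy, and hence cost, can rise even as PUE falls:

```python
# Illustrative only: why PUE can fall while total energy (and cost) rises.
# PUE = facility power / IT power, so extra server-fan watts inflate the
# denominator. The IT loads below are assumed values, not HP's measurements.

it_kw = {68: 800, 77: 808, 86: 824, 95: 856}    # assumed IT load per inlet temp (F)
pue = {68: 1.27, 77: 1.20, 86: 1.19, 95: 1.18}  # PUE values from Table 1

for t in sorted(it_kw):
    total_kw = it_kw[t] * pue[t]   # facility power = IT power x PUE
    print(f"{t}F  PUE {pue[t]:.2f}  facility {total_kw:.0f} kW")
```

Between 77˚F and 95˚F in this sketch, PUE keeps falling while facility power climbs back above its 77˚F value, mirroring the cost divergence HP reported.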

Interestingly enough, many in the industry still have not recovered from this leap to a premature conclusion. Both the HP and Dell-APC experiments seem very sound, and I think it would be a waste of time and energy to quibble with the results reported for the conditions tested. However, the conditions tested may not necessarily reflect the overall design considerations being evaluated for a space. For example, are you working on a cost-benefit assessment for free cooling? Or are you building in a state or municipality that is current on energy and building codes and actually requires economizers for new builds or upgraded spaces? The HP experiment included precision perimeter cooling units and a constantly running chiller plant, and the Dell-APC study included three different configuration scenarios, all with in-row cooling – one DX and two with continuously running chilled water. Free cooling at 95˚F, for example, might eliminate the need for running – and even for designing and building – a chiller plant in most parts of the country. In addition, back in 2009 and 2011, most installed servers would fall into what today is categorized by ASHRAE as Class A1 or A2, whereas today almost all currently deployed servers are Class A3, and we can expect Class A4 will eventually migrate from “hardened” to standard. Both the use of economizers and newer servers will dramatically impact the total cost of operating at higher server inlet temperatures.

Server Energy in Watts at Different Inlet Temperatures

800 Watts Nominal

| Inlet Temperature | High End | Low End | Mean (Typical) |
|---|---|---|---|
| 68˚F | 808 | 784 | 800 |
| 72˚F | 816 | 792 | 800 |
| 77˚F | 820 | 800 | 808 |
| 81˚F | 824 | 800 | 816 |
| 86˚F | 832 | 816 | 824 |
| 90˚F | 856 | 824 | 840 |
| 95˚F | 880 | 832 | 856 |
| 99˚F | 920 | 840 | 880 |
| 104˚F | 960 | 848 | 904 |

Table 2: Example of Class A3 Server Energy at Different Inlet Temperatures2, Shown in Watts

Table 2 provides an example of an 800 watt rack mount server with a nominal 80 watts of fan energy, with variations based on input from ASHRAE IT Subcommittee members. As a point of clarification, the high end values are actually for lower performing servers with less efficient fan systems, and the low end values are for better performing servers. For the remaining hypothetical studies, I’ll be using the mean values from this table, but anyone performing this exercise as part of a design project needs to know whether they will be deploying Energy Star servers or inefficient machines. If you don’t know, you can plan for the worst case and hope for a good surprise, or plan around the mean and hope for no surprises. I used these mean values for a quick example assessment of an 800kW data center running 1,000 of these servers, assuming all the best airflow management practices listed in my opening paragraph; the results are shown in Table 3.
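For anyone building Table 2 into a design spreadsheet or script, a minimal sketch of the lookup might look like the following (Python; the data points come from Table 2's mean column, but linear interpolation between the listed buckets is my own assumption):

```python
# Mean server power (watts) per inlet temperature (F), from Table 2.
MEAN_WATTS = {68: 800, 72: 800, 77: 808, 81: 816, 86: 824,
              90: 840, 95: 856, 99: 880, 104: 904}

def server_watts(inlet_f: float) -> float:
    """Estimate mean server power at an inlet temperature, interpolating
    linearly between Table 2's buckets (the interpolation is an assumption)."""
    temps = sorted(MEAN_WATTS)
    if inlet_f <= temps[0]:
        return MEAN_WATTS[temps[0]]
    if inlet_f >= temps[-1]:
        return MEAN_WATTS[temps[-1]]
    for lo, hi in zip(temps, temps[1:]):
        if lo <= inlet_f <= hi:
            frac = (inlet_f - lo) / (hi - lo)
            return MEAN_WATTS[lo] + frac * (MEAN_WATTS[hi] - MEAN_WATTS[lo])

print(server_watts(77))   # 808.0, straight from the table
print(server_watts(88))   # 832.0, midway between 86F (824 W) and 90F (840 W)
```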

Server and Cooling Energy at Different Server Inlet Temperatures

| Temperature | Chiller (kW) | CRAH (kW) | Server (kW) | Total Energy (kW) |
|---|---|---|---|---|
| 68˚F | 154 | 40.7 | 800 | 994.7 |
| 72˚F | 151 | 40.7 | 800 | 991.7 |
| 77˚F | 133 | 44.2 | 808 | 985.2 |
| 81˚F | 124 | 48.7 | 816 | 988.7 |
| 86˚F | 112 | 52.7 | 824 | 988.7 |
| 90˚F | 104 | 61.0 | 840 | 1005.0 |
| 95˚F | 95 | 68.8 | 856 | 1019.8 |
| 99˚F | 88 | 81.1 | 880 | 1049.2 |
| 104˚F | 81 | 93.3 | 904 | 1078.3 |

Table 3: Example of Mechanical Efficiency Component Elements at Different Server Inlet Temperatures

While the “sweet spot” for newer Class A3 servers is somewhere between 86˚F and 90˚F in this example – 10+˚F higher than the spot identified in the HP and Dell-APC studies – there is still a tipping point where the combined server and CRAH fan energy losses exceed the gains from the chiller operating at higher temperatures. It is also worth noting that the Mechanical Efficiency Component (MEC), i.e., the cooling-only part of the PUE calculation, drops slightly at each increased temperature bucket. In this example, the MEC at 68˚F is 1.24 and it drops to 1.19 at 104˚F, even though total energy use increases by over 8%. That anomaly can be attributed to the fact that about two-thirds of the total fan energy increase occurs in the servers, which goes into the divisor of the PUE and MEC calculations. Regardless, everyone’s mileage will vary on this type of exercise, but the point is this: the tribal knowledge floating around that 77˚F is the magic server inlet temperature, beyond which server fan energy increases will negate any other operational savings, turns out to be no longer accurate. That tipping point continues to move up the thermometer with advances in server efficiency as well as improvements to all the components of the mechanical plant.
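The MEC arithmetic can be checked directly against Table 3. A short sketch (Python) using three of the table's rows:

```python
# MEC (cooling-only PUE component) = (chiller + CRAH + server) / server.
# Server fan watts sit in the divisor, which is why MEC can fall even as
# total energy climbs. Rows below are taken from Table 3.

rows = {  # inlet F: (chiller kW, CRAH kW, server kW)
    68: (154, 40.7, 800),
    86: (112, 52.7, 824),
    104: (81, 93.3, 904),
}

for t, (chiller, crah, server) in sorted(rows.items()):
    total = chiller + crah + server
    mec = total / server
    print(f"{t}F  total {total:.1f} kW  MEC {mec:.2f}")
```

Total energy rises from 994.7 kW to 1078.3 kW across these rows while the MEC drops from 1.24 to 1.19, reproducing the apparent anomaly described above.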

When we add free cooling to the equation, we will typically extend that tipping point even further. I have selected Albuquerque for building an example of how one might assess the effect of server inlet temperature on data center design considerations merely because it is home to Upsite’s headquarters. While the data will be different for any other geographic location, the methodology will apply anywhere. For the Albuquerque example, as shown in Table 4, with an air-side economizer, if temperatures were allowed to creep up to 99˚F, the data center could be built without any chiller plant or refrigerant cooling. With water-side economization, the chiller-free data center could be built by allowing server inlet temperatures to reach 86˚F. To put those numbers in perspective, there would normally be less than twenty hours a year in the 95-99˚F bucket at the top of the air-side economizer scale, and only two hours a year at 73˚F WB and five hours a year at 68˚F WB for supplying 86˚F and 81˚F air to server inlets with a water-side economizer; i.e., for 364¾ days a year, server inlet temperatures would still be 77˚F or lower. Obviously, sites like Houston and Miami could not come close to these numbers, which are driven by Albuquerque’s dry-climate wet bulb temperatures – which is why this power/temperature consideration needs to be worked and not just considered. In addition, while all these temperatures are within the envelope of ASHRAE’s allowable temperature range for Class A3 servers, and well within the operating ranges of most server OEMs, there can be a reliability trade-off, which I will discuss in the sixth installment of this series.

Cooling Costs with Economizers at Different Server Inlet Temperatures

800kW IT Load – Albuquerque, NM Bin Data

| Max Inlet Temperature | Air-Side AC | Air-Side Economizer | Air-Side Total | Water-Side AC | Water-Side Economizer | Water-Side Total |
|---|---|---|---|---|---|---|
| 68˚F | $340,685 | $453,635 | $794,320 | $260,313 | $565,586 | $825,899 |
| 72˚F | $269,941 | $513,411 | $783,352 | $167,994 | $650,567 | $818,561 |
| 77˚F | $175,957 | $594,216 | $770,173 | $32,118 | $778,092 | $810,209 |
| 81˚F | $121,017 | $643,470 | $764,487 | $494 | $808,416 | $808,911 |
| 86˚F | $69,703 | $689,739 | $759,442 | $198 | $808,703 | $808,901 |
| 90˚F | $39,396 | $718,428 | $757,824 | $0 | $808,900 | $808,900 |
| 95˚F | $12,544 | $743,754 | $756,298 | N/A | N/A | N/A |
| 99˚F | $1,784 | $754,149 | $755,932 | N/A | N/A | N/A |
| 104˚F | $0 | $755,883 | $755,883 | N/A | N/A | N/A |

Table 4: Example of Cooling Costs with Economizers

When the server fans speed up to move a greater volume of air through the servers, we cannot lose sight of the fact that the fans supplying air into the data center need to respond accordingly. Performing this analysis can then uncover further surprises that you only want to be surprises on paper, not in the actual mechanical plant. For example, for the air-side example in Table 4, the economizer costs for 95˚F, 99˚F, and 104˚F deserve special attention because at these temperatures the air volume requirement has increased to the point where air-moving redundancy has been lost. While exposure to those temperatures in this example is for a minimal number of hours, it has to be a concern that redundancy is lost at the worst possible time, when temperatures are already the highest. If mechanical cooling is provided by precision perimeter cooling units in the data center, or by some mechanism not relying on the economizer air-moving capacity, then the mechanical plant can actually provide a 2N level of redundancy. On the other hand, if the space is going to be planned without any mechanical cooling, then additional air-moving capacity needs to be added for redundancy. While this addition obviously increases the capital part of the project, the extra fan capacity produces fan law energy savings that will usually pay for the extra capacity in less than a year.
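That redundancy check is straightforward to sketch. In the snippet below (Python), every number (unit count, unit capacity, baseline CFM, power ratios) is a hypothetical assumption for illustration, not a figure from Table 4; the operative idea is the cube-root fan affinity relationship between fan power and airflow:

```python
# Hedged sketch: does N+1 air-moving redundancy survive as server fan
# power, and hence airflow demand, climbs with inlet temperature?
# All figures below are illustrative assumptions.

BASELINE_CFM = 100_000        # assumed design airflow at the baseline temperature
UNITS, UNIT_CFM = 5, 25_000   # assumed 5 units x 25,000 CFM = N+1 at baseline

def required_cfm(fan_power_ratio: float) -> float:
    """Airflow demand scales as the cube root of fan power (fan affinity law)."""
    return BASELINE_CFM * fan_power_ratio ** (1 / 3)

for ratio in (1.0, 1.3, 2.0):                  # server fan power vs baseline
    need = required_cfm(ratio)
    with_failure = (UNITS - 1) * UNIT_CFM      # capacity with one unit down
    ok = "N+1 intact" if with_failure >= need else "redundancy lost"
    print(f"fan power x{ratio}: need {need:,.0f} CFM -> {ok}")
```

In this sketch, even a modest 1.3x rise in fan power pushes demand past what the remaining units can deliver after a single failure, which is exactly the kind of surprise better discovered on paper.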

Finally, I just want to offer a couple pointers on some of the nuts and bolts of doing your own analysis for the power and temperature consideration.

  1. If you do not know the fan energy component of your servers, for planning purposes you can assume 10% of the operating load, and then either use the ASHRAE chart or back into the percentages for each temperature bucket by calculating the differences from the nominal 800 watts in Table 2 above.
  2. For calculating data center air mover energy in response to server temperature changes: take the cube root of each fan power difference calculated in #1 above, apply that ratio to your air handler baseline CFM, divide the result by your air handler effective capacity, cube that quotient, and multiply it by the total rated power of your air handlers/fans. That will match your supply volume to demand volume changes.
  3. When you move up the thermometer for free cooling, don’t forget to count only the hours in the particular bin, not the cumulative annual hours under that threshold. For example, in my Albuquerque example, while I have 8434 hours under 64˚F WB, only 326 of those hours actually fall between 59-64˚F WB, so my server fan energy and cooling fan energy for all the other hours will reflect the lower temperatures. There are many ways of rolling these numbers up, but my methodology was:

CostN = [(FPc + EP + SP) × (B2 – B1) × $P] + Cum(N-1)

Where:

FPc = Fan power for cooling units (kW)

EP   = Total power consumed by economizer (kW)

SP   = Total server power (kW)

B2   =  Total annual hours less than this bin upper limit

B1   =  Total annual hours less than upper limit of previous bin

$P  =  Cost per kW Hour for electricity

Cum(N-1) = cumulative annual energy cost through the previous bin
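As a sketch of that roll-up, the following (Python) implements the formula with placeholder bin data; the bins, kW figures, and electricity price are illustrative assumptions, not the article's Albuquerque numbers:

```python
# Sketch of the bin roll-up: cost_N = (FPc + EP + SP) * (B2 - B1) * $P + Cum(N-1).
# All bin data and the electricity price below are placeholder assumptions.

PRICE = 0.10   # assumed $/kWh

# (bin upper-limit label, cumulative annual hours <= that limit,
#  cooling-fan kW, economizer kW, server kW) -- illustrative values
bins = [
    ("68F", 6000, 40.7, 60.0, 800),
    ("77F", 8000, 44.2, 70.0, 808),
    ("86F", 8760, 52.7, 85.0, 824),
]

cum_cost = 0.0
prev_hours = 0
for label, cum_hours, fp_c, ep, sp in bins:
    hours_in_bin = cum_hours - prev_hours   # B2 - B1: hours in THIS bin only
    cum_cost += (fp_c + ep + sp) * hours_in_bin * PRICE
    prev_hours = cum_hours
    print(f"cumulative cost through {label}: ${cum_cost:,.0f}")
```

Note that subtracting the previous bin's cumulative hours is what keeps each bin's energy priced at that bin's fan and server power, per pointer #3 above.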

There is a thread of tribal knowledge and conventional wisdom that says it is counter-productive to raise data center temperatures because any efficiency gains are wiped out by increases in server fan energy and resultant increases in data center fan energy. As I have illustrated here, there is such a relationship, but the tipping point where efficiency gains are negated is usually much higher on the thermometer than most people think. In addition, the value of this exercise can also be the discovery of stretched redundancy in the cooling plant or even the possibility of completely avoiding the capital expense of a mechanical plant.

1 Moss, David, and Bean, John, “Energy Impact of Increased Server Inlet Temperature,” White Paper, www.dell.com/hiddendatacenter, 2009

2 Data derived from the area plot in Figure 2.7 of ASHRAE’s Thermal Guidelines for Data Processing Environments, p. 27

Continued in Airflow Management Considerations for a New Data Center – Part 2: Server Performance versus Inlet Temperature


Ian Seaton

Data Center Consultant
