What Is the Difference Between ASHRAE’s Recommended and Allowable Data Center Environmental Limits? – Part 2

by Ian Seaton | Oct 9, 2019 | Blog

This is part 2 of a three-part series on ASHRAE’s recommended vs. allowable data center environmental limits. To read part 1, click here, and to read part 3, click here.

What really is the difference between the ASHRAE TC9.9 recommended data center environmental limits and the allowable limits? Conventional wisdom might say something like: the recommended envelope for server inlet air temperature is 18-27˚C (64.4˚F – 80.6˚F) and the allowable range is something wider than that. The problem with conventional wisdom is that it accepts the confusion inherent in the misapplication of these defining terms. On close examination, I would assert that the recommended limits allow businesses to frivolously waste capital and operating budgets, while the allowable limits ought to be recommended for businesses to honor their fiduciary responsibilities to all their stakeholders and moral responsibilities to future generations.

At this point, most of our readers are well aware of the latest ASHRAE guidelines for server inlet temperatures and are probably at least somewhat familiar with the “X” factor, so my review will be very brief. After causing some confusion in the industry with the original release of the allowable ranges for server inlet temperatures, which could be permitted for some undefined short periods of time, ASHRAE clarified matters some seven years ago by developing the “X” factor, which we can all use to determine for ourselves what is an acceptable period of exposure to the wider range of temperatures allowed for the different classes of servers (A2, A3, A4). The server OEMs on TC9.9 would not share specific information about their products’ failure rates, but they did share information on how failure rates vary with inlet temperature. They eventually agreed on 68˚F as the baseline figure, and then agreed on what increases or decreases in that base failure rate could be expected when server inlet temperatures run either higher or lower than 68˚F. For example, at 59˚F we should expect to see about 72% of the failures we would see at 68˚F, and at 77˚F we might expect to see 124% of the failures we would typically see at 68˚F. They published a table with low-range, mid-range and high-range values for each 2.5˚C bin above and below the 68˚F baseline “X”. The original paper in which the “X” factor was launched included a case study for Chicago showing that a data center without any air conditioning, relying on free air cooling modulated by the whims of Mother Nature, would actually see 1% fewer server failures than a data center that maintained a constant 68˚F server inlet temperature all year long.
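For readers who prefer code to tables, here is a minimal sketch of the “X” factor idea. The factor values are the ones that reappear in the “X” Factor column of Tables 5 and 6 below; the dictionary and function names are mine for illustration, not ASHRAE’s.

```python
# ASHRAE TC9.9 "X" factors by server inlet temperature bin (degrees F), as used
# in Tables 5 and 6 below. 68 F (20 C) is the 1.00 baseline; bins are 2.5 C (4.5 F) apart.
X_FACTOR = {
    59.0: 0.72, 63.5: 0.87, 68.0: 1.00, 72.5: 1.13, 77.0: 1.24,
    81.5: 1.34, 86.0: 1.42, 90.5: 1.48, 95.0: 1.55, 99.5: 1.61,
    104.0: 1.66, 108.5: 1.71, 113.0: 1.76,
}

def relative_failure_rate(inlet_temp_f: float) -> float:
    """Expected server failure rate relative to a constant 68 F inlet,
    using the nearest bin in the table above."""
    nearest_bin = min(X_FACTOR, key=lambda t: abs(t - inlet_temp_f))
    return X_FACTOR[nearest_bin]

print(relative_failure_rate(59))   # 0.72 -> about 72% of the baseline failures
print(relative_failure_rate(77))   # 1.24 -> about 124% of the baseline failures
```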

In today’s piece I have developed case studies for different environmental limits, different types of cooling economization, and fourteen different data center locations. Tagged onto the back end of this piece are more data tables than you probably care to pore over, but they are there for you to scroll back and forth to as your curiosity inclines you.

The first four tables establish free cooling availability in each of the fourteen locations for servers operating within the recommended range, within the A2 class allowable range, within the A3 class allowable range, and within the OpenEdge server envelope. One quick caveat on the OpenEdge server specification: it cites a maximum 45˚C inlet temperature but allows for short excursions up to 55˚C. There is no definition for “short,” and this equipment was not part of the “X” factor development, so there is not yet much analysis I can include. Nevertheless, at either 45˚C (essentially class A4 allowable) or 55˚C, the possibilities are definitely intriguing. The assumptions behind these available free cooling hours include mandatory excellent airflow management, in which there is no more than a 2˚F increase from where the supply air enters the data center to where it is ingested by any server in the room. I have therefore used a 2˚F approach temperature for direct air-side economization, a 15˚F cumulative approach temperature from ambient wet bulb to delivered supply dry bulb for plate-and-frame water-side economization, a 5˚F approach temperature for air-to-air heat exchanger economizers (such as a heat wheel), and a 7˚F approach temperature from ambient wet bulb to delivered dry bulb for indirect evaporative cooling systems. The 2˚F room variation plus the approach temperatures have been backed out of every calculation in every table. The main takeaway from these first four tables is that, with just a couple of exceptions, some form of free cooling is available just about everywhere for most of the year.
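To make the approach-temperature bookkeeping concrete, here is a minimal sketch of how an hourly ambient reading can be turned into a worst-case server inlet temperature for each economizer type and tested against a limit. This is my reading of the methodology just described; the function names and the hourly weather input are placeholders, and the approach values are the ones stated above.

```python
ROOM_RISE_F = 2.0  # max rise from supply air to the warmest server inlet (excellent airflow management)

# (ambient basis, approach in degrees F) for each economizer type described above
APPROACH_F = {
    "direct_air":    ("dry_bulb", 2.0),   # direct air-side economization
    "air_to_air":    ("dry_bulb", 5.0),   # air-to-air heat exchanger, e.g. heat wheel
    "water_side":    ("wet_bulb", 15.0),  # plate-and-frame water-side economization
    "indirect_evap": ("wet_bulb", 7.0),   # indirect evaporative cooling
}

def worst_case_inlet_f(dry_bulb_f, wet_bulb_f, economizer):
    """Worst-case server inlet temperature for one hour of ambient weather."""
    basis, approach = APPROACH_F[economizer]
    ambient = dry_bulb_f if basis == "dry_bulb" else wet_bulb_f
    return ambient + approach + ROOM_RISE_F

def free_cooling_hours(hourly_weather, economizer, inlet_limit_f):
    """Count the hours in a year of (dry bulb, wet bulb) pairs that stay at or
    below the chosen inlet limit with no mechanical cooling at all."""
    return sum(1 for db, wb in hourly_weather
               if worst_case_inlet_f(db, wb, economizer) <= inlet_limit_f)
```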

In Tables 5 and 6 I have laid out my methodology for determining the variance from the “X” factor for server failures in a data center with no air conditioning: one table for my non-Celsius readers and one for the rest of the community. The server inlet temperature bins in the left columns were all adjusted by the associated approach temperatures before pulling hours from my weather database. You will note that in Omaha we project fewer server failures for air-side economization and indirect evaporative cooling than we would expect to see at a constant 68˚F inlet temperature, and that in Sydney we have somewhat elevated failure rates in all four scenarios.
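The spreadsheet arithmetic behind Tables 5 through 7 is simple enough to show in a few lines. The sketch below repeats the Omaha direct-air column from Table 5: each temperature bin’s hours are multiplied by its “X” factor, and the total is divided by the 8,760 hours a constant 68˚F data center would spend at a factor of 1.00. The variable names are mine.

```python
# Omaha, direct air-side economizer: ("X" factor, hours in bin) pairs from Table 5
omaha_direct_air = [
    (0.72, 4439), (0.87, 510), (1.00, 625), (1.13, 648), (1.24, 854),
    (1.34, 575), (1.42, 584), (1.48, 310), (1.55, 163), (1.61, 42),
    (1.66, 10), (1.71, 0),
]

total_hours = sum(hours for _, hours in omaha_direct_air)          # 8760
weighted_hours = sum(x * hours for x, hours in omaha_direct_air)   # ~8451 failure-weighted hours

variance_from_68f = weighted_hours / total_hours
print(round(variance_from_68f, 2))   # 0.96 -> ~4% fewer failures than a constant 68 F room
```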

In Table 7 I have accumulated the summary lines for all fourteen locations from spreadsheets similar to Tables 5 and 6 – just the answers without all the detail, but the process was the same. We see a fairly wide range of results, from server failure rates near 80% of the baseline (20% fewer than at a constant 68˚F inlet temperature) in London, Frankfurt and Amsterdam up to 120-130% in some scenarios for Phoenix and Hong Kong, with most variations being of the single-digit variety above and below the “X” factor baseline.

However, if you recall the first four tables, there were many combinations of location and economizer style where free cooling was not available 100% of the time. So, for example, if we consider Class A2 servers with a maximum allowable 35˚C (95˚F) inlet air temperature, we will be out of spec, and perhaps in warranty violation, in many of those situations. Therefore, in Table 8 I have shown how much air conditioning we would need to add to a 1MW data center in order not to exceed those allowable limits. While the point of the “X” factor is to show us a path to a data center with no mechanical cooling, as a practical matter that ideal is not always possible. However, since we only need to bump our free cooling supply down from, for example, 98˚F to 93˚F (remember our 2˚F “room approach”?), we do not need a full mechanical plant to accomplish that. Whereas 1MW of IT load at around a 20˚F ΔT is going to require 284 tons of cooling under normal circumstances, we only need a portion of that to compensate for occasional high free cooling temperatures. I looked at the maximum temperature (minus all the associated approach temperatures) for each data center location and calculated the ratio of air conditioning that would be required to bring our inlet temperatures down into the allowable A2 range. The results range from 70 tons for 4 hours in Chicago with free air cooling up to 185 tons for 5,095 hours with air-to-air heat exchange in Phoenix. All the zeros (and there are plenty of them) refer to situations where no air conditioning is required. Just for comparison purposes, Table 9 shows the air conditioning required to maintain the recommended temperature envelope.
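For anyone who wants to check the sizing arithmetic, here is a rough sketch. The 284-ton figure is simply 1MW of IT load expressed in refrigeration tons, and the partial plant is sized by how many degrees of the roughly 20˚F IT ΔT the mechanical plant has to shave off on the worst free-cooling hour. The numbers below use the Chicago 98˚F-to-93˚F example from the text; the variable names are illustrative.

```python
# 1 MW of IT load expressed in refrigeration tons:
# 1,000 kW x 3,412 BTU/hr per kW / 12,000 BTU/hr per ton ~= 284 tons
it_load_kw = 1000
full_plant_tons = it_load_kw * 3412 / 12000          # ~284 tons

# Partial plant: trim the worst delivered free-cooling supply down to the A2 target
# (95 F allowable minus the 2 F room rise = 93 F), out of a ~20 F server delta-T.
delta_t_it_f = 20
worst_supply_f = 98      # warmest delivered supply in the Chicago free air example
target_supply_f = 93     # 95 F A2 limit less the 2 F room rise

fraction_of_load = (worst_supply_f - target_supply_f) / delta_t_it_f
partial_plant_tons = full_plant_tons * fraction_of_load

print(round(full_plant_tons), round(partial_plant_tons))   # 284 71 (Table 8 lists ~70 tons)
```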

Table 10 is at the heart of our economic argument. This is a comparison of Table 8 and Table 9: I have captured the savings, for both capital investment and operating expenses, of operating at the Class A2 allowable limit versus operating within the recommended envelope. The percentages are reductions, not just ratios. Therefore, the 75% capacity and 99% operating hours savings for Chicago with free air cooling mean that our chiller plant only needs to be sized at 25% of what is required to meet the recommended envelope, and we will only need to run it for less than 1% of the hours it would have to run (alongside free cooling) to meet the recommended limit. You will note some zeros for indirect evaporative cooling for Denver and London. There are no savings to report there simply because no air conditioning was required to meet the recommended limit with indirect evaporative cooling in the first place.
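Since the Table 10 percentages are reductions, they can be recomputed directly from Tables 8 and 9. A quick check using the Chicago direct air numbers (70 tons for 4 hours at the A2 limit versus 284 tons for 971 hours at the recommended limit):

```python
# Chicago, direct air-side economizer
a2_tons, a2_hours = 70, 4          # Table 8: to hold the A2 allowable limit
rec_tons, rec_hours = 284, 971     # Table 9: to hold the recommended limit

capacity_saving = 1 - a2_tons / rec_tons    # ~0.75 -> the ~75% capacity reduction quoted above
hours_saving = 1 - a2_hours / rec_hours     # ~0.996 -> the ~99.6% run-hour reduction in Table 10

print(f"{capacity_saving:.0%} {hours_saving:.1%}")   # 75% 99.6%
```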

Finally, a small dose of reality needs to be injected into the “X” factor itself. How many of us have been running our data centers with a constant server inlet temperature of 68˚F? I know there are still data centers out there with 68˚F set points resulting in 50˚F supply air and server inlet temperatures ranging from 65-85˚F, but most of us have left those days behind. With good airflow management, in fact, most of us have tended to inch our supply temperatures up, and we have grown relatively comfortable with inlet temperatures in the mid-to-upper 70s. Therefore, basing a server failure forecast on a 68˚F baseline will not align well with our actual experience. While most healthy data centers are probably operating at higher temperatures, just to be cautious I have moved the baseline up to 72˚F in Table 11 and recalculated the anticipated failure rates for the four different types of free cooling in the fourteen locations against that recalibrated baseline. For the 1MW load of the running example, I have assumed 1,400 servers and a baseline of 42 failures per year, or about 3%. That is on the high side of the data reported in “What Can We Learn from Four Years of Data Center Hardware Failures?”* and “Backblaze Hard Drive Stats for 2017,”** but my error on the side of caution is your ultimate pleasant surprise. The blue-shaded cells show data for scenarios where there is no air conditioning – 100% free cooling. Most of the data points show actual improvements in server reliability operating within the A2 server class allowable envelope as compared to maintaining a constant 72˚F server inlet temperature all year. As for those scenarios that show an increase in server failures, the reader is still faced with the business decision of weighing the value of a small amount of IT hardware against the capital expense of the mechanical cooling infrastructure as well as the cost of operating and maintaining it.
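To put those net changes in perspective, a relative failure variance translates into an absolute failure count for the assumed fleet very simply. The sketch below is only a back-of-envelope restatement of the assumptions above (1,400 servers, roughly 3%, or 42 failures, per year at the baseline), not a reproduction of the exact Table 11 calculation:

```python
servers = 1400
baseline_failure_rate = 0.03                                  # ~3% per year, deliberately on the high side
baseline_failures = round(servers * baseline_failure_rate)    # 42 failures per year

def net_change_in_failures(variance_vs_baseline):
    """Extra (+) or avoided (-) server failures per year for a given relative
    failure variance against the 72 F baseline."""
    return round(baseline_failures * (variance_vs_baseline - 1))

print(net_change_in_failures(0.80))   # -8: ~8 fewer failures than the constant-72 F baseline
print(net_change_in_failures(1.10))   # 4: ~4 more failures, to weigh against the cost of the cooling plant
```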

Having completed this study, I have to recommend operating within the allowable envelope, and I suppose I can grudgingly allow businesses with nothing better to do with their assets to go ahead and operate within the recommended range. There are obviously other ancillary issues involved in operating our data centers at higher temperatures, which I will touch on next time. For now, however, it seems pretty clear to me that economics and server reliability make a good case for recommending the allowable temperature limits.

*Guosai Wang, Lifei Zhang, Wei Xu, “What Can We Learn from Four Years of Data Center Hardware Failures?,” Tsinghua University and Baidu (290,000 failure tickets analyzed)

**Andy Klein, “Backblaze Hard Drive Stats for 2017,” www.backblaze.com, Feb. 1, 2018 (91,000 drives analyzed)

Direct Air-Side Economization Availability

| Location | Recommended (Hours / Ratio) | A2 Allowable (Hours / Ratio) | A3 Allowable (Hours / Ratio) | Open Edge (Hours / Ratio) |
|---|---|---|---|---|
| Amsterdam | 8733 / 99.7% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Atlanta | 6917 / 79% | 8575 / 97.9% | 8757 / 100% | 8760 / 100% |
| Chicago | 7789 / 88.9% | 8756 / 100% | 8760 / 100% | 8760 / 100% |
| Dallas | 6213 / 70.9% | 8493 / 97.0% | 8748 / 99.9% | 8760 / 100% |
| Denver | 7777 / 88.8% | 8645 / 98.7% | 8760 / 100% | 8760 / 100% |
| Frankfort | 8488 / 96.9% | 8754 / 99.9% | 8760 / 100% | 8760 / 100% |
| Hong Kong | 4484 / 51.2% | 8674 / 99.0% | 8760 / 100% | 8760 / 100% |
| London | 8748 / 99.9% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Omaha | 7374 / 84.2% | 8689 / 99.2% | 8760 / 100% | 8760 / 100% |
| Phoenix | 4367 / 49.9% | 6866 / 78.4% | 8080 / 92.2% | 8760 / 100% |
| Reston | 7303 / 83.4% | 8672 / 99.0% | 8760 / 100% | 8760 / 100% |
| San Jose | 8089 / 92.3% | 8719 / 99.5% | 8760 / 100% | 8760 / 100% |
| Sydney | 8477 / 96.8% | 8753 / 99.9% | 8760 / 100% | 8760 / 100% |
| Wenatchee | 7966 / 90.9% | 8675 / 99.0% | 8760 / 100% | 8760 / 100% |

Table 1: Direct Air Side Economization Availability for Various Locations and Temperature Ranges

Indirect Air-Side Economization Availability

| Location | Recommended (Hours / Ratio) | A2 Allowable (Hours / Ratio) | A3 Allowable (Hours / Ratio) | Open Edge (Hours / Ratio) |
|---|---|---|---|---|
| Amsterdam | 8597 / 98.1% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Atlanta | 5671 / 64.7% | 8313 / 95.0% | 8698 / 99.3% | 8760 / 100% |
| Chicago | 6943 / 79.3% | 8631 / 98.5% | 8760 / 100% | 8760 / 100% |
| Dallas | 5007 / 57.2% | 8021 / 91.6% | 8691 / 99.2% | 8760 / 100% |
| Denver | 7316 / 83.5% | 8442 / 96.4% | 8744 / 99.8% | 8760 / 100% |
| Frankfort | 8123 / 92.7% | 8743 / 99.8% | 8760 / 100% | 8760 / 100% |
| Hong Kong | 3197 / 36.5% | 8215 / 93.8% | 8760 / 100% | 8760 / 100% |
| London | 8703 / 99.3% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Omaha | 8563 / 74.9% | 8406 / 96.0% | 8736 / 99.7% | 8760 / 100% |
| Phoenix | 3665 / 41.8% | 5976 / 68.2% | 7478 / 85.4% | 8760 / 100% |
| Reston | 6431 / 73.4% | 8379 / 95.7% | 8751 / 99.9% | 8760 / 100% |
| San Jose | 7446 / 85.0% | 8621 / 98.4% | 8656 / 98.8% | 8760 / 100% |
| Sydney | 7297 / 83.3% | 8728 / 99.6% | 8759 / 100.0% | 8760 / 100% |
| Wenatchee | 7507 / 85.7% | 8578 / 97.9% | 8743 / 99.8% | 8760 / 100% |

Table 2: Indirect Air-Side Economization Availability for Different Server Classes and Various Data Center Locations

Water-Side Economization Availability

| Location | Recommended (Hours / Ratio) | A2 Allowable (Hours / Ratio) | A3 Allowable (Hours / Ratio) | Open Edge (Hours / Ratio) |
|---|---|---|---|---|
| Amsterdam | 8403 / 95.9% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Atlanta | 5288 / 60.4% | 8714 / 99.5% | 8760 / 100.0% | 8760 / 100% |
| Chicago | 6687 / 76.3% | 8748 / 99.9% | 8760 / 100% | 8760 / 100% |
| Dallas | 4167 / 47.7% | 8630 / 98.5% | 8760 / 100% | 8760 / 100% |
| Denver | 8362 / 95.5% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Frankfort | 8013 / 91.5% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Hong Kong | 2395 / 27.3% | 6694 / 76.4% | 8760 / 100% | 8760 / 100% |
| London | 8679 / 99.1% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Omaha | 6123 / 69.9% | 8698 / 99.3% | 8760 / 100% | 8760 / 100% |
| Phoenix | 6367 / 72.8% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Reston | 6120 / 69.9% | 8737 / 99.7% | 8760 / 100% | 8760 / 100% |
| San Jose | 7958 / 50.8% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Sydney | 6108 / 69.7% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Wenatchee | 8430 / 96.2% | 8760 / 100.0% | 8760 / 100.0% | 8760 / 100% |

Table 3: Water-Side Economization Availability for Different Server Classes and Various Data Center Locations

Indirect Evaporative Cooling Availability

| Location | Recommended (Hours / Ratio) | A2 Allowable (Hours / Ratio) | A3 Allowable (Hours / Ratio) | Open Edge (Hours / Ratio) |
|---|---|---|---|---|
| Amsterdam | 8751 / 99.9% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Atlanta | 7461 / 85.2% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Chicago | 8188 / 93.5% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Dallas | 6367 / 72.8% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Denver | 8760 / 100% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Frankfort | 8747 / 99.9% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Hong Kong | 4415 / 50.4% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| London | 8760 / 100% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Omaha | 7848 / 89.6% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Phoenix | 7545 / 86.1% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Reston | 8079 / 92.2% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| San Jose | 8750 / 99.9% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Sydney | 8625 / 98.5% | 8760 / 100% | 8760 / 100% | 8760 / 100% |
| Wenatchee | 8758 / 100.0% | 8760 / 100% | 8760 / 100% | 8760 / 100% |

Table 4: Indirect Evaporative Cooling Economization Availability for Different Server Classes and Various Data Center Locations 

Omaha “X” Factor Server Failure Forecast

| Inlet Temp (˚F) | “X” Factor | Direct Air Economizer (Hours / Hours × X) | Indirect Economizer (Hours / Hours × X) | Water-Side Economizer (Hours / Hours × X) | Indirect Evap Cooling (Hours / Hours × X) |
|---|---|---|---|---|---|
| 59 | 0.72 | 4439 / 3196.1 | 3915 / 2818.8 | 3569 / 2569.7 | 4400 / 3168 |
| 63.5 | 0.87 | 510 / 443.7 | 401 / 348.9 | 378 / 328.9 | 438 / 381.1 |
| 68 | 1.00 | 625 / 625 | 633 / 633 | 539 / 539 | 763 / 763 |
| 72.5 | 1.13 | 648 / 732.2 | 499 / 563.9 | 458 / 517.5 | 696 / 786.5 |
| 77 | 1.24 | 854 / 1058.9 | 774 / 959.8 | 802 / 994.5 | 1039 / 1288.4 |
| 81.5 | 1.34 | 575 / 770.5 | 691 / 925.9 | 737 / 987.6 | 896 / 1200.6 |
| 86 | 1.42 | 584 / 829.3 | 738 / 1047.9 | 1119 / 1588.9 | 466 / 661.7 |
| 90.5 | 1.48 | 310 / 458.8 | 469 / 694.1 | 780 / 1154.4 | 58 / 85.8 |
| 95 | 1.55 | 163 / 252.7 | 425 / 658.8 | 343 / 531.7 | 4 / 6.2 |
| 99.5 | 1.61 | 42 / 67.6 | 144 / 231.8 | 35 / 56.4 | 0 / 0 |
| 104 | 1.66 | 10 / 16.6 | 61 / 101.3 | 0 / 0 | 0 / 0 |
| 108.5 | 1.71 | 0 / 0 | 10 / 17.1 | 0 / 0 | 0 / 0 |
| Failure Variance from 68˚F | | 0.96 | 1.03 | 1.06 | 0.95 |

Table 5: Example “X” Factor Server Failure Forecast for a Data Center with no Air Conditioning in Omaha (˚F)

Sydney, Australia “X” Factor Server Failure Forecast

| Inlet Temp (˚C) | “X” Factor | Direct Air Economizer (Hours / Hours × X) | Indirect Economizer (Hours / Hours × X) | Water-Side Economizer (Hours / Hours × X) | Indirect Evap Cooling (Hours / Hours × X) |
|---|---|---|---|---|---|
| 15 | 0.72 | 1604 / 1154.9 | 916 / 659.5 | 289 / 208.08 | 1184 / 852.48 |
| 17.5 | 0.87 | 1300 / 1131 | 926 / 805.6 | 684 / 595.08 | 1284 / 1117.08 |
| 20 | 1.00 | 1445 / 1445 | 1334 / 1334 | 1219 / 1219 | 1499 / 1499 |
| 22.5 | 1.13 | 1848 / 2088.2 | 1482 / 1674.7 | 1433 / 1619.29 | 1722 / 1945.86 |
| 25 | 1.24 | 1646 / 2041.0 | 1914 / 2373.4 | 1694 / 2100.56 | 1927 / 2389.48 |
| 27.5 | 1.34 | 716 / 959.4 | 1457 / 1952.4 | 1935 / 2592.9 | 1074 / 1439.16 |
| 30 | 1.42 | 143 / 203.1 | 580 / 823.6 | 1371 / 1946.82 | 70 / 99.4 |
| 32.5 | 1.48 | 31 / 45.9 | 99 / 146.5 | 135 / 199.8 | 0 / 0 |
| 35 | 1.55 | 20 / 31 | 30 / 46.5 | 0 / 0 | 0 / 0 |
| 37.5 | 1.61 | 6 / 9.7 | 18 / 28.9 | 0 / 0 | 0 / 0 |
| 40 | 1.66 | 1 / 1.7 | 3 / 4.9 | 0 / 0 | 0 / 0 |
| 42.5 | 1.71 | 0 / 0 | 1 / 1.7 | 0 / 0 | 0 / 0 |
| 45 | 1.76 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 |
| Failure Variance from 68˚F | | 1.04 | 1.12 | 1.19 | 1.07 |

Table 6: Example “X” Factor Server Failure Forecast for a Data Center with no Air Conditioning in Sydney, Australia (˚C)

“X” Factor Predicted Server Failure Variance for Data Center with no Air Conditioning Compared to Data Center with Constant 68˚F Server Inlet Temperature

| Location | Direct Air | Indirect Air | Water-Side | Indirect Evap |
|---|---|---|---|---|
| Amsterdam | 0.808 | 0.884 | 1.010 | 0.849 |
| Atlanta | 1.057 | 1.139 | 1.164 | 1.025 |
| Chicago | 0.933 | 0.993 | 1.024 | 0.917 |
| Dallas | 1.115 | 1.194 | 1.227 | 1.094 |
| Denver | 0.908 | 0.967 | 0.923 | 0.819 |
| Frankfort | 0.845 | 0.916 | 0.992 | 0.854 |
| Hong Kong | 1.270 | 1.327 | 1.418 | 1.270 |
| London | 0.783 | 0.852 | 0.963 | 0.813 |
| Omaha | 0.965 | 1.028 | 1.058 | 0.952 |
| Phoenix | 1.201 | 1.198 | 1.161 | 0.989 |
| Reston | 0.979 | 1.046 | 1.068 | 0.952 |
| San Jose | 0.930 | 1.038 | 1.087 | 0.905 |
| Sydney | 1.040 | 1.125 | 1.197 | 1.066 |
| Wenatchee | 0.884 | 0.947 | 0.915 | 0.807 |

Table 7: “X” Factor Predicted Server Failure Variance for Data Center with no Air Conditioning Compared to Data Center with Constant 68˚F Server Inlet Temperature 

Air Conditioning Required to Meet A2 Maximum Inlet Temperature for 1MW IT Load in Conjunction with Various Types of Economizers

| Location | Direct Air (Tons / Hours) | Indirect Air (Tons / Hours) | Water-Side (Tons / Hours) | Indirect Evap (Tons / Hours) |
|---|---|---|---|---|
| Amsterdam | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 |
| Atlanta | 110 / 185 | 137 / 442 | 70 / 46 | 0 / 0 |
| Chicago | 70 / 4 | 110 / 29 | 110 / 12 | 0 / 0 |
| Dallas | 137 / 267 | 156 / 739 | 70 / 130 | 0 / 0 |
| Denver | 110 / 115 | 137 / 318 | 0 / 0 | 0 / 0 |
| Frankfort | 70 / 6 | 110 / 17 | 0 / 0 | 0 / 0 |
| Hong Kong | 70 / 86 | 70 / 545 | 70 / 2066 | 0 / 0 |
| London | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 |
| Omaha | 110 / 71 | 137 / 354 | 70 / 62 | 0 / 0 |
| Phoenix | 172 / 4393 | 185 / 5095 | 0 / 0 | 0 / 0 |
| Reston | 110 / 88 | 137 / 381 | 70 / 23 | 0 / 0 |
| San Jose | 68 / 41 | 110 / 39 | 0 / 0 | 0 / 0 |
| Sydney | 119 / 7 | 141 / 32 | 0 / 0 | 0 / 0 |
| Wenatchee | 110 / 85 | 137 / 82 | 0 / 0 | 0 / 0 |

Table 8: Air Conditioning Required to Meet A2 Maximum Inlet Temperature for 1MW IT Load in Conjunction with Various Types of Economizers

Air Conditioning Required to Meet Recommended Maximum Inlet Temperature for 1MW IT Load in Conjunction with Various Types of Economizers

| Location | Direct Air (Tons / Hours) | Indirect Air (Tons / Hours) | Water-Side (Tons / Hours) | Indirect Evap (Tons / Hours) |
|---|---|---|---|---|
| Amsterdam | 284 / 27 | 284 / 163 | 284 / 357 | 284 / 9 |
| Atlanta | 284 / 1843 | 284 / 3089 | 284 / 3472 | 284 / 1299 |
| Chicago | 284 / 971 | 284 / 1817 | 284 / 2073 | 284 / 572 |
| Dallas | 284 / 2547 | 284 / 3753 | 284 / 4584 | 284 / 2384 |
| Denver | 284 / 983 | 284 / 1444 | 284 / 398 | 0 / 0 |
| Frankfort | 284 / 272 | 284 / 637 | 284 / 747 | 284 / 13 |
| Hong Kong | 284 / 4267 | 284 / 5563 | 284 / 6365 | 284 / 4345 |
| London | 284 / 12 | 284 / 57 | 284 / 81 | 0 / 0 |
| Omaha | 284 / 1136 | 284 / 2197 | 284 / 2637 | 284 / 912 |
| Phoenix | 284 / 4393 | 284 / 5095 | 284 / 2387 | 284 / 1215 |
| Reston | 284 / 1457 | 284 / 2359 | 284 / 2640 | 284 / 681 |
| San Jose | 284 / 671 | 284 / 1314 | 284 / 802 | 284 / 10 |
| Sydney | 284 / 283 | 284 / 1463 | 284 / 2652 | 284 / 135 |
| Wenatchee | 284 / 794 | 284 / 1253 | 284 / 330 | 284 / 2 |

Table 9: Air Conditioning Required to Meet Recommended Maximum Inlet Temperature for 1MW IT Load in Conjunction with Various Types of Economizers

Capacity and Run Time Savings for Air Conditioning in Data Center at A2 Allowable Inlet Temperature versus Recommended Temperature

| Location | Direct Air (Capacity / Hours) | Indirect Air (Capacity / Hours) | Water-Side (Capacity / Hours) | Indirect Evap (Capacity / Hours) |
|---|---|---|---|---|
| Amsterdam | 100% / 100% | 100% / 100% | 100% / 100% | 100% / 100% |
| Atlanta | 61.3% / 90.0% | 51.6% / 85.7% | 75.5% / 98.7% | 100% / 100.0% |
| Chicago | 75.5% / 99.6% | 61.3% / 92.9% | 61.3% / 99.4% | 100% / 100.0% |
| Dallas | 51.6% / 89.5% | 45.2% / 80.3% | 75.5% / 97.2% | 100% / 100.0% |
| Denver | 61.3% / 88.3% | 51.6% / 78.0% | 100.0% / 100.0% | 0 / 0 |
| Frankfort | 75.5% / 97.8% | 61.3% / 97.3% | 100.0% / 100.0% | 100.0% / 100.0% |
| Hong Kong | 75.5% / 98.0% | 75.5% / 90.2% | 75.5% / 67.5% | 100.0% / 100.0% |
| London | 100.0% / 100.0% | 100.0% / 100.0% | 100.0% / 100.0% | 0 / 0 |
| Omaha | 61.3% / 94.9% | 51.6% / 83.9% | 75.5% / 97.6% | 100.0% / 100.0% |
| Phoenix | 39.4% / 56.9% | 34.8% / 45.4% | 100.0% / 100.0% | 100.0% / 100.0% |
| Reston | 61.3% / 94.0% | 51.6% / 83.6% | 75.5% / 99.1% | 100.0% / 100.0% |
| San Jose | 76.1% / 93.9% | 61.3% / 89.4% | 100.0% / 100.0% | 100.0% / 100.0% |
| Sydney | 58.1% / 97.5% | 50.3% / 97.8% | 100.0% / 100.0% | 100.0% / 100.0% |
| Wenatchee | 61.3% / 89.3% | 51.6% / 85.5% | 100.0% / 100.0% | 100.0% / 100.0% |

Table 10: Capacity and Run Time Savings for Air Conditioning in Data Center at A2 Allowable Inlet Temperature versus Recommended Temperature

Net Change in Server Failures in a Data Center Operating Within the A2 Range versus Maintaining a Constant 72˚F Inlet Temperature

| Location | Direct Air | Indirect Air | Water-Side | Indirect Evap |
|---|---|---|---|---|
| Amsterdam | -14 | -10 | -5 | -12 |
| Atlanta | -3 | 0 | +1 | -4 |
| Chicago | -8 | -6 | -4 | -9 |
| Dallas | -1 | +3 | +4 | -2 |
| Denver | -9 | -7 | -9 | -13 |
| Frankfort | -12 | -4 | -6 | -12 |
| Hong Kong | +6 | +8 | +14 | +6 |
| London | -15 | -7 | -12 | -13 |
| Omaha | -7 | -4 | -3 | -7 |
| Phoenix | +4 | +7 | +1 | -6 |
| Reston | -6 | -4 | -3 | -7 |
| San Jose | -8 | -4 | -2 | -9 |
| Sydney | -4 | 0 | +3 | -3 |
| Wenatchee | -10 | -9 | -8 | -14 |

Table 11: Net Change in Server Failures in a Data Center operating within A2 Allowable Range versus Maintaining a Constant 72˚F Server Inlet Temperature

This is part 2 of a three part series on ASHRAE’s recommended vs allowable data center environmental limits. To read part 1, click here, and to read part 3, click here.


Ian Seaton

Data Center Consultant
