Airflow Management Considerations for a New Data Center – Part 6: Server Reliability versus Inlet Temperature

by Ian Seaton | Jul 12, 2017 | Blog

[This continues from Airflow Management Considerations for a New Data Center – Part 5: Server Corrosion versus Moisture Levels]

Airflow management considerations will inform the degree to which we can take advantage of excellent airflow management practices to drive down the operating cost of our data center. In previous installments of this seven-part series, I demonstrated that data centers can be run warmer than conventional wisdom suggests before increased server fan energy reverses mechanical plant savings, before server performance is adversely affected, and before server price premiums consume mechanical plant savings. I then suggested that chiller-free data centers are much more realistic than conventional wisdom might purport, and provided evidence that ICT equipment OEMs generally allow for wider humidity ranges than mainstream standards and industry guidelines. The first five parts of this series provided evidence from manufacturers’ product information, independent lab research results, and math models that together make a rather compelling argument for the efficacy of designing, building and operating data centers without chiller plants or refrigerant cooling. Today we will look at the reliability of our ICT equipment in these supposedly hostile environments.

Clearly, there is some temperature limit beyond which our servers will start suffering from higher failure rates; otherwise, why would all the manufacturers’ user documentation and various industry standards set upper and lower limits on the temperature of air entering that equipment? Over time, that envelope has expanded, and when ASHRAE TC9.9 first created a category of allowable temperature limits for short periods, we were left to wonder what the definition of a short period might be: was it measured in minutes, hours or days, and would violations produce catastrophic meltdowns or merely some accelerated rate of planned failures? Six years ago we got our answer,1 but for some reason that breakthrough has not yet settled into the ho-hum of old news. The breakthrough came when the nineteen major ICT OEMs on the ASHRAE TC9.9 IT Subcommittee figured out how to open their kimonos without giving away everything. Everyone in the business of designing, building and servicing ICT equipment has some kind of database of warranty experience on equipment failures, and most of that data includes internal footprints on the conditions surrounding the failures, including temperature. It would not have been prudent for those open kimonos to make semi-public statements like, “We had 14% failures within eighteen months on platform L when operating at 90°F for over 30% of that period.” You don’t open your kimono and hand your competition a camera. What they decided they could reveal was that at 68°F their equipment would experience some failure rate X, a figure their regular customer base would be able to peg from experience, and that at some specific temperature below 68°F their equipment could be expected to fail at 90% of X (0.9X), while at some specific temperature above 68°F it could be expected to fail at 1.15X, or whatever their history showed. The results of this exercise are summarized in Table 1 below, wherein the baseline is the forecasted equipment failure rate at 68°F, with variations from that baseline at temperatures above and below it for above-average servers (lower bound), average servers, and below-average servers (upper bound).

Table 1: Server Failure Rates at Different Server Inlet Temperatures2

Inlet Temperature (°F)    Relative Failure Rate "X" Factor
                          Lower Bound    Average    Upper Bound
59.0                      0.72           0.72       0.72
63.5                      0.80           0.87       0.95
68.0                      0.88           1.00       1.14
72.5                      0.96           1.13       1.31
77.0                      1.04           1.24       1.43
81.5                      1.12           1.34       1.54
86.0                      1.19           1.42       1.63
90.5                      1.27           1.48       1.69
95.0                      1.35           1.55       1.74
99.5                      1.43           1.61       1.78
104.0                     1.51           1.66       1.81
108.5                     1.59           1.71       1.83
113.0                     1.67           1.76       1.84
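For readers who want to play with these numbers themselves, here is a minimal Python sketch of how the Average column of Table 1 can be encoded as a simple lookup. The nearest-bin snapping is my own simplification for illustration; the published guidance presents discrete 4.5°F bins rather than a continuous curve, so any scheme for handling in-between temperatures is an assumption.

```python
# The "Average" column of Table 1 encoded as a lookup keyed by bin temperature (°F).
# A measured inlet temperature is simply snapped to the nearest published bin.

AVG_X_FACTOR = {  # °F -> relative failure rate vs. the 68°F baseline
    59.0: 0.72, 63.5: 0.87, 68.0: 1.00, 72.5: 1.13, 77.0: 1.24,
    81.5: 1.34, 86.0: 1.42, 90.5: 1.48, 95.0: 1.55, 99.5: 1.61,
    104.0: 1.66, 108.5: 1.71, 113.0: 1.76,
}

def x_factor(inlet_temp_f: float) -> float:
    """Return the average-server X factor for the nearest Table 1 bin."""
    nearest_bin = min(AVG_X_FACTOR, key=lambda t: abs(t - inlet_temp_f))
    return AVG_X_FACTOR[nearest_bin]

print(x_factor(64.0))   # -> 0.87 (snaps to the 63.5°F bin)
print(x_factor(85.0))   # -> 1.42 (snaps to the 86.0°F bin)
```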

The motivation for exploring these limits and thresholds is to determine whether a case can be made for designing and operating a data center without a chiller or any refrigerant cooling. As such, we understand that our supply temperature will not be held at a constant set point but will instead use some form of free cooling to follow Mother Nature, within some reasonable bounds. For example, if we are using air-side economization in Minneapolis or Fargo or Cheyenne, we will capture enough return air in a recirculation mixing box to keep our minimum temperature above some predetermined level during the winter. For the Chicago and Boise examples discussed below, we will not let our minimum server inlet temperature slip below 59°F, equivalent to the lower allowable boundary for Class A2 servers. With the release of the “X” Factor, ASHRAE presented a case study for Chicago where, because of the number of hours per year under 68°F (during which the data center would operate between 59°F and 68°F on free cooling, with no chiller installed or operating), server reliability would actually improve by 3% over the 68°F baseline, as summarized in Table 2 below.

Table 2: Time at temperature weighted failure rate calculations for IT equipment in Chicago3

Net “X” Factor = 0.97

Inlet Temperature         "X" Factor    % of Hours
59°F ≤ T ≤ 68°F              0.865        72.45%
68°F ≤ T ≤ 77°F              1.130        14.63%
77°F ≤ T ≤ 86°F              1.335         9.47%
86°F ≤ T ≤ 95°F              1.482         3.45%
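The net figure in Table 2 is simply the per-bin X factor weighted by the fraction of annual hours spent in each bin. Here is a quick sketch of that arithmetic using the Table 2 values; the variable names are mine, not part of the ASHRAE material.

```python
# Net X factor for the Chicago case study: weight each bin's X factor
# by the percentage of annual hours spent in that temperature bin.

chicago_bins = [      # ("X" factor, % of hours) from Table 2
    (0.865, 72.45),
    (1.130, 14.63),
    (1.335,  9.47),
    (1.482,  3.45),
]

net_x = sum(x * pct / 100.0 for x, pct in chicago_bins)
print(round(net_x, 2))   # -> 0.97, i.e. ~3% fewer forecast failures than at a constant 68°F
```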

I have previously developed a similar case study for a data center in Boise that illustrates in more detail how the net X factor is actually calculated. The calculation process is rolled up in Table 3.

Table 3: X Factor Server Failure Rate Prediction for Boise, ID4

Temperature (°F)    Hours    "X" Factor    Factored Hours
59.0                 5460    0.72          3931.20
63.5                  572    0.87           497.64
68.0                  426    1.00           426.00
72.5                  504    1.13           569.52
77.0                  377    1.24           467.48
81.5                  437    1.34           585.58
86.0                  327    1.42           464.34
90.5                  271    1.48           401.08
95.0                  178    1.55           275.90
99.5                  165    1.61           265.65
104.0                  38    1.66            63.08
108.5                   5    1.71             8.55
Total                8760                  7956.02

The total factored hours come to 7,956; dividing by the 8,760 hours in a year produces a net X factor of 0.91, which means that operating the data center at these temperatures would be forecast to produce roughly 9% fewer server failures than running the data center all year at 68°F. Again, besides saving on both the capital and operational expense of some form of mechanical cooling, this data center would see fewer equipment failures than a data center running 24/7 with a 68°F server inlet temperature. For some of my readers, I suspect that all the previous discussion has been nothing more than a rehash of the standard “X” Factor analytics. I hope you have stuck around, though, because this simple tool can be applied well beyond straightforward comparisons to the 68°F baseline.
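The Boise roll-up follows the same pattern as Chicago, except the weights are bin hours rather than percentages. A short sketch of the Table 3 arithmetic (again, the structure and names here are mine, not part of the original study):

```python
# Time-at-temperature weighted X factor for Boise: multiply each bin's hours
# by its average X factor, sum, then divide by the 8,760 hours in a year.

boise_bins = [        # (hours, "X" factor) from Table 3
    (5460, 0.72), (572, 0.87), (426, 1.00), (504, 1.13),
    (377, 1.24), (437, 1.34), (327, 1.42), (271, 1.48),
    (178, 1.55), (165, 1.61), (38, 1.66), (5, 1.71),
]

factored_hours = sum(hours * x for hours, x in boise_bins)
net_x = factored_hours / 8760
print(round(factored_hours), round(net_x, 2))   # -> 7956 0.91
```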
A project I worked on recently offers a glimpse of different practical applications for the X Factor. In this data center, twenty-seven temperature sensors were more or less strategically located around the floor, and the collected data included readings outside the intended range of 68°F to 80.6°F, missing on both the high side and the low side. The site manager was concerned about how these excursions might have affected the reliability of his servers. The sensors recorded readings every ten minutes. For my first pass, I looked at a period slightly longer than one month, representing the period with the highest count of sensor readings above the desired maximum. The compilation of the total available 20,136 sensor-hours (the ten-minute readings converted to hours across all twenty-seven sensors) is captured in Table 4, with a resulting X factor total of 21,671 factored hours (hours × “X” Factor). Assuming a 68°F baseline, we would therefore conclude that our temperatures had produced some increase in server failures. However, rather than a theoretical baseline of 68°F, this space had an actual baseline of 68°F to 80.6°F.

Table 4: Practical Application Example of X Factor

Temperature (°F)    Hours     "X" Factor    Hours × "X" Factor
59.0                    0     0.72              0
63.5                 2875     0.87           2501
68.0                 7299     1.00           7299
72.5                 5667     1.13           6404
77.0                 3141     1.24           3895
81.5                  838     1.34           1123
86.0                  316     1.42            449
90.5                    0     1.48              0
95.0                    0     1.55              0
Total               20136                   21671

If we instead spread those same 20,136 hours evenly over the four bins inside that 68°F to 80.6°F design range, the factored-hour total would be 27,710, an X-factor ratio of 1.38, roughly 28% higher than the ratio based on the measured sensor data. Therefore, instead of the twelve actual server failures they experienced in twelve months out of a population of over 1,090 servers, they could have expected to see fifteen failures if they had operated within their design-intent temperature range all year. Obviously this methodology lacks absolute precision, but in this case, where the temperature sensor data serve as surrogates for inlet temperatures in both the intended and the actual environment scenarios, it can provide a useful relative order of magnitude and some reassurance.
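Expressed as a calculation, the comparison above boils down to scaling the observed failure count by the ratio of the two factored-hour totals. A back-of-envelope sketch using the figures cited in this example (the totals are taken directly from the text above; the variable names are mine):

```python
# Scale the observed failure count by the ratio of design-intent to measured
# factored hours to estimate failures under the "what if" scenario.

measured_factored_hours = 21_671        # from Table 4 (actual sensor data)
design_intent_factored_hours = 27_710   # estimate for staying within 68-80.6°F all year
observed_failures = 12                  # failures over twelve months, ~1,090 servers

ratio = design_intent_factored_hours / measured_factored_hours
expected_failures = observed_failures * ratio
print(round(ratio, 2), round(expected_failures))   # -> 1.28 15
```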
Finally, all the examples I have discussed have conveniently favored the practice of allowing some cooler free cooling temperatures to compensate for occasional excursions into higher temperatures. That is obviously not always going to be the case. For example, ASHRAE’s introduction of the X Factor identifies cities such as San Francisco, Seattle, Boston, Denver, Los Angeles and Chicago as locations where chiller-free data centers would have improved server reliability over the 24/7 68°F baseline, whereas cities such as Houston, Dallas, Phoenix and Miami would see X factors over 1.0, with Phoenix and Miami over 1.2.5 Does that mean some geographic areas are off the table for consideration of chiller-free facilities? Yes, but maybe not as many as you might think. For example, consider the impact of a finding that you would experience a 20% increase in server failures if you built a data center with no mechanical cooling in a particular location. What does that number actually mean? If your experience with 4,000 servers has been that you normally see 10 failures in a year operating at 68°F, that number would increase by two server failures out of 4,000, and you would want to weigh that exposure against the savings associated with more free cooling hours or, perhaps, with not including a chiller in the design and construction of your new data center. Likewise, if your normal experience suggests that you would expect three failures operating at 68°F 24/7, then a 20% increase in failures would take nearly two years to produce one additional failure. That is a business decision.
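That business decision is easy enough to quantify. Here is a hedged sketch of the arithmetic behind the two examples above, assuming the quoted 20% penalty applies uniformly across the year (the fleet sizes and baseline failure counts are the illustrative figures from the paragraph, not data from any particular site):

```python
# Convert a net X factor above 1.0 into additional failures per year for a fleet.

def extra_failures_per_year(baseline_failures_per_year: float, net_x: float) -> float:
    """Additional annual failures implied by a net X factor vs. the 68°F baseline."""
    return baseline_failures_per_year * (net_x - 1.0)

# 4,000-server fleet normally seeing 10 failures/year at 68°F, with a net X factor of 1.2:
print(round(extra_failures_per_year(10, 1.2), 1))        # -> 2.0 extra failures per year

# Smaller fleet normally seeing 3 failures/year: years until one additional failure appears.
print(round(1 / extra_failures_per_year(3, 1.2), 1))     # -> ~1.7 years
```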
It goes without saying, or it would except I’m saying it: Your mileage may of course vary, but it will be somewhere between good and wonderful when this exercise includes all standard best practices of airflow management. Conversely, if anyone takes this path and is disappointed with the results, the likely culprit is going to be poorly executed airflow management.

Concludes in Airflow Management Considerations for a New Data Center – Part 7: Server Acoustical Noise versus Inlet Temperature

1 “2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance,” White Paper, ASHRAE TC9.9
2 Thermal Guidelines for Data Processing Environments, 4th Edition, ASHRAE Technical Committee (TC) 9.9, Mission Critical Facilities, Data Centers, Technology Spaces, and Electronic Equipment, 2015, page 30
3 Thermal Guidelines for Data Processing Environments, 4th Edition, ASHRAE Technical Committee (TC) 9.9, Mission Critical Facilities, Data Centers, Technology Spaces, and Electronic Equipment, 2015, page 111
4 “Understanding the Relationship between Uptime and IT Intake Temperatures,” Ian Seaton, Upsite Technologies Blog, November 19, 2014, pages 3-4
5 Thermal Guidelines for Data Processing Environments, 4th Edition, ASHRAE Technical Committee (TC) 9.9, Mission Critical Facilities, Data Centers, Technology Spaces, and Electronic Equipment, 2015, page 31


Ian Seaton
Data Center Consultant
