Airflow Management Considerations for a New Data Center – Part 4: Climate Data vs Server Inlet Temperature

by Ian Seaton | Jun 14, 2017 | Blog

[This continues from Airflow Management Considerations for a New Data Center – Part 3: Server Cost vs Inlet Temperature]

In case you missed the first three parts of this seven-part series, I will take just a moment to clarify that this will not be a discussion on the criticality of plugging holes with filler panels and floor grommets, separating hot aisles from cold aisles, minimizing or eliminating bypass and recirculation, deploying variable air volume fans, intelligently locating perforated floor tiles and measuring temperature at server inlets. I do not consider any of those practices to be “considerations”; rather, those practices are what I call the minimum price of admission. None of these practices fall into state of the art or leading edge categories of data center design, but are firmly established as best practices. By all established industry standards and guidelines, these airflow management tactics are the minimum starting point before you can start benefiting from being able to control airflow volume and temperature – the activity of airflow management, and the key to exploiting both efficiency and effectiveness opportunities in the data center.

Airflow management considerations will inform the degree to which we can take advantage of our excellent airflow management practices to drive down the operating cost of our data center. In my first installment of this seven-part series, I explored the question of server power versus server inlet temperature, presenting a methodology for assessing the trade-off of mechanical plant energy savings versus increased server fan energy at higher temperatures. I suggested that for most applications, a data center could be allowed to encroach into much higher temperature ranges than many industry practitioners might have thought before server fan energy penalties reverse the savings trend. In the second piece, I presented data from two well-conceived and well-executed experimental research projects that suggest data centers can run hotter than otherwise necessary without adversely affecting server operation throughput. In the previous piece, I suggested price premiums for servers that effectively operate at higher temperatures may not be that significant and appear to be trending toward equilibrium. Today we will look at the role of climate data in the overall subject of raising server inlet temperatures.

While climate can affect condenser efficiency, cooling tower performance, and external thermal loads, when these factors are considered in association with a standard mechanical plant, the savings resulting from the efficiency gains of a more hospitable climate profile are going to be incremental. The big payoffs from climate considerations are going to be associated with free cooling and the possible resultant elimination of chiller plants or any refrigerant cooling.

The consideration of climate data could affect everything from selecting a location for building the new data center, to determining what class of servers would produce the most beneficial total cost of ownership, to whether any kind of chiller or mechanical refrigerant cooling plant can be eliminated from the design plan. Hourly climate data can be purchased directly from NOAA as ASCII text files or from Weatherbank in user-friendly Excel format. Five years of data is probably sufficient, though I typically see the risk-averse data center population prefer a ten-year database. Regardless, it is a small investment if it leads to a well-justified elimination of a chiller plant or a decision to require Class A4 servers, with the resultant millions of dollars in operational savings. Another aspect of this climate data consideration relates to design options for water-side economization. For example, if there are long periods with average wet bulb temperatures within range of the economizer approach temperature, a series water-side economizer might make more sense than a parallel economizer, both to take advantage of partial economization hours (when ambient is higher than supply but still lower than return) and to eliminate the wear and tear of frequent shut-downs and start-ups.
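To make that screening concrete, here is a minimal sketch, in Python, of how purchased hourly wet bulb data might be tallied into full and partial water-side economization hours. The file name, column name, supply and return water temperatures, and approach temperature are all assumptions to be replaced with your own design values.

```python
import csv

# Hypothetical design assumptions -- substitute your own plant values.
SUPPLY_WATER_F = 65.0   # chilled/process water supply setpoint (deg F)
RETURN_WATER_F = 80.0   # water temperature returning from the data center (deg F)
APPROACH_F = 7.0        # tower/heat exchanger approach to ambient wet bulb (deg F)

full_hours = 0      # hours the economizer alone can meet the supply setpoint
partial_hours = 0   # hours the economizer can only pre-cool (above supply, below return)

# Assumes an hourly climate file with a "wet_bulb_f" column; the file name and
# column name are placeholders for however your NOAA or Weatherbank export is organized.
with open("hourly_climate.csv", newline="") as f:
    for row in csv.DictReader(f):
        tower_leaving_f = float(row["wet_bulb_f"]) + APPROACH_F
        if tower_leaving_f <= SUPPLY_WATER_F:
            full_hours += 1
        elif tower_leaving_f < RETURN_WATER_F:
            partial_hours += 1

print(f"Full water-side economization hours:  {full_hours}")
print(f"Partial (series) economization hours: {partial_hours}")
```

A site with many partial hours relative to full hours is the sort of profile that would favor a series rather than a parallel water-side economizer.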

The purchased hourly temperature data should be organized in such a way that it can be easily binned to sum the total number of annual hours under any specified temperature. In my previous blog, I cited a Dell paper that reported on their research on climate data around the world in support of their new environmental operating specification for servers. The research supports allowing normal operation within the ASHRAE Class A2 allowable envelope, 10% of the year within the Class A3 allowable envelope, and 1% of the year within the Class A4 allowable envelope. They found that with these specifications, data centers would not need chillers in 90% of the United States, Europe and Asia.1 I am not aware of other server OEMs adopting this specification methodology yet, but the exercise of breaking the hourly temperature data into these buckets can still be worthwhile. For example, among the list of servers I surveyed in my previous blog, there were quite a few that were labeled as Class A2 compliant but capable of operating in the A3 range (up to 104˚F) with some degraded performance, or were labeled as Class A3 compliant but capable of operating in the A4 range (up to 113˚F) with some degraded performance.2 The temperature data I have collected in Table 1 from a sampling of cities with high data center activity provides examples of why these distinctions could be important.
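Before turning to the table, here is a minimal sketch of how that excursion rule might be tested against a site's binned hours. The band fractions are placeholders, and the function reflects my simplified reading of the Dell specification rather than any vendor-published tool.

```python
# Placeholder fractions of annual hours in each band for a candidate site;
# substitute the values produced by your own hourly climate analysis.
site = {"rec_or_a2": 0.93, "a3_excursion": 0.06, "a4_excursion": 0.01, "above_a4": 0.0}

def chillerless_candidate(bands: dict) -> bool:
    """Simplified reading of the Dell-style rule: at most 10% of the year in the
    A3 band, at most 1% in the A4 band, and no hours above the A4 maximum."""
    return (bands["a3_excursion"] <= 0.10
            and bands["a4_excursion"] <= 0.01
            and bands["above_a4"] == 0.0)

print("Chiller-free candidate site:", chillerless_candidate(site))
```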

Table 1: Percentage of Year Free Cooling for Different ASHRAE Server Classes

Free Cooling Annual Hours (percent of year)

| Data Center Location | Rec (DB) | A2 (DB) | A3 (DB) | A4 (DB) | Hot (DB) | Rec (WB) | A2 (WB) | A3 (WB) | A4 (WB) | Hot (WB) |
|----------------------|----------|---------|---------|---------|----------|----------|---------|---------|---------|----------|
| Atlanta              | 83%      | 16%     | 1%      | 0       | 0        | 66%      | 33%     | 1%      | 0       | 0        |
| Chicago              | 92%      | 8%      | 0       | 0       | 0        | 81%      | 18%     | 1%      | 0       | 0        |
| Dallas               | 76%      | 22%     | 2%      | 0       | 0        | 53%      | 46%     | 1%      | 0       | 0        |
| Denver               | 91%      | 18%     | 1%      | 0       | 0        | 99%      | 1%      | 0       | 0       | 0        |
| New York             | 95%      | 5%      | 0       | 0       | 0        | 80%      | 18%     | 1%      | 0       | 0        |
| Phoenix              | 53%      | 29%     | 12%     | 5%      | 1%       | 76%      | 24%     | 0       | 0       | 0        |
| Raleigh              | 82%      | 17%     | 1%      | 0       | 0        | 66%      | 33%     | 1%      | 0       | 0        |
| San Jose             | 94%      | 5%      | 1%      | 0       | 0        | 96%      | 4%      | 0       | 0       | 0        |
| Seattle              | 98%      | 1%      | 1%      | 0       | 0        | 99%      | 1%      | 0       | 0       | 0        |
| Washington DC        | 86%      | 13%     | 1%      | 0       | 0        | 75%      | 24%     | 1%      | 0       | 0        |

DB = Dry Bulb Access; WB = Wet Bulb Access.

Just to make sure we’re all on the same page, the percent of annual hours in the Rec column of Table 1 refers to the ASHRAE recommended envelope, with a maximum server inlet temperature of 80.6˚F. I am not concerned with the minimums because any free cooling system will include a means of mixing return air with supply air to maintain some specified minimum temperature. The A2, A3, and A4 columns accumulate the percentage of hours above the previous column’s maximum and at or below 95˚F, 104˚F and 113˚F, respectively. The “Hot” column accounts for the part of the year that exceeds the highest allowable temperature of 113˚F. The wet bulb columns assume the same data center supply temperatures, minus the difference between the air temperature and the supply chilled water temperature, minus the approach temperature to the wet bulb. As we can see with these sample cities, data centers everywhere can deploy standard Dell servers without any mechanical cooling, except when air-side economization is employed in Phoenix. In addition, in all the other cities except Dallas, any of the Class A2 servers from other vendors capable of operating in the Class A3 band (95˚ – 104˚F) with some degraded performance are only going to see that performance degradation for around 80 hours a year or less. Depending on the business purpose of the data center, that minimal exposure may be a reasonable trade-off for avoiding both the capital and operating expenses of a chiller plant.

Furthermore, the information on climate data and server classes can bear on more than just server procurement decisions. For example, if it is determined that with a certain class of server, mechanical cooling is only needed for 100-200 hours a year, then it might make sense to weigh the capital savings of some form of DX cooling against chilled water cooling, because the operational savings accrued over such a small portion of the year might stretch the payback period of the chilled water system beyond the planned life of the data center.
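For readers who want to reproduce the dry bulb side of Table 1 from their own purchased hourly data, a minimal binning sketch follows. The file and column names are placeholders, and the thresholds are simply the ASHRAE maximums described above.

```python
import csv
from collections import Counter

# Upper dry bulb limits (deg F) for the Table 1 buckets, in ascending order.
BANDS = [("Rec", 80.6), ("A2", 95.0), ("A3", 104.0), ("A4", 113.0)]

def classify(dry_bulb_f: float) -> str:
    """Return the first band whose maximum the hour falls under, else 'Hot'."""
    for name, upper in BANDS:
        if dry_bulb_f <= upper:
            return name
    return "Hot"

counts = Counter()
# Placeholder file and column names for the purchased hourly climate data.
with open("hourly_climate.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[classify(float(row["dry_bulb_f"]))] += 1

total = sum(counts.values())
for band in ("Rec", "A2", "A3", "A4", "Hot"):
    print(f"{band:>3}: {100.0 * counts[band] / total:5.1f}% of annual hours")
```

The wet bulb columns can be binned the same way once each hour is adjusted by the chilled water delta and approach temperature described above.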

Finally, after you have broken out your credit card, bought however many years of hourly temperature data for your planned data center site, and analyzed that data in terms of the server class temperature buckets and associated free cooling options, don’t lose the data. You will need it again when we discuss temperature and server reliability in the final installment of this series. Meanwhile, coming up in Part 5 later this month, we will take a short departure from temperature and review some of the latest findings on the effect of humidity and the potential for corrosive damage to our computer equipment. In closing, for simplicity’s sake, all of today’s discussion of climate data versus server inlet temperature equated supply temperature with server inlet temperature. That equality will seldom exist in reality, so the temperature buckets for your analysis should be adjusted accordingly. If all the best practices of airflow management we have discussed are in force – plugging all the holes and maintaining a strict separation between hot and cold areas of the data center – then that difference between supply and inlet temperature should only be a degree or two. If those disciplines are not in place and you have already given your credit card number to NOAA or Weatherbank, then you have put the cart before the horse.

Continues in Airflow Management Considerations for a New Data Center – Part 5: Server Corrosion versus Moisture Levels

1. Jon Fitch, “Dell’s Next Generation Servers: Pushing the Limits of Data Center Cooling Cost Savings,” Dell Enterprise Reliability Engineering white paper, February 2012, page 4.
2. Ian Seaton, “Airflow Management Considerations for a New Data Center, Part 3: Server Cost versus Inlet Temperature,” www.upsite.com/blog, May 31, 2017.
