Airflow Management Considerations for a New Data Center – Part 3: Server Cost vs Inlet Temperature

by Ian Seaton | May 31, 2017 | Blog

[This continues from Airflow Management Considerations for a New Data Center – Part 2: Server Performance versus Inlet Temperature]

In case you missed the first two parts of this seven-part series, I will take a moment to clarify that this will not be a discussion of the criticality of plugging holes with filler panels and floor grommets, separating hot aisles from cold aisles, minimizing or eliminating bypass and recirculation, deploying variable air volume fans, intelligently locating perforated floor tiles, or measuring temperature at server inlets. I do not consider any of those practices to be “considerations”; rather, they are what I call the minimum price of admission. None of these practices falls into the state-of-the-art or leading-edge category of data center design; they are firmly established best practices. By all established industry standards and guidelines, these airflow management tactics are the minimum starting point before you can benefit from being able to control airflow volume and temperature – the activity of airflow management – and the key to exploiting both efficiency and effectiveness opportunities in the data center.

Airflow management considerations will inform the degree to which we can take advantage of our excellent airflow management practices to drive down the operating cost of our data center. In part one of this seven-part series (drawing on ASHRAE’s server metrics for determining a data center operating envelope), I explored the question of server power versus server inlet temperature, presenting a methodology for assessing the trade-off of mechanical plant energy savings versus increased server fan energy at higher temperatures. I suggested that for most applications, a data center could be allowed to encroach into much higher temperature ranges than many industry practitioners might have thought before server fan energy penalties reverse the savings trend. In part two, I presented data from two well-conceived and well-executed experimental research projects suggesting that data centers can run hotter than traditional practice dictates without adversely affecting server throughput. Today we will look at the next question: How much more am I going to have to pay for my servers if I am planning to feed them this hotter air?
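The trade-off described above can be sketched numerically. The model below is a minimal illustration, not the methodology from part one: it assumes server fan power follows the cube-law fan affinity relationship, that fan speed rises linearly with inlet temperature, and that chiller efficiency (COP) improves linearly as supply temperature rises. Every coefficient is a hypothetical placeholder, not an ASHRAE or vendor figure.

```python
# Illustrative trade-off: chiller energy savings vs. server fan energy
# penalty as inlet temperature rises. All coefficients are hypothetical.

def fan_power_w(inlet_f, base_power_w=20.0, base_temp_f=77.0, pct_speed_per_f=1.5):
    """Server fan power via the cube-law fan affinity relationship:
    power scales with the cube of fan speed, and speed is assumed to
    rise linearly with inlet temperature above the baseline."""
    speed_ratio = 1.0 + max(0.0, inlet_f - base_temp_f) * pct_speed_per_f / 100.0
    return base_power_w * speed_ratio ** 3

def chiller_power_w(inlet_f, it_load_w=500.0, base_cop=4.0, cop_gain_per_f=0.05):
    """Chiller power for a fixed IT load; COP is assumed to improve
    linearly as the supply (inlet) temperature rises."""
    cop = base_cop + max(0.0, inlet_f - 77.0) * cop_gain_per_f
    return it_load_w / cop

for t in (77, 86, 95, 104):
    total = fan_power_w(t) + chiller_power_w(t)
    print(f"{t}F inlet: fan {fan_power_w(t):6.1f} W, "
          f"chiller {chiller_power_w(t):6.1f} W, total {total:6.1f} W")
```

With these particular placeholder numbers, total power declines modestly through the mid-90s and then the cubed fan term begins to dominate – the same reversal point the part-one methodology is designed to locate with real data.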

When ASHRAE TC9.9 first established the new server temperature hierarchy (Classes A1, A2, A3 and A4), server cost versus inlet temperature was a more straightforward consideration. Class A4 servers, with allowable inlet temperatures up to 113˚F, commanded some degree of premium. Class A3 servers, with allowable temperatures up to 104˚F, were a premium from some OEMs and were becoming the standard from others, and Class A2 servers, with allowable inlet temperatures up to 95˚F, were the standard base offering from everybody else. Today, Class A1 servers, with allowable inlets up to 90˚F, can be found in flea markets and on data center floors with budget-stretched technology refresh cycles. Premiums were associated with the additional costs of optimized heat removal mechanisms and more robust components.

These costs would then need to be weighed against the resultant savings from improved chiller efficiency and increased access to free cooling hours and, with Class A4 servers, the possible elimination of both the capital and operating expense of any kind of chiller or mechanical refrigerant cooling plant. Any serious ROI studies to which I was privy at the time had paybacks of less than a year. This question is a little more interesting today, however, as more Class A3 servers have become standard offerings and the folks at Dell have applied a new twist to the question.

I have written dozens of articles over several years and have generally avoided name-dropping, but I think the approach Dell has taken is so innovative and direct-to-the-point that it bears special mention. Let me preface this observation with the caveat that I’m a mechanical guy who cannot speak with any authority on the efficacy of one vendor’s IT equipment versus another’s. I am just saying that the way Dell has chosen to frame the subject of compliance with the different ASHRAE server classes makes a lot of sense to me. Their fundamental assumption is that nobody is going to plan to operate a data center with 100˚F server inlet temperatures 24/7 all year long; rather, these allowable temperature thresholds are intended to let data center operators exploit wider ranges of temperature fluctuation with whatever free cooling option they may have deployed. In response to that basic assumption, Dell has announced that all their new servers are Class A4 compliant, within specified limits. (See Chart 1 below.) Their products are warranted to 113˚F for 1% of the year and to 104˚F for 10% of the year – annually 87.6 hours and 876 hours, respectively. According to their research,1 this definition of Class A4 compliance would allow for chiller-less data centers in 90% of the United States, Europe, and Asia.
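The excursion-hour arithmetic behind those warranty limits is worth making explicit. This small sketch just converts Dell’s stated excursion fractions into annual hour budgets, on the standard assumption of an 8,760-hour year:

```python
# Converting excursion fractions of a year into annual hour budgets,
# per Dell's excursion-based Class A4 compliance limits.

HOURS_PER_YEAR = 8760  # 365 days * 24 hours

def excursion_hours(fraction):
    """Annual hour budget for a given excursion fraction of the year."""
    return HOURS_PER_YEAR * fraction

print(f"1% of the year:  {excursion_hours(0.01):.1f} hours (up to 113F)")
print(f"10% of the year: {excursion_hours(0.10):.1f} hours (up to 104F)")
```

The point of the excursion framing is that a free-cooling site only needs its climate to stay under these temperature ceilings for all but a bounded number of hours per year, not continuously.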

To the best of my knowledge, no one else has picked up this excursion terminology to address the capability of their products in different temperature operating conditions, at least not overtly in their marketing literature. Nevertheless, in a very random and unscientific sampling of other popular server models, I found evidence that OEMs are making the migration to Class A4, thereby extending the practicality of chiller-less data centers in a wide range of locations and climate zones. I’ve summarized this sampling below in Table 1.

A quick review of the data I’ve collected leads to a couple of general observations. For many of these servers, a special fan component is required to migrate from 95˚F to 104˚F, and we can assume there is a cost premium associated with that upgrade. In addition, some higher-wattage processor options may be excluded from some models, presumably because airflow (CFM) must rise linearly with wattage to maintain the desired temperature rise (ΔT), and the fan system’s capacity may tap out before the higher wattage thresholds are reached. Less noticeable is the inherent conservatism of engineers signing off on specifications for their little babies. For example, the product documentation for the IBM x3650 M4 and x3550 M4 states very clearly that they are rated up to 104˚F only with 90 watt or smaller processors, while in my previous paper in this series I reported on extensive studies of these two servers with 130 watt processors operating at 104˚F with no degradation in performance across multiple test platforms. I do not mention this inconsistency to encourage my readers to disregard the boundaries of their server manufacturer’s documentation; I merely relate it as evidence of the conservatism of these specifications, so my readers may feel liberated to bang up against their edges with confidence.
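That linear CFM-versus-wattage relationship can be sketched with the standard sensible-heat relation for air (roughly CFM ≈ 3.16 × watts ÷ ΔT˚F at sea level). The server wattages and the fan capacity limit below are hypothetical illustrations, not figures from any vendor’s documentation:

```python
# Why higher-wattage processors can "tap out" a fan system: required
# airflow rises linearly with load at a fixed temperature rise (dT).
# Uses the standard sensible-heat relation for air at sea level:
#   CFM ~= 3.16 * watts / dT_F

def required_cfm(watts, delta_t_f):
    """Airflow (CFM) needed to remove `watts` of heat at a given
    temperature rise across the chassis, in degrees F."""
    return 3.16 * watts / delta_t_f

FAN_CAPACITY_CFM = 60.0  # hypothetical per-server fan system limit

for cpu_w in (90, 115, 130, 145):
    server_w = 200 + 2 * cpu_w        # hypothetical two-socket server load
    cfm = required_cfm(server_w, 25)  # assume a 25F rise across the chassis
    status = "within" if cfm <= FAN_CAPACITY_CFM else "EXCEEDS"
    print(f"{cpu_w} W CPUs -> {server_w} W total, {cfm:5.1f} CFM ({status} fan capacity)")
```

With these placeholder numbers, the highest-wattage option is the one that pushes required airflow past the fan system’s limit – the same mechanism presumed to drive the processor exclusions in the table.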

Table 1

| Server Model | Max Inlet Temperature | Notes/Exceptions |
|---|---|---|
| HPE ProLiant BL460c Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant BL660c Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL20 Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL60 Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL80 Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL120 Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL160 Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL180 Gen9 | 113˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL360 Gen9 | 113˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL380 Gen9 | 113˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL560 Gen9 | 113˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant DL580 Gen9 | 113˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant ML30 Gen9 | 95˚F | High-performance fan kit with some configurations w/160 watt processors |
| HPE ProLiant ML110 Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant ML150 Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant ML350 Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant XL170r Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE ProLiant XL190r Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE Apollo 4200 Gen9 | 104˚F | High-performance fan kit; exclude 130+ watt processors |
| HPE Apollo 4500 Gen9 | 95˚F | High-performance fan kit with some configurations w/160 watt processors |
| IBM x3650 M4 | 104˚F | 60-95W processor models; otherwise, 95˚F |
| IBM Model 8335-GTB | 104˚F | |
| IBM Model 8408-44E | 104˚F | |
| IBM Model 8408-E8E | 104˚F | |
| IBM System S822LC | 104˚F | |
| IBM System S812LC | 95˚F | Heavy workloads might see some performance degradation above 86˚F |
| IBM Model 8247-21L | 104˚F | |
| IBM Model 8247-22L | 104˚F | |
| IBM Model 8247-42L | 104˚F | |
| IBM Model 8284-21A | 104˚F | |
| IBM Model 8284-22A | 104˚F | |
| IBM Model 8286-41A | 104˚F | |
| IBM Model 8286-42A | 104˚F | |
| IBM x3550 M4 | 104˚F | 60-95W processors; 95˚F for 115-130W & 80.6˚F for 135W |
| IBM Power 710 | 95˚F | 104˚F allowed with degraded performance |
| IBM Power 720 | 95˚F | 104˚F allowed with degraded performance |
| IBM Power 730 | 95˚F | 104˚F allowed with degraded performance |
| IBM Power 740 | 95˚F | 104˚F allowed with degraded performance |

In the first two parts of this series, I had empirical data from engineering research projects and detailed calculations founded on conservative and defensible assumptions to assess the impact of server fan energy on total operating cost and of inlet temperature on a variety of performance benchmarks. Unfortunately, for this piece, a semi-retired mechanical guy with a 3-4 year technology refresh cycle on his laptop is not going to get a plethora of price quotations on production servers; therefore, part 3 of the series lacks my customary precision. I apologize for that. Nevertheless, I think it is clear that price premiums for the 104˚F threshold are on the way out. Finally, for those readers who prefer a server model that carries a premium for a higher temperature rating, an ROI study as outlined in Appendix C of ASHRAE’s Thermal Guidelines for Data Processing Environments, summarized in an earlier blog,2 or exemplified in the BICSI course Data Center Temperature Design Tutorial, will invariably justify the expense with ease.
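The shape of such an ROI study can be reduced to a simple payback calculation: the fleet-wide hardware premium divided by the annual cooling-energy savings it unlocks. The sketch below is a back-of-the-envelope illustration with entirely hypothetical inputs, not the Appendix C methodology itself:

```python
# Back-of-the-envelope payback for a server premium that unlocks
# cooling-energy savings (e.g., more free cooling hours, no chiller).
# All inputs are hypothetical illustrations.

def simple_payback_years(premium_per_server, servers, annual_kwh_saved, rate_per_kwh):
    """Years for annual cooling-energy savings to repay the fleet-wide
    hardware premium (simple payback, no discounting)."""
    annual_savings = annual_kwh_saved * rate_per_kwh
    return (premium_per_server * servers) / annual_savings

years = simple_payback_years(
    premium_per_server=100.0,   # hypothetical A4 upcharge per server
    servers=500,
    annual_kwh_saved=600_000,   # hypothetical savings from extra free cooling
    rate_per_kwh=0.10,
)
print(f"Simple payback: {years:.2f} years")
```

With these placeholder inputs the premium repays itself in well under a year, consistent with the sub-one-year paybacks mentioned earlier in this series; a full Appendix C study would of course also discount cash flows and account for local climate and utility rates.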

Continues in Airflow Management Considerations for a New Data Center – Part 4: Climate Data vs Server Inlet Temperature

1. “Dell’s Next Generation Servers: Pushing the Limits of Data Center Cooling Cost Savings,” Jon Fitch, Dell Enterprise Reliability Engineering white paper, February 2012, page 7. (I beg my readers’ indulgence, but I just have to admire anybody who can use the word “climatogram” seriously, and almost correctly.)
2. “Data Center Best Practices: Specifying Cooling Unit Set Points,” Ian Seaton, www.upsite.com/blog, June 22, 2016


Ian Seaton

Data Center Consultant
