7 Airflow Management Considerations in Building a New Data Center

by Ian Seaton | Sep 6, 2023 | Blog

If you think a discussion about some number of airflow management considerations in building a new data center is going to ring up the normal litany of plugging holes with filler panels and floor grommets, separating hot aisles from cold aisles, minimizing or eliminating bypass and recirculation, deploying variable air volume fans, intelligently locating perforated floor tiles and measuring temperature at server inlets, then you would be sorely mistaken. I do not consider any of those practices to be “considerations”; rather, those practices are what I call the minimum price of admission. None of these practices fall into the state of the art or leading edge categories of data center design, but are firmly established as best practices. By all established industry standards and guidelines, these airflow management tactics are the minimum starting point before you can start benefiting from being able to control airflow volume and temperature – the activity of airflow management, and the key to exploiting both efficiency and effectiveness opportunities in the data center.

So, if these minimum requirements are not today’s subject, what are the considerations for airflow management in building a new data center? I will unabashedly borrow from the ASHRAE handbook, Thermal Guidelines for Data Processing Environments, and its server metrics for determining the data center operating environmental envelope. ASHRAE lists seven metrics:

1. Server power versus inlet temperature

2. Server performance versus inlet temperature

3. Server cost versus inlet temperature

4. Climate data versus server inlet temperature

5. Server corrosion versus moisture levels

6. Server reliability versus inlet temperature

7. Server acoustical noise versus inlet temperature

With a little further expansion, these metrics provide a list of important data center design considerations for taking advantage of the minimum level of airflow management best practices cited in my opening paragraph. Note that the order in which these considerations are presented does not imply a degree of importance. ASHRAE presents them in a circular diagram, and their importance will vary from data center to data center and from company to company. With the project team committed to airflow management best practices, these considerations should then become part of the very initial thinking with engineering and architectural resources. (Please note that the considerations below are summarized. I have included links to more in-depth articles on each of these metrics directly after each summarized consideration.)

1. Server power versus inlet temperature

One major benefit of good airflow management is the option of running the data center at a higher temperature and perhaps even exploiting some degree of the allowable temperature maximums for different classes of servers. With good airflow management, a 75˚F supply temperature can result in a maximum server inlet temperature somewhere in the data center of 77-79˚F, whereas with poor airflow management, a 55˚F supply temperature could easily result in server inlet temperatures ranging anywhere from 77˚F up to over 90˚F. Such common extremes notwithstanding, let’s assume that poor airflow management had been compensated for with a massive oversupply of over-cooled air and that, with the implementation of good airflow management, one is ready to consider letting server inlet temperatures start to creep up, per the recommendations of most data center thermal management experts. What can we expect to see happen? It’s possible that PUE could go down while total data center energy use increases, because the server fans work harder to cool the servers with higher temperature inlet air. When the first study reporting such results was published back in 2009, many took that to mean that raising temperatures was just as bad an idea as they had always thought. However, in most cases, if the higher inlet temperatures are accompanied by higher chiller temperatures and more free cooling hours, then the increased server fan energy will be more than offset by the other operational savings. Nevertheless, it is a consideration and should be part of the planning and design process. The higher thresholds for Class A3 and A4 servers should also be part of this consideration. There are good planning graphs in the ASHRAE handbook for estimating server fan energy increases versus other mechanical plant operational savings.
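To make the tradeoff concrete, here is a minimal back-of-the-envelope sketch comparing the extra server fan energy against the mechanical plant savings at a warmer operating point. All of the load, fan, and savings figures are hypothetical placeholders; for real planning, use the ASHRAE planning graphs mentioned above together with your own equipment data.

```python
# Illustrative sketch of the tradeoff discussed above: does the extra server fan
# energy at a warmer inlet temperature outweigh the chiller/free-cooling savings?
# All figures below are hypothetical assumptions for illustration only.

IT_LOAD_KW = 500.0              # assumed IT load
BASELINE_FAN_FRACTION = 0.05    # assumed share of IT load consumed by server fans
FAN_POWER_INCREASE = 0.20       # assumed 20% fan power increase at the warmer inlet temp

BASELINE_COOLING_KW = 200.0     # assumed mechanical cooling power at the cold setpoint
COOLING_SAVINGS_FRACTION = 0.30 # assumed savings from a warmer chilled water setpoint
                                # plus additional free cooling hours

extra_fan_kw = IT_LOAD_KW * BASELINE_FAN_FRACTION * FAN_POWER_INCREASE
cooling_savings_kw = BASELINE_COOLING_KW * COOLING_SAVINGS_FRACTION
net_savings_kw = cooling_savings_kw - extra_fan_kw

print(f"Extra server fan power:   {extra_fan_kw:6.1f} kW")
print(f"Mechanical plant savings: {cooling_savings_kw:6.1f} kW")
print(f"Net facility savings:     {net_savings_kw:6.1f} kW")

# Note the PUE paradox from the paragraph above: the extra fan power lands on the
# IT (denominator) side of PUE, so PUE can fall even when total energy rises if
# the cooling savings are small.
```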

(Read more about this metric here.)

2. Server performance versus inlet temperature

Servers being designed and shipped today are much more thermally robust than recent legacy servers, particularly with the advent of Class A3 and Class A4 servers. In the recent past, as servers became equipped with variable speed fans and onboard thermal management, they contained the intelligence to respond to excessive temperatures by slowing down performance. Unfortunately, if energy savings features are disabled, as they frequently are, this self-preservation tactic will likely not function. Conversely, today there are some server OEMs who are essentially only delivering A3 servers (with safe operation up to a 104˚F inlet), and an A2 server, with allowable operation up to 95˚F, is for all practical purposes the lowest rated server available on the market. So if a new data center is being equipped with new IT equipment, this is a more straightforward consideration. However, if legacy equipment will be moved into the new space, it will be important to contact the vendors to learn where the performance temperature thresholds might be for different equipment.
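One practical way to act on this consideration is a simple inventory check that flags any equipment whose allowable inlet maximum falls below the planned worst-case inlet temperature. The sketch below uses the class limits cited in this article; the inventory entries and the planned design temperature are hypothetical.

```python
# Allowable maximum inlet temperatures (°F) for the ASHRAE classes cited in this
# article: A1 up to 90°F, A2 up to 95°F, A3 up to 104°F, A4 up to 113°F.
ALLOWABLE_MAX_INLET_F = {"A1": 90, "A2": 95, "A3": 104, "A4": 113}

# Hypothetical equipment inventory: (name, ASHRAE class). For legacy gear whose
# class is unknown, the article's advice applies: ask the vendor.
inventory = [
    ("web-server-rack-01", "A3"),
    ("legacy-db-cluster", "A1"),
    ("storage-array-07", "A2"),
]

PLANNED_WORST_CASE_INLET_F = 95  # assumed design maximum inlet temperature

for name, ashrae_class in inventory:
    limit = ALLOWABLE_MAX_INLET_F[ashrae_class]
    status = "OK" if PLANNED_WORST_CASE_INLET_F <= limit else "AT RISK"
    print(f"{name:20s} class {ashrae_class}: allowable {limit}°F -> {status}")
```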

(Read more about this metric here.)

3. Server cost versus inlet temperature

Server cost versus inlet temperature is a more straightforward consideration. Class A4 servers, with allowable inlet temperatures up to 113˚F, will command some degree of price premium. Class A3 servers, with allowable temperatures up to 104˚F, may carry a premium from some OEMs and be the standard offering from others, while Class A2 servers, with allowable inlet temperatures up to 95˚F, will be the standard base offering from everybody else. Today, Class A1 servers, with allowable inlets up to 90˚F, can be found at flea markets and on data center floors with budget-stretched technology refresh cycles. These costs then need to be weighed against the resultant savings from chiller efficiency and increased access to free cooling hours and, with Class A4 servers, the possible elimination of both the capital and operating expense of any kind of chiller or mechanical refrigerant cooling plant.
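A hedged sketch of how that weighing might look is shown below, with entirely hypothetical hardware premiums and cooling savings; the point is the structure of the comparison, not the numbers.

```python
# Illustrative comparison of a Class A4 hardware premium against the cooling
# capital and operating expense it might eliminate. All numbers are assumptions.

SERVER_COUNT = 1000
A4_PREMIUM_PER_SERVER = 150.0        # assumed price premium over a Class A2 server
CHILLER_CAPEX_AVOIDED = 400_000.0    # assumed capital cost of the chiller plant avoided
ANNUAL_COOLING_OPEX_SAVED = 60_000.0 # assumed yearly mechanical cooling energy savings
EVALUATION_YEARS = 5                 # assumed refresh / evaluation horizon

premium = SERVER_COUNT * A4_PREMIUM_PER_SERVER
savings = CHILLER_CAPEX_AVOIDED + ANNUAL_COOLING_OPEX_SAVED * EVALUATION_YEARS

print(f"A4 premium:                   ${premium:,.0f}")
print(f"Cooling capex + opex avoided: ${savings:,.0f}")
print(f"Net benefit of specifying A4: ${savings - premium:,.0f}")
```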

(Read more about this metric here.)

4. Climate data versus server inlet temperature

The consideration of climate data could affect everything from selecting a location for the new data center, to determining which class of servers would produce the most beneficial total cost of ownership, to deciding whether any kind of chiller or mechanical refrigerant cooling plant can be eliminated from the design. Hourly climate data can be purchased directly from NOAA as ASCII text files or from Weatherbank in user-friendly Excel format. Five years of data is probably sufficient, though I typically see the more risk-averse data center population prefer a ten-year data set. Regardless, it is a small investment if it allows a risk-averse organization to eliminate a chiller plant or to require Class A4 servers, with resultant millions of dollars in operational savings. Another aspect of this climate data consideration relates to design options for water-side economization. For example, if there are long periods with average wet bulb temperatures within the range of the economizer approach temperature, a series water-side economizer might make more sense than a parallel economizer, both to take advantage of partial economization hours and to eliminate the wear and tear of frequent shut-downs and start-ups.
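As one illustration of how that climate data gets used, the sketch below counts full and partial water-side economization hours from an hourly weather file. The file name, column name, approach temperature, and supply water target are all assumptions; substitute the values from your own NOAA or Weatherbank data and your economizer design.

```python
import csv

# Count full and partial water-side economization hours from hourly climate data.
# "hourly_weather.csv" and its column name are hypothetical placeholders for the
# NOAA or Weatherbank data discussed above.

APPROACH_TEMP_F = 7.0        # assumed cooling tower / heat exchanger approach
SUPPLY_WATER_TEMP_F = 65.0   # assumed cooling water supply target
PARTIAL_BAND_F = 10.0        # assumed band above the full-economization limit
                             # where a series economizer still offsets some load

full_hours = partial_hours = 0
with open("hourly_weather.csv", newline="") as f:
    for row in csv.DictReader(f):
        wet_bulb_f = float(row["wet_bulb_f"])
        if wet_bulb_f + APPROACH_TEMP_F <= SUPPLY_WATER_TEMP_F:
            full_hours += 1
        elif wet_bulb_f + APPROACH_TEMP_F <= SUPPLY_WATER_TEMP_F + PARTIAL_BAND_F:
            partial_hours += 1

print(f"Full water-side economization hours: {full_hours}")
print(f"Partial (series economizer) hours:   {partial_hours}")
```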

(Read more about this metric here.)

5. Server corrosion versus moisture levels

The consideration of corrosion on server components and printed circuit boards was in the process of being dismissed as a serious threat: ASHRAE had raised allowable relative humidity thresholds to 85% and 90% for different server classes and set 75˚F as a maximum dew point, and OEMs had almost universally expanded their humidity envelopes, when all this enthusiasm crashed into regulatory obstacles requiring that lead-based solders be replaced by silver-based solders for PCB component attachment. Silver, in the presence of high humidity, is reactive when exposed to gaseous contaminants such as hydrogen sulfide, chlorine, hydrogen chloride, and sulfur dioxide. Because of this risk, the folks at ASHRAE TC9.9 back-pedaled significantly from promoting the allowable limits and stressed the advantages of living within the recommended humidity envelope, i.e., a 60% maximum RH. At this level, as the RH of the air traveling through the server drops while it picks up heat from the server, the hazards associated with humidity continue to diminish. Historically, condensation has been the greatest fear associated with higher moisture levels, but by operating the data center at higher temperatures, as discussed above, ambient and inlet temperatures can be maintained well above the dew point. In conjunction with free cooling, that threshold is easily enough maintained by mixing in higher volumes of return air; however, this feedback loop needs to be weighed against the next consideration, server reliability.
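Since the condensation concern ultimately comes down to keeping supply air and equipment surfaces above the dew point, a quick psychrometric check is useful. The sketch below uses the common Magnus approximation for dew point; the 75˚F / 60% RH operating point is an assumed example matching the recommended envelope discussed above.

```python
import math

def dew_point_f(dry_bulb_f: float, rh_percent: float) -> float:
    """Approximate dew point (°F) from dry-bulb temperature and relative humidity
    using the Magnus formula (constants a=17.62, b=243.12 °C)."""
    t_c = (dry_bulb_f - 32.0) * 5.0 / 9.0
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * t_c / (b + t_c)
    td_c = b * gamma / (a - gamma)
    return td_c * 9.0 / 5.0 + 32.0

# Assumed operating point: 75°F supply air at the 60% RH recommended maximum.
supply_temp_f = 75.0
supply_rh = 60.0
td = dew_point_f(supply_temp_f, supply_rh)
print(f"Dew point at {supply_temp_f}°F / {supply_rh}% RH: {td:.1f}°F")
print(f"Margin above dew point: {supply_temp_f - td:.1f}°F")
```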

(Read more about this metric here.)

6. Server reliability versus inlet temperature

I covered the consideration of server reliability versus inlet temperature in a previous blog, and the subject is explained in detail in the ASHRAE thermal guidelines handbook. Suffice it to say that the ITE OEMs serving on ASHRAE TC9.9 determined that they had enough historical failure information to establish a baseline at a 68˚F server inlet temperature and then agreed on negative and positive variations from that baseline at higher and lower temperatures. The ASHRAE handbook presents a case study for Chicago in which, because of the number of hours per year under 68˚F, a data center operating between 59˚F and 68˚F on free cooling, with no chiller installed or operating, would actually see server reliability improve by 3% over the 68˚F baseline. This consideration involves evaluating the already-obtained climate data and assessing the impact of various free-cooling strategies on IT equipment reliability using the X-Factor methodology presented in my previous blog and explained in detail in the ASHRAE handbook. For further clarification, consider the impact of a finding that you would experience a 20% increase in server failures if you built a data center with no mechanical cooling. What does that number actually mean? If your experience with 4000 servers has been 10 failures in a year, that number would increase by 2 server failures out of 4000, and you would want to weigh that exposure against the savings associated with more free cooling hours or, perhaps, with not including a chiller in the design and construction of your new data center.
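The sketch below illustrates both calculations in this consideration: a time-weighted relative failure rate built from temperature-bin X-factors, and the translation of a percentage change into actual failures, as in the 4000-server example above. The bin fractions and X-factor values are hypothetical illustrations, not the published ASHRAE figures.

```python
# Time-weighted X-factor: relative server failure rate vs. the 68°F baseline,
# weighted by the fraction of the year spent in each inlet-temperature bin.
# Bin fractions and X-factor values are hypothetical, not the ASHRAE figures.
temperature_bins = [
    # (fraction of annual hours, relative failure rate vs. 68°F baseline)
    (0.45, 0.90),   # cooler than baseline -> fewer failures
    (0.35, 1.00),   # near the 68°F baseline
    (0.20, 1.25),   # warmer free-cooling hours -> more failures
]

weighted_x = sum(frac * x for frac, x in temperature_bins)
print(f"Time-weighted X-factor: {weighted_x:.2f} (1.00 = 68°F baseline)")

# Translating a percentage into actual failures, per the example above:
baseline_failures_per_year = 10      # observed across 4000 servers
increase = 0.20                      # a 20% increase in the failure rate
extra = baseline_failures_per_year * increase
print(f"A 20% increase means {extra:.0f} additional failures per year.")
```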

(Read more about this metric here.)

7. Server acoustical noise versus server inlet temperature

The final consideration has to do with the effect of operating the data center at a higher server inlet temperature on fan noise produced by the servers in response to those temperatures. The ASHRAE handbook provides some general estimates for increased noise exposure at higher server inlet temperatures:

Server inlet temperature (˚F):    77       86        95        104       113
Noise increase (dB):               0       4.7       6.4       8.4       12.9

To determine whether these estimated incremental increases in sound power level are problematic, they need to be considered in terms of the total environment into which they are being introduced. If the new levels exceed allowable thresholds, triggering ear protection requirements for workers and some kind of monitoring capability, the expense of those practices needs to be weighed against the overall savings produced by the higher temperatures. Other factors to consider alongside this basic category include running redundant cooling units simultaneously at reduced speeds, which yields both affinity law energy savings and lower overall noise levels; packaged economizer units mounted outside the data center on the roof or adjacent pads, which keep the air movement machinery noise out of the room; and weighing the cost premium for Class A4 servers against the lower resulting sound power level penalty.
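As a rough screening step, the sketch below applies the tabulated noise increases to an assumed existing room sound level and flags any combination that crosses an assumed hearing protection threshold. Treating the increment as a uniform shift in room level is a simplification, and both the baseline level and the threshold are placeholders; check the applicable occupational noise regulations.

```python
# Estimated noise increases (dB) at higher inlet temperatures, from the table above.
NOISE_INCREASE_DB = {77: 0.0, 86: 4.7, 95: 6.4, 104: 8.4, 113: 12.9}

BASELINE_ROOM_LEVEL_DB = 78.0   # assumed existing sound level near the racks
EXPOSURE_THRESHOLD_DB = 85.0    # assumed threshold triggering hearing protection
                                # (verify against the applicable regulation)

for inlet_f, delta_db in NOISE_INCREASE_DB.items():
    new_level = BASELINE_ROOM_LEVEL_DB + delta_db  # simplified uniform shift
    flag = "hearing protection likely required" if new_level >= EXPOSURE_THRESHOLD_DB else "ok"
    print(f"{inlet_f}°F inlet: ~{new_level:.1f} dB -> {flag}")
```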

(Read more about this metric here.)

As you can see, even when we regard all those practices that come to mind when we hear “airflow management” as minimum standards rather than design considerations, there is still plenty to consider in terms of how to reap the benefits of airflow management best practices. After all, while airflow management can improve the effectiveness of the data center mechanical plant by eliminating or greatly reducing hot spots, airflow management on its own will not necessarily make the data center more efficient; rather, it enables all those practices that lead to hyper-efficiency. Finally, this discussion has focused on servers, and there will be other equipment in the data center, such as storage and communications routing gear, that may or may not have different environmental requirements. The methodology described here still applies, though some of the baselines and thresholds may be different.


Ian Seaton
Data Center Consultant
