Solving Special Data Center Containment Obstacles

by Ian Seaton | Sep 20, 2017 | Blog

Every data center standard identifies airflow containment as a fundamental best practice, and it is even mandated by some state and municipal building and energy codes. Nevertheless, depending on who you talk to, as few as 50% of data centers have adopted airflow containment best practices. Realistically, I would say somewhere around one-third of our industry remains a holdout from realizing the efficiency and performance benefits of containment. The question remains: why?

Indeed, there are a variety of reasons why some data centers still operate without good airflow separation. “It costs too much” and “it doesn’t really work” are two of the wrong answers: airflow containment will always provide an environment that can support higher power densities and lower energy costs, and it will always pay for itself quickly. So while those may be excuses we hear occasionally, they do not hold up, and I suspect that more often than not they are euphemisms for discomfort with the unfamiliar or with the effort involved. Nevertheless, there are some real obstacles to data center airflow containment, and one of the more frequently mentioned stems from the realization that containment does not save energy and money by itself; rather, containment enables more effective use of the mechanical plant, resulting in energy savings from reductions in airflow volume and increases in supply temperature. With this understanding, it is not unreasonable to hear a response to a containment proposal go something like, “That’s all well and good for some folks, but I don’t have variable speed fans on my CRAHs (or I have DX CRACs which prevent me from raising temperatures and reducing airflow).” The obvious takeaway is that all our fluid movers and air movers should be on variable speed control, and we want chilled water that can effectively remove heat at higher temperatures and avoid freezing coils at lower airflow volumes. The less obvious takeaway is that DX does not need to be a show stopper, particularly with newer equipment.
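To put some rough numbers on why variable speed control matters, fan power falls off roughly with the cube of fan speed, so the airflow reductions that containment enables compound into outsized fan energy savings. The short sketch below is a back-of-the-envelope illustration only; the 10 kW nominal fan power is an assumed figure, not one drawn from any particular cooling unit.

```python
# Back-of-the-envelope fan affinity estimate (illustrative assumptions only).
# Fan power scales roughly with the cube of fan speed.

NOMINAL_FAN_KW = 10.0      # assumed full-speed fan power for one cooling unit, kW
HOURS_PER_YEAR = 8760

for speed in (1.0, 0.9, 0.8, 0.7, 0.6):
    power_kw = NOMINAL_FAN_KW * speed ** 3
    annual_kwh = power_kw * HOURS_PER_YEAR
    print(f"{speed:.0%} speed -> {power_kw:5.2f} kW fan power, {annual_kwh:8.0f} kWh/yr")
```

Even the modest step from 100% to 80% speed roughly halves fan power in this model, which is why the savings figures discussed below add up so quickly.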

DX is frequently integrated into economizer systems, particularly indirect solutions such as indirect evaporative cooling and air-to-air heat exchangers, because of its lower acquisition cost and because the mechanical plant contributes little to overall operating cost when containment in the data center is excellent and the climate supports a high proportion of “free” cooling hours.

More importantly, for existing data centers where management may feel painted into a corner, there are options for upgrading and retrofitting existing DX equipment to still exploit a worthwhile portion of airflow containment’s promise. A study conducted by the Electric Power Research Institute (EPRI) for the California Energy Commission a number of years ago found that DX CRACs could be retrofitted with variable volume fans and effectively ratcheted down to 60% of nominal airflow capacity without freezing coils, while delivering significant energy savings.1 In their own small data center with two CRACs, one of which was intended for redundancy, they achieved 86,000 kWh of savings operating fans at 90%, 93,000 kWh at 80%, 126,000 kWh at 70%, and 152,000 kWh at 60%. They also cited a production data center in Austin that had planned on a 19-month payback for retrofitting variable air volume fans on all of its CRAC units and actually achieved a six-month payback. The EPRI report is worth finding and studying for the methodology it provides for assessing whether a data center is a candidate for a good ROI and quick payback from retrofitting DX CRAC units with variable airflow capabilities.
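For a rough sense of how savings like those translate into payback, the sketch below applies an assumed electricity rate and an assumed retrofit cost to the kWh figures cited above; both dollar inputs are hypothetical placeholders rather than values from the EPRI report.

```python
# Simple payback estimate from annual kWh savings.
# The electricity rate and retrofit cost are assumed inputs, not EPRI figures.

ELECTRICITY_RATE = 0.10    # $/kWh, assumed utility rate
RETROFIT_COST = 15_000.0   # $, assumed installed cost of variable-volume fan kits

# kWh savings by fan speed, from the two-CRAC example cited above.
annual_kwh_savings = {0.9: 86_000, 0.8: 93_000, 0.7: 126_000, 0.6: 152_000}

for speed, kwh in annual_kwh_savings.items():
    dollars_per_year = kwh * ELECTRICITY_RATE
    payback_months = RETROFIT_COST / dollars_per_year * 12
    print(f"{speed:.0%} fan speed: ${dollars_per_year:,.0f}/yr saved, payback ~ {payback_months:.1f} months")
```

Plugging in your own utility rate and a vendor quote for the fan kits turns this into a five-minute screening calculation before commissioning a full assessment.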

The fifteen-step process provides guidance on obtaining and analyzing each data point, but its screening criteria can be summarized as:

  1. The data center has an average ΔT between supply and return < 15˚F
  2. Compressor(s) run status average is < 80%
  3. Compressor run time is < 80%
  4. Valve average open position is < 80%
  5. Good containment maintains minimal IT equipment inlet temperature2

Criteria 1-4 are “or” rather than “and” requirements, while criterion 5 is mandatory in combination with any of the previous four; conveniently, containment can be retrofitted in almost any situation and, in fact, has its payback justified by the fan retrofit that responds to any of the other criteria. Furthermore, precision cooling vendors have made it easier for data center operators to reap the economic benefits of containment and general airflow management best practices with digital scroll compressors and EC compressor technology.
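For readers who want to encode that screening logic, here is a minimal sketch of the “or”/“and” relationship among the five criteria; the function and argument names are illustrative, and the thresholds are simply those from the list above.

```python
# Screening sketch for a DX CRAC variable-airflow retrofit candidate.
# Criteria 1-4 are alternatives ("or"); good containment (criterion 5) is required ("and").

def is_retrofit_candidate(avg_delta_t_f, compressor_status_pct,
                          compressor_runtime_pct, valve_open_pct,
                          has_good_containment):
    mechanical_headroom = (
        avg_delta_t_f < 15              # 1: average supply/return delta-T under 15 F
        or compressor_status_pct < 80   # 2: compressor run status average under 80%
        or compressor_runtime_pct < 80  # 3: compressor run time under 80%
        or valve_open_pct < 80          # 4: average valve open position under 80%
    )
    return mechanical_headroom and has_good_containment

# A low supply/return delta-T plus good containment qualifies:
print(is_retrofit_candidate(12, 85, 85, 85, True))   # True
# No headroom on any criterion means no candidate, even with containment:
print(is_retrofit_candidate(18, 85, 85, 85, True))   # False
```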

Temperature limitations of DX cooling units do not need to be an obstacle to exploiting the additional free cooling hours that higher supply temperatures make accessible. I have addressed this key point in numerous previous blogs and white papers, but will summarize the proposition here because it bears directly on the question of higher supply temperatures. The key temperature metric in a data center is the difference between the supply temperature and the highest IT equipment inlet temperature. When we talk about raising temperatures in the data center, we do not have to mean higher temperatures at the IT equipment (though an earlier seven-part series clearly showed those temperatures could be much higher than most conventional wisdom would allow); rather, we can raise supply temperatures without raising IT inlet temperatures through good airflow containment practices. A legacy hot aisle/cold aisle data center might exhibit more than a 20˚F difference between supply and highest IT inlet, whereas that ΔT can be 2-3˚F with good containment and perhaps 5-10˚F with well-executed partial containment. Most DX CRACs may object to a 90˚F return set point that results in a 75˚F supply. However, an economizer can be set up to deliver, for example, 75˚F supply when the associated containment guarantees a maximum of 78˚F at the IT equipment inlets at that supply temperature. When ambient conditions drive the supply temperature above that set point, the precision cooling units can kick in at whatever lower supply temperature they are designed for. Dramatic transitions between the two systems can be problematic, particularly for the rate-of-change temperature requirements of spinning media, but they can be mitigated by stepped cycling of the cooling units on and off. Depending on the data center location, managing two separate control points in conjunction with excellent airflow containment can yield an extra 1,000-2,000 hours per year of free cooling.
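One way to visualize the two control points described above is as a simple selection rule: derive the economizer supply set point from the maximum allowable IT inlet temperature minus the containment-dependent temperature rise, and stage the precision cooling units in when the economizer can no longer hold that supply. The sketch below uses the 78˚F inlet limit and 3˚F containment delta from the example above, but the function and the staging logic are simplified illustrations, not a vendor control sequence.

```python
# Conceptual selection between economizer and DX cooling (simplified illustration).

MAX_IT_INLET_F = 78.0        # maximum allowable IT equipment inlet temperature
CONTAINMENT_DELTA_F = 3.0    # supply-to-worst-inlet rise achievable with good containment

ECONOMIZER_SUPPLY_SETPOINT_F = MAX_IT_INLET_F - CONTAINMENT_DELTA_F   # 75 F

def cooling_mode(achievable_economizer_supply_f):
    """Pick a cooling mode from the supply temperature the economizer can deliver right now."""
    if achievable_economizer_supply_f <= ECONOMIZER_SUPPLY_SETPOINT_F:
        return "economizer"            # free cooling keeps IT inlets at or below 78 F
    return "dx_precision_cooling"      # DX units stage in at their own, lower supply set point

print(cooling_mode(70.0))   # economizer
print(cooling_mode(80.0))   # dx_precision_cooling
```

Note that the better the containment, the smaller CONTAINMENT_DELTA_F can be, which pushes the economizer set point higher and adds free cooling hours.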

Single-speed fans on chilled water CRAHs are also cited as an obstacle to exploiting the benefits of containment and the associated reduction in real airflow demand. First, the access to energy savings from higher supply temperatures will be much more straightforward on these units than on DX: the higher temperatures will provide access to more free cooling hours and, when free cooling is not available, will dramatically reduce chiller plant operating expense. In addition, the process of retrofitting these units with variable air volume kits is much more straightforward. EC fan retrofit kits will typically have a one-and-a-half to two-year payback. When the total project also includes containment acquisition and installation, that payback may stretch to two to two-and-a-half years. That timeline may stretch the vision of some accountants, but it still banks a lot of cash over the 15 +/- year life of a data center.
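To see why a two to two-and-a-half year payback still banks a lot of cash over a 15-year facility life, the arithmetic below runs one hypothetical scenario; the annual savings and project cost figures are assumptions chosen only to land in that payback range.

```python
# Illustrative cumulative-savings view for an EC fan retrofit plus containment project.

ANNUAL_SAVINGS = 40_000.0    # $/yr, assumed combined fan energy and chiller savings
PROJECT_COST = 90_000.0      # $, assumed EC fan kits plus containment acquisition and installation
FACILITY_LIFE_YEARS = 15     # nominal data center life from the paragraph above

payback_years = PROJECT_COST / ANNUAL_SAVINGS
net_lifetime_savings = ANNUAL_SAVINGS * FACILITY_LIFE_YEARS - PROJECT_COST
print(f"Simple payback: {payback_years:.2f} years")                                   # 2.25 years
print(f"Net savings over {FACILITY_LIFE_YEARS} years: ${net_lifetime_savings:,.0f}")  # $510,000
```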

Mechanical obstacles such as single-speed cooling unit fans and DX cooling units have been cited as inhibitors to realizing the economic and performance benefits of data center airflow containment. While this equipment may require a little more creativity, it does not mean a data center so equipped is obligated to suffer through sub-optimum efficiency.

Whether you are running DX CRACs or CRAHs, or contending with other mechanical hindrances, the goal should always be to apply airflow management best practices to reduce costs and improve data center performance.

Ian Seaton

Data Center Consultant
