Advanced Airflow Management Strategies: What to Do After Implementing Best Practices

by Ian Seaton | May 12, 2021 | Blog

Over the last few years, several of us on this blog have come at the subject of data center airflow management from a variety of directions, for both new construction projects and remediation projects. We have covered just about every possible strategy and tactic for designing and maintaining a space with effective airflow management. We have provided clear instructions for plugging every wasteful hole, in the floor, in the racks, in rows of racks, and in the overall room, along with benefits and cost justifications for all this plugging. When every hole between the cool part of the data center and the warm part of the data center has some kind of grommet, weather stripping, panel, barrier or duct securing the separation of those air masses, and you have some means of monitoring the effectiveness of that separation, are you done? It depends. You may be done in terms of preventing over-heating and eliminating waste; however, if you are looking for the most cost-effective and efficient operation of your data center, there may still be some work to be done, or, more appropriately, there may still be some opportunities to be harvested.

Beyond the elimination of waste and the prevention of over-heated computer equipment, airflow management is the great enabler of all sorts of strategies and technologies for making the data center operation more energy efficient, ergo: more cost-effective. A critical first response to effective airflow management is effective temperature management. With poor airflow management, it is necessary to keep the overall temperature below some threshold to guarantee no ICT equipment anywhere in the room will see air intake temperatures above whatever maximum threshold (or SLA) has been established, typically resulting in thermostat set points in the 69-72˚F range and supply air in the mid-50s˚F. With good airflow management, temperature control can be switched to the supply air temperature, which can be set to run continuously within a very few degrees of the maximum allowable equipment intake temperature. For example, if you have selected 78˚F as the absolute maximum temperature you want any ICT equipment to see, it is possible to set your cooling system’s supply temperature at around 72-73˚F, and even as high as 75˚F if your airflow management implementation is particularly effective. In a data center with a chiller plant supplying chilled water to precision perimeter cooling units, row-based coolers, or rear-door heat exchangers, this change alone will produce a 17-50% energy savings on the chiller operation, which equates to anywhere from 3.5% up to 11% energy savings on the total electrical consumption of the data center.
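As a rough back-of-the-envelope sketch of that arithmetic, the Python below estimates chiller and facility-level savings from raising the supply temperature. The 2%-per-degree improvement and the 22% chiller share of total load are assumed rules of thumb for illustration, not figures from any specific plant; real results depend on chiller type, condenser conditions, and load.

```python
# Back-of-the-envelope estimate of chiller savings from a higher supply temperature.
# ASSUMPTIONS (illustrative, not from the article): roughly 2% chiller energy
# savings per 1 degree F of supply/chilled water temperature increase, and a
# chiller plant that accounts for ~22% of total facility consumption.

def chiller_savings_fraction(old_supply_f: float, new_supply_f: float,
                             savings_per_degree: float = 0.02) -> float:
    """Estimated fractional chiller energy savings from raising supply temperature."""
    return min(1.0, max(0.0, (new_supply_f - old_supply_f) * savings_per_degree))

def facility_savings_fraction(chiller_savings: float, chiller_share: float = 0.22) -> float:
    """Translate chiller savings into savings on total facility consumption."""
    return chiller_savings * chiller_share

if __name__ == "__main__":
    saving = chiller_savings_fraction(55.0, 73.0)   # mid-50s supply raised to ~73 F
    print(f"Chiller energy savings:  {saving:.0%}")                             # ~36%
    print(f"Facility-level savings:  {facility_savings_fraction(saving):.1%}")  # ~7.9%
```

With those assumptions, an 18˚F increase in supply temperature lands comfortably within the 17-50% chiller and 3.5-11% facility savings ranges cited above.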

Another consideration enabled by good airflow management is potential savings in air movement fan energy. Data centers with poor airflow management frequently find cooling equipment producing more than twice the volume of chilled air that is actually consumed by the ICT equipment. A traditionally reported result of implementing good airflow management is that one or more unneeded cooling units are turned off or even decommissioned. If those cooling units are equipped with variable air volume capability, then it might make more sense to operate all the available units at some reduced volume to take advantage of the nonlinear energy savings resulting from the fan affinity laws. More importantly, if the cooling units are not equipped with variable air volume capability, then it would now make sense to explore retrofit opportunities for EC fan kits. In some instances, payback can be extremely quick, but more typically it will range from 2-3 years. While that payback period may push the limits of some businesses’ financial models, consideration should be given to the age of the cooling units and their expected life. If the cooling units are not on their last legs and marked for replacement in the near future anyway, the retrofit fan kits can actually help extend their life and continue delivering a return on investment for 10-15 years.
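To make the affinity-law point concrete, here is a minimal sketch assuming the cube-law relationship between airflow and fan power; the unit counts and airflow demand are illustrative only.

```python
# Fan affinity law sketch: fan power varies roughly with the cube of airflow.
# Compares turning surplus units off at full speed vs. running all installed
# units at reduced speed. Unit counts and airflow demand are illustrative.

def relative_fan_power(units_running: int, flow_fraction_per_unit: float) -> float:
    """Total fan power relative to a single unit at full speed (cube-law approximation)."""
    return units_running * flow_fraction_per_unit ** 3

installed_units = 4
required_flow = 2.0   # total airflow demand, expressed in "full-unit" equivalents

# Option A: run only as many units as the airflow demand requires, at full speed
power_full_speed = relative_fan_power(2, 1.0)                                         # 2.0

# Option B: spread the same airflow across all installed units at reduced speed
power_reduced = relative_fan_power(installed_units, required_flow / installed_units)  # 4 * 0.5^3 = 0.5

print(f"Two units at full speed:  {power_full_speed:.2f}")
print(f"Four units at half speed: {power_reduced:.2f} "
      f"({1 - power_reduced / power_full_speed:.0%} less fan energy)")
```

In this illustration, delivering the same total airflow with all four units at half speed uses roughly a quarter of the fan energy of two units at full speed, which is why spreading load across variable-speed units usually beats shutting units down.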

With the temperature set points enabled by quality airflow management, some form of free cooling will be attractive for almost every data center. Obviously, for a new construction project where the mechanical plant is not already established, there are many more options and it makes sense to consider every technology and what it can deliver for a particular site. This requires some research and for larger projects, I would strongly suggest this is where you task an expert engineering resource to provide some direction. The elements for the study include:

  • Hourly environmental data for the project location, available directly from the National Oceanic and Atmospheric Administration (NOAA) as ASCII files or in more user-friendly Excel files from sources like The Weatherbank.
  • Ambient conditions which will meet the data center requirements – data center supply temperature minus approach temperatures, which may be serially cumulative – including humidity and levels of potential contamination (a simple screening calculation is sketched after this list).
  • Redundancy architectures and complexity/reliability for supporting required site availability requirements. (Note: often free cooling systems include a mechanical/refrigerant element which in many situations can effectively be an “N” in 2N or N+1).
  • Capital investment exposure.
  • Permitting, zoning, impervious cover proportions, height limits, or other external architectural review constraints.
  • Most importantly: Where do you stand in the relationship between ambient instability and risk aversion? This understanding separates airside economization and adiabatic cooling from liquid-to-liquid heat exchange, air-to-air heat exchange, and indirect evaporative cooling. Regardless of whether the outside is allowed inside or just the effect of the outside is allowed inside, effective airflow management will enable a positive return on investment from some form of free cooling just about anywhere. For example, I have seen a productive use of economization in Phoenix, Houston, and Miami. The only place I have failed to see an economization project through to implementation has been in Singapore, and even there I believe they may have tossed in the towel a little too soon.
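As a first pass at the screening calculation referenced above, something like the sketch below can be run against a year of hourly weather data. The supply setpoint and approach temperatures are illustrative placeholders; a real study would use the site’s actual 8,760 hourly records and also account for humidity and contamination limits.

```python
# First-pass screening of free cooling hours from a year of hourly dry-bulb
# temperatures (degrees F), e.g. parsed from NOAA records. The setpoint and
# approach temperatures below are illustrative placeholders only.

from typing import Iterable

def free_cooling_hours(hourly_dry_bulb_f: Iterable[float],
                       supply_setpoint_f: float = 73.0,
                       approach_temps_f: tuple = (5.0, 7.0)) -> int:
    """Count hours where ambient air can satisfy the supply setpoint.

    approach_temps_f holds the serially cumulative approach temperatures
    (e.g. outdoor coil plus indoor heat exchanger) subtracted from the setpoint.
    """
    max_usable_ambient = supply_setpoint_f - sum(approach_temps_f)   # 61 F here
    return sum(1 for t in hourly_dry_bulb_f if t <= max_usable_ambient)

# Tiny fabricated sample; a real study would use all 8,760 hourly records.
sample_hours = [58.0, 61.5, 72.0, 80.0, 45.0, 55.0]
print(free_cooling_hours(sample_hours))   # -> 3 hours at or below 61 F
```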

Finally, a thorough job of airflow management helps create opportunities to benefit from more creative quasi-free cooling sources, such as underground rivers, cave air, and high latitude sea water. Another approach I find particularly compelling for smaller data centers contained within larger buildings of some other primary purpose: the return warm water loop on the building cooling system can actually suffice as the cooling source water for data center precision cooling units. Not only does this topology rely on a “free chiller,” the resultant higher chiller ΔT can actually increase the efficiency of the chiller.
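The water-side heat balance behind that last point can be sketched with the standard US-units approximation GPM ≈ BTU/hr ÷ (500 × ΔT˚F); the 100 kW load below is illustrative only, but it shows how a higher ΔT carries the same load with markedly less flow.

```python
# Water-side heat balance behind the "free chiller" point: a higher delta-T
# carries the same load with less flow. US-units rule of thumb:
#   GPM = BTU/hr / (500 * delta_T_F)
# The 100 kW load below is illustrative only.

def required_flow_gpm(load_kw: float, delta_t_f: float) -> float:
    """Water flow needed to carry a given heat load at a given delta-T."""
    load_btu_hr = load_kw * 3412.14   # convert kW to BTU/hr
    return load_btu_hr / (500.0 * delta_t_f)

it_load_kw = 100.0
print(f"10 F delta-T: {required_flow_gpm(it_load_kw, 10.0):.0f} GPM")   # ~68 GPM
print(f"18 F delta-T: {required_flow_gpm(it_load_kw, 18.0):.0f} GPM")   # ~38 GPM
```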

All industry standards and efficiency rating systems, and now many state and local codes, recognize the value of doing a good job with airflow management. However, once all the holes are plugged, that does not necessarily mean you want to just let your data center run on auto-pilot. As long as there are at least two pieces of ICT equipment ingesting air at different temperatures, and as long as there is one hour a year of mechanical/refrigerant cooling deployed in the data center, there remain opportunities for improving the effects of the airflow management; and as long as there are businesses that might benefit by doing something better than your business, there are reasons to consider those opportunities for continued fine-tuning.


Ian Seaton

Data Center Consultant


1 Comment

  1. Michael Beatty

    Ian, I enjoyed your article. I am a representative for Liebert in Pennsylvania, and would like to affirm a number of your recommendations. EC fans are an excellent way to reduce power consumption. Upgrades are available for these, but if someone doesn’t have the Liebert iCOM controls, they will have to upgrade the controls as well for this feature. I normally don’t recommend upgrading units that are 15 years old or older.

    Most older Liebert units have centrifugal blowers. A less expensive option than EC fan upgrades is to install VFDs on the motors instead.

    We have found that lowering the EC Fans below the raised floor improved efficiency another 15%. If you have a 24″ or more raised floor you can get this option and the EC Fans would lower into the floor stand, thus having less resistance for the radial air flow.

    One of the most popular options now is refrigerant economization. When the ambient air temperature drops to 65˚F we can turn off a compressor and use a refrigerant pump instead. At temperatures below 40˚F we can typically turn off both compressors. These systems typically save 55% of energy costs compared to traditional DX operation.

    Thanks again for your article and helping people learn about ways to improve data center cooling!
    Mike


