The Relationship Between PUE and Airflow Management in Data Centers

by Ian Seaton | Jan 28, 2015 | Blog

There is a more or less direct relationship between airflow management and PUE. There are three basic elements to airflow management:

1. Manage the separation of warm air mass (return) from cool air mass (supply)

2. Manage the volume of supply air

3. Manage the temperature of supply air

Doing each of these elements well, in coordination with one another, and then taking advantage of the additional opportunities that competent airflow management enables, will always result in a reduced PUE. The degree to which the three elements are actually coordinated, and the degree to which those associated opportunities are exploited, determines how far PUE will be reduced.

Creating Separation

The starting point is creating separation between the cool supply air and warm return air in the data center. Historically, this separation has been created and maintained by deploying hot aisles and cold aisles. That level of separation was adequate in the early years of data center evolution, while per-cabinet power/heat densities were relatively low, and it still produces benefits compared to a “military formation” or random arrangement of racks. More advanced means of separation include contained hot aisles, contained cold aisles, partially contained aisles, and exhaust chimneys bridging from cabinets to an isolated return air path, typically a suspended ceiling.

This separation eliminates or greatly reduces two forms of waste: hot air re-circulation, which raises the temperature of supply air, and bypass airflow, in which supply air passes by the server heat loads without performing any convective heat transfer. Bypass airflow is sometimes intentionally created to overfill cold aisles and thereby reduce the incursion of re-circulated warm air. Sometimes it results from over-producing chilled air by default, because air handlers or CRAC units are either fully on or fully off. In either case, bypass airflow is waste, and eliminating it correlates directly with a lower PUE.

In addition, low temperature set points may be intentional or merely a reflection of outdated conventional wisdom. Low set points may be intentional when there is poor separation in the data center, resulting in warm air re-circulation; the very low temperature supply air, when mixed with this re-circulated air, might still be cool enough to meet the requirements of the IT equipment. More often than not, however, unnecessarily cold air is produced as a result of outdated conventional wisdom that goes something like, “Nobody ever got fired for over-cooling a data center.” Along with that conventional wisdom, we frequently see the data center mechanical plant set up according to the otherwise sound practices of comfort cooling for inhabited work spaces. More specifically, chillers for comfort cooling are typically set to produce a leaving water temperature in the range of 42-50˚F, and data center thermostats are typically set somewhere around 70-75˚F, so supply air comes across the cooling coils and enters the data center at somewhere around 52-57˚F. Whether these set points are intentional or merely the result of outdated conventional wisdom, they produce considerable waste when the mechanical plant is delivering air 20˚F cooler than the IT equipment actually requires according to the recommended environmental envelope established by the ASHRAE technical committee on mission critical facilities.
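To put a number on that overcooling margin, here is a minimal sketch of the arithmetic. The 52-57˚F supply temperatures are from the paragraph above; the ~75˚F “adequate” supply temperature is an assumption consistent with the mid-70s supply temperatures and ASHRAE recommended envelope discussed later in this post, not a value stated here.

```python
# Simple illustration of the overcooling margin described above. The 52-57˚F supply
# temperatures are from the article; the 75˚F "required" supply temperature is an
# assumption consistent with the ASHRAE recommended envelope discussed later on.

typical_supply_f = (52, 57)     # supply air entering the data center (article)
required_supply_f = 75          # assumed adequate supply temperature (mid-70s ˚F)

for supply in typical_supply_f:
    print(f"Supply at {supply}˚F is {required_supply_f - supply}˚F cooler than needed")
# -> 23˚F and 18˚F of overcooling, i.e., roughly the 20˚F of waste cited above
```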

Airflow Demand and Other Considerations

Establishing improved separation between the cold aisles and hot aisles may eliminate the hot air re-circulation that necessitated the volumetric over-production of cold air, but some management and control steps must still be taken to actually eliminate bypass airflow. The goal is to supply a volume of air that exceeds the volume demanded by the IT equipment and any other heat loads in the data center (such as UPS) by as small a margin as possible while still removing all the heat from the space. The better the separation, the smaller this margin needs to be. Ideally, we would know the cumulative airflow demand of all our IT equipment and then supply 2-10% more, depending on how good our separation is between hot and cold areas. Today it might actually be possible to determine the airflow demand of new IT equipment, either from the thermal performance specifications for that equipment or by inquiring with the vendor. However, we may have older equipment with no published airflow specification or, more likely, IT equipment with variable speed fans and variable workloads, for which it is simply not realistic to peg a single value.

This variable is managed much better with some sort of static pressure feedback system. In a raised floor environment, static pressure is sometimes measured under the floor and used to control data center air handler fan speed. Another effective point of feedback is the pressure differential between the supply side of the room and the return side of the room. With excellent separation, that pressure differential could be as low as 0.005" H2O (water column), ranging up from there based on the leakiness of the aisle separation.
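To illustrate the kind of feedback loop described above, here is a minimal sketch of a proportional controller that trims fan speed toward a supply-to-return differential pressure target. The 0.005" H2O setpoint comes from the paragraph above; the gain, the speed limits, and the sensor reading passed into the function are hypothetical placeholders, not any particular BMS or vendor control logic.

```python
# Minimal sketch of a differential-pressure feedback loop for air handler fans.
# The 0.005 in. H2O setpoint is from the article; the gain and speed limits are
# hypothetical placeholders, not a real building management system API.

SETPOINT_IN_H2O = 0.005   # target supply-to-return differential pressure
GAIN = 20.0               # proportional gain: fan speed change per in. H2O of error
MIN_SPEED, MAX_SPEED = 0.3, 1.0   # fan speed as a fraction of full speed

def next_fan_speed(current_speed: float, measured_dp_in_h2o: float) -> float:
    """Return a new fan speed fraction that nudges the pressure toward the setpoint."""
    error = SETPOINT_IN_H2O - measured_dp_in_h2o   # positive -> underpressurized, speed up
    proposed = current_speed + GAIN * error
    return max(MIN_SPEED, min(MAX_SPEED, proposed))

# Example: room slightly overpressurized (0.008 in. H2O), fans at 80% speed.
print(round(next_fan_speed(0.80, 0.008), 2))   # -> 0.74, i.e., slow the fans slightly
```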

While it is possible to do some airflow volume management by turning cooling units on and off, the most effective means is deploying variable air volume units, whether variable frequency drive motors or electronically commutated direct drive fans. The absence of such variable speed capability does not necessarily eliminate this option: retrofit kits are available in the market, and the payback is typically 2-4 years. While a payback at the longer end of that range may not be attractive to some businesses, remember that the new fan motor effectively extends the life of the cooling unit, as if it were a new unit.
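The payback arithmetic itself is simple. The 2-4 year range is from the paragraph above; the cost and savings figures in this sketch are purely illustrative placeholders, not published numbers for any retrofit kit.

```python
# Hypothetical payback arithmetic for a variable-speed fan retrofit kit. The 2-4 year
# payback range is from the article; the figures below are illustrative placeholders.

retrofit_cost = 12_000      # assumed installed cost per cooling unit, USD
annual_savings = 4_300      # assumed yearly fan energy savings per unit, USD

print(round(retrofit_cost / annual_savings, 1), "years to simple payback")   # ~2.8 years
```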

Implications on PUE

And how does this affect PUE? The reduction in fan energy is not linear with the reduction in fan speed and airflow; according to the fan affinity laws, fan power varies with the cube of fan speed. Therefore, a 20% reduction in airflow results in a 49% reduction in fan energy (0.8³ ≈ 0.51), and a 50% reduction in airflow results in an 87.5% reduction in fan energy (0.5³ = 0.125). Considering that Upsite’s most recent user survey on this subject found users 3.9X over-provisioned, the energy saving opportunities are huge. As for PUE, if fan energy accounts for about 12% of the total energy budget of a 1.8 PUE data center, then an 87.5% reduction in fan energy equates to a 10.5% PUE reduction, to 1.61.
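The figures above can be reproduced with a short calculation. This is simply a worked restatement of the numbers in the paragraph (cube-law fan power, 12% fan share, 1.8 starting PUE), not a general PUE model.

```python
# Worked restatement of the fan affinity / PUE arithmetic from the paragraph above.

def fan_power_fraction(airflow_fraction: float) -> float:
    """Fan affinity laws: fan power scales with the cube of airflow (fan speed)."""
    return airflow_fraction ** 3

def pue_after_fan_savings(pue: float, fan_share_of_total: float, airflow_fraction: float) -> float:
    """New PUE after reducing fan energy, holding IT load (the denominator) constant."""
    fan_energy = pue * fan_share_of_total              # fan energy relative to IT load = 1.0
    saved = fan_energy * (1 - fan_power_fraction(airflow_fraction))
    return pue - saved

print(round(fan_power_fraction(0.8), 3))               # 0.512 -> ~49% fan energy reduction
print(round(fan_power_fraction(0.5), 3))               # 0.125 -> 87.5% fan energy reduction
print(round(pue_after_fan_savings(1.8, 0.12, 0.5), 2)) # 1.61, a ~10.5% PUE reduction
```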

With separation and airflow volume under control, temperature control is the next element of airflow management. With the other elements in place, return air set point control can be jettisoned in favor of supply air temperature control, using temperature feedback from the data center itself. A good control system would monitor temperatures at both the tops and bottoms of racks, maintaining a maximum temperature 1-2˚F below the maximum desired temperature and a minimum temperature about 2˚F below that maximum. If the temperature specification derives from the ASHRAE recommended limits, it will likely be in the mid-70s ˚F, requiring a chiller leaving water temperature in the 60s ˚F. Mileage will vary by type of equipment, but taking a conservative 1.8% energy savings per degree of warmer set point, that 1.8 PUE example could be reduced to 1.65, or, in conjunction with the fan energy savings, to 1.46.
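For the temperature side of the estimate, here is a hedged back-of-envelope sketch. The 1.8%-per-degree figure and the 1.8 and 1.61 starting PUEs are from this post; the chiller energy share (~23% of total facility energy) and the ~20˚F of set point increase are illustrative assumptions chosen only to show how such figures combine, not values stated above.

```python
# Hedged back-of-envelope for the temperature savings. The 1.8%-per-degree figure and
# the 1.8 / 1.61 starting PUEs are from the article; the chiller share and degrees of
# set point increase below are illustrative assumptions, not published values.

SAVINGS_PER_DEGREE = 0.018      # chiller energy saved per ˚F of warmer set point (article)
CHILLER_SHARE = 0.23            # assumed chiller share of total facility energy
DEGREES_RAISED = 20             # assumed increase in chiller leaving water temperature, ˚F

chiller_energy = 1.8 * CHILLER_SHARE                          # relative to IT load = 1.0
saving = chiller_energy * DEGREES_RAISED * SAVINGS_PER_DEGREE

print(round(1.80 - saving, 2))   # ~1.65: temperature savings alone
print(round(1.61 - saving, 2))   # ~1.46: combined with the fan savings above
```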

In data centers without chillers, PUE reductions can now be achieved with DX CRAC units using either digital scroll compressors or multi-stage compressors. In all instances, good airflow management practices increase opportunities for free cooling economization, which will usually lead to significantly lower PUEs than the examples cited above.

Ian Seaton

Data Center Consultant
