Why the Relationship Between Data Center Efficiency and Cost Savings is So Commonly Misunderstood
You may find this hard to believe, but every now and then I hear something like, “We spent a bunch of money on floor tile cut-out grommets for our data center and never really got our money back, so as long as all our servers are running cool enough we don’t tend to put much stock in all these sales pitches about improving efficiency.” Since I only hear this now and then, one might think that for the most part folks are investing in airflow and cooling efficiency improvements and smiling all the way to the bank. Or, perhaps those occasional doubters and naysayers may just be the honest minority. Regardless, you can’t shake a data center conference podium without some speakers making grandiose claims for energy cost savings resulting from some basic airflow efficiency improvements. How is it that we may not always see these anticipated energy cost savings from our efficiency improvement actions?
Typically, the initial efficiency action is merely an enabler of subsequent steps that must be taken to actually unlock the energy savings potential. For example, brush grommets for cable-access floor tile cut-outs were probably the first data center airflow efficiency accessory. Since these grommets usually plug holes toward the rear of server cabinets, they minimize bypass airflow, resulting in a greater volume of chilled air being delivered to cold aisles, thereby eliminating or mitigating hot spots and generally reducing the temperature at all server inlets. However, these cooling performance benefits do not directly translate into cost savings unless precision cooling unit fans are ratcheted down from over-producing compensatory air volume to producing only the volume actually required. Because of the fan affinity laws, the impact of these changes can be dramatic: a 10% reduction in airflow equates to roughly a 27% reduction in cooling unit fan energy. When you consider that there is frequently an opportunity to reduce airflow by more than half, the savings can, in fact, be significant. Of course, if you’re trying to put this lipstick on a pig (i.e., cooling units with single-speed fans), then you need not despair: EC fan retrofit kits can dress up most of those pigs and usually pay for themselves in less than two years.
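The arithmetic behind that 27% figure falls straight out of the cube-law relationship between fan speed (and hence airflow, for a fixed system) and fan power. A minimal sketch, using the numbers from the paragraph above:

```python
# Fan affinity law: for a fixed system, fan power scales with the
# cube of fan speed, and therefore with the cube of airflow volume.

def fan_power_fraction(airflow_fraction: float) -> float:
    """Fan power as a fraction of baseline for a given airflow
    fraction, per the cube affinity law."""
    return airflow_fraction ** 3

# 10% airflow reduction -> ~27% fan energy reduction
savings_10 = 1 - fan_power_fraction(0.90)   # ~0.271
# Halving airflow -> ~87.5% fan energy reduction
savings_50 = 1 - fan_power_fraction(0.50)   # 0.875

print(f"10% airflow cut saves {savings_10:.1%} of fan energy")
print(f"50% airflow cut saves {savings_50:.1%} of fan energy")
```

This is also why the payback on variable-speed (EC) fans is so quick: even modest airflow trims compound into outsized energy reductions.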
Floor tile management and blanking panel deployment have long been considered minimum price-of-entry best practices for improving data center airflow management efficiency, but, like floor grommets, the cooling effectiveness results are invariably obvious, while the cost-savings results are not always as readily realized. The dynamic for these accessories is very similar to what we see with floor tile grommets. By removing perforated floor tiles from all locations not directly providing cool air to adjacent computing equipment, and by deploying the proper percentage of open-area tiles for the required flow based on IT heat load and underfloor static pressure, bypass airflow waste is eliminated. Blanking panels, on the other hand, combat both bypass airflow and hot air re-circulation, depending on the pressure dynamics in the room. If surplus supply air is delivered into cold aisles, blanking panels mitigate bypass airflow flowing through the server cabinets. Conversely, if the supply air volume is not entirely adequate to meet demand for an isolated cabinet or two, the IT fans will pull air from somewhere, including pulling heated waste air back through the cabinet; in these cases, blanking panels combat re-circulation. Regardless, intelligent floor tile management and blanking panel deployment improve cooling effectiveness, eliminate or reduce hot spots, and generally reduce equipment inlet temperatures. However, to translate these efficiency improvements into energy cost savings, systems need to be put in place that control airflow volume and temperature based on real need rather than calculated or estimated need. For example, the one truly consequential temperature in the data center is the IT equipment inlet temperature, which is why standard best practice these days is to control cooling equipment temperature by feedback from the server inlets, or from an array of sensors located as closely as possible to those inlets.
Temperature can then be controlled against a specified maximum, while airflow can be controlled either by monitoring pressure differentials or by maintaining a minimum difference between the highest and lowest server inlet temperatures. These feedback and control strategies are what allow the airflow efficiency gains from floor grommets, tile management and filler panels to be harvested as actual energy savings.
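As a hedged illustration of the control strategy just described, here is a bare-bones sketch of the two feedback loops. The setpoints, margins, step sizes and function names are all illustrative assumptions of mine, not any vendor's actual BMS or DCIM API:

```python
# Illustrative feedback loops: supply temperature chases the hottest
# server inlet; fan speed chases an underfloor pressure target.
# All constants below are assumed example values, not standards.

MAX_INLET_F = 80.6      # assumed allowable maximum server inlet temp
TARGET_DP_PA = 5.0      # assumed underfloor-to-room pressure target

def adjust_supply_temp(setpoint_f, hottest_inlet_f, margin_f=2.0, step_f=0.5):
    """Nudge the supply set point up while the hottest inlet still has
    headroom; back it down if the inlet exceeds the allowable maximum."""
    if hottest_inlet_f < MAX_INLET_F - margin_f:
        return setpoint_f + step_f    # safe to warm up: chiller savings
    if hottest_inlet_f > MAX_INLET_F:
        return setpoint_f - step_f    # too hot: cool back down
    return setpoint_f

def adjust_fan_speed(speed_pct, measured_dp_pa, step_pct=2.0):
    """Trim variable-speed fans toward the pressure target so units
    deliver only the airflow the IT load actually consumes."""
    if measured_dp_pa > TARGET_DP_PA:
        return max(speed_pct - step_pct, 20.0)   # over-supplying: slow down
    if measured_dp_pa < TARGET_DP_PA:
        return min(speed_pct + step_pct, 100.0)  # starving: speed up
    return speed_pct
```

In a real deployment these loops would run against arrays of sensors with deadbands and rate limits, but the logic is the same: control to measured demand at the server inlets, not to a calculated or estimated load.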
The Role of Containment
Finally, the biggest investment in data center airflow efficiency is going to be some form of containment, whether that be chimney cabinet containment, full hot or cold aisle containment, or some form of partial aisle containment. Containment will maximize all the cooling performance benefits of the previously discussed accessories and can frequently quadruple the power/heat density a particular space can support. Nevertheless, this investment will not directly translate into energy savings without feedback and control strategies tied to real demand that exploit the energy efficiency of variable speed fans, whether those are integrated into precision perimeter cooling units or part of some free cooling architecture. With these controls in place and all these efficiency elements deployed, temperatures may also be increased, providing a further path to energy cost savings. Let me make one point very clear: while I am an advocate of exploiting the upper boundaries of computer equipment operating temperatures that OEMs now offer us, which are reflected in the latest ASHRAE TC9.9 guidelines, for today’s discussion I am not talking about increasing the maximum temperature for IT equipment; rather, I am talking about increasing the minimum temperature. With these airflow management best practices, the difference between cold aisle minimum and maximum temperatures can be reduced from a typical 15-20°F range down to a 2-5°F range, allowing minimum temperatures to increase significantly without affecting maximum temperatures. With the resulting higher supply set points, profound chiller savings are achievable, approaching and even surpassing 40%. In addition, access to free cooling hours will significantly increase; in situations where economizers were shunned due to low ROI, those calculations need to be revisited.
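The headroom arithmetic behind that claim is straightforward. Assuming, purely for illustration, an allowable maximum inlet of 80.6°F (the upper end of the ASHRAE recommended envelope), tightening the cold aisle spread raises the achievable minimum, and with it the supply set point:

```python
# Illustrative headroom calculation: hold the cold aisle maximum fixed
# and see how far the minimum (and supply set point) can rise when the
# min-to-max spread tightens. The 80.6 F ceiling is an assumed example.

ASSUMED_MAX_INLET_F = 80.6

def min_inlet_f(max_inlet_f: float, spread_f: float) -> float:
    """Coldest inlet temperature given the hottest inlet and the
    min-to-max spread across the cold aisle."""
    return max_inlet_f - spread_f

old_min = min_inlet_f(ASSUMED_MAX_INLET_F, 20.0)   # typical 20 F spread
new_min = min_inlet_f(ASSUMED_MAX_INLET_F, 5.0)    # tightened 5 F spread
print(f"Supply set point can rise ~{new_min - old_min:.0f} F")
```

A roughly 15°F rise in supply set point is what drives the chiller savings and expanded economizer hours discussed above.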
The other temperature that increases as a result of these steps is the return temperature, assuming baseline hot air re-circulation wasn’t so horrendous that the data center baked up an equatorial front before return air finally found its way out. Bypass airflow joins the return air stream directly and thereby lowers its temperature; eliminating it raises the space temperature differential and increases the sensible cooling capacity of the coils, which may make it possible either to turn off extra cooling units or to add more computing load. Also, the higher return temperature can expand the proportion of partial free cooling delivered by economizers, further reducing the operating cost of the mechanical plant.
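The capacity effect can be seen with the standard sensible-heat rule of thumb for air at sea-level conditions, Q (BTU/hr) = 1.08 × CFM × ΔT (°F). The airflow and temperature figures below are illustrative examples of mine, not from any particular site:

```python
# Standard sensible-cooling rule of thumb for air at sea level:
# Q (BTU/hr) = 1.08 x CFM x delta-T (F). Example numbers are
# illustrative only.

def sensible_btu_per_hr(cfm: float, delta_t_f: float) -> float:
    """Sensible heat removed by an airstream at standard conditions."""
    return 1.08 * cfm * delta_t_f

# Same coil airflow, but return temperature raised 8 F by eliminating
# bypass: capacity grows in direct proportion to delta-T.
before = sensible_btu_per_hr(10_000, 12.0)   # ~129,600 BTU/hr
after = sensible_btu_per_hr(10_000, 20.0)    # ~216,000 BTU/hr
print(f"Capacity gain: {after / before - 1:.0%}")
```

Since capacity scales linearly with ΔT at constant airflow, every degree of return temperature recovered by killing bypass is cooling capacity you already paid for.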
Whenever I talk about taking steps to improve the airflow efficiency of a space, I always try to distinguish the two different agendas of effectiveness and efficiency, and at least touch on the associated steps necessary to realize the financial benefits of the efficiency improvements. Nevertheless, adoption of these basic best practices is not yet universal, so there must still be some residual disconnects. While I can’t pretend that one blog will turn the tide, I am grateful for the opportunity to at least push along the conversation that all these tools for improving efficiency are not ends unto themselves, but enablers of technologies and strategies that absolutely depend on these airflow management accessories and techniques to deliver on their promises. Finally, I don’t want to leave my readers thinking, “I can’t mess with filler panels or containment barriers because all this business of temperature and pressure monitoring and closed-loop feedback and control systems is too complicated or expensive.” Between internal resources, vendors and technical advisors, these are merely engineering problems with straightforward solutions; consequently, it is actually too expensive not to take advantage of these efficiency improvements and the resulting cost-savings opportunities.