
Where To Start Airflow Management In the Data Center

January 2, 2018

The most effective place to start an airflow management improvement initiative is not actually anywhere in the data center. Instead, the first steps toward investing in optimized airflow management begin in a conference room, office, hallway by the water cooler, or perhaps Frog Toes Microbrew Pub during Wednesday happy hour.

My good friend Dennis Kniery of Digital Realty Trust recently posted, “PUE2: What Do You Look For.” He makes a compelling case for starting with a metric that tells you where you are today, against which you can measure the results of your activities or scale the potential opportunity. He is absolutely right to stress the importance of establishing a good baseline, particularly one that is relatively straightforward to execute, but doing so may not necessarily be the first step.

I have often outlined a plan of attack that progressed from plugging holes in the raised floor to plugging holes in racks to plugging holes in rows of racks to finally plugging holes in the room, as a way of maximizing the effectiveness of each subsequent step. While this is a logical sequence of activities for avoiding disappointing returns on the bigger ticket items and for providing a dip-your-toe-in-first strategy, it does not address the most critical factor in kicking off an airflow management optimization initiative. That most important factor is securing the assurance that making the data center better is not going to get you fired, demoted, passed over for future promotions, added to the “don’t trust this guy” list, removed from the fiduciary chain of command, or pitied at Frog Toes.

How could that happen to someone simply for improving the data center? Perhaps a short anecdote will explain. I’ll try to protect the not-so-innocent here: suffice it to say the setting for this story is a large, world-renowned medical facility. Some ten years ago, I explained to the IT manager there how she could reduce her operating expenses by over $1 million per year with some airflow management improvements. I gave a half-hour presentation on how to accomplish all of this with payback in the three to four-month range and ROI/IRR off the charts. She responded that it was no concern of hers because the electricity bill was not in her department’s budget. For the sake of our ongoing business relationship there, I did my best to mask how horrified I was.

In retrospect, though, this situation was not so uncommon: The IT manager would show some large, unplanned expenses for stuff that did not transact bits and bytes, and there would be no offsetting item in her ledger. When IT is already overhead, as it is pretty much everywhere it’s not the core business, then growing that number can, in fact, pave the path to lower raises, longer promotion cycles, and perhaps employee turnover. Therefore, the important starting point in beginning an airflow management initiative is getting accountability straightened out.

Accountability is generally pretty straightforward in a dictatorship; there is one guy responsible for expenditures who is also the recipient of any benefits from said expenditures. While we may think of a sole proprietorship as the definition of dictatorship, there are variations in which a board, executive team, or partnership all get down in the weeds of the business – usually where the data center is the business of the business. This engagement could be in retail or wholesale commercial data centers, managed hosting operations, cloud services, or any e-commerce business with a bottom line driven by cost-per-transaction. In these environments, the cost of operating the data center is strategic, and investments that enhance the bottom line should be pretty much no-brainers, barring cash flow issues.

Accountability can get a little more complicated in enterprise data centers. Chances are that companies that have sent data center staff to any of the major educational conferences over the past ten years have worked their way through this conundrum in one way or another. Sometimes the approach has been to roll the facilities budget and the IT budget up into some higher-level umbrella budget for purposes of calculating internal rates of return, returns on investment, paybacks, and cash flow; other times one side or the other has budgetary responsibility for both areas.

I saw evidence of these decisions when I was still spending a lot of time in data centers and hosting companies in my data center lab – meetings were frequently attended by both an IT manager and a facilities manager. In the final analysis, it does not matter how this collaboration is accomplished as long as someone is not being disincentivized from doing the right thing for the overall organization. Expenditures for grommets, air dams and barriers, variable air volume fans, aisle doors, etc. need to be evaluated in terms of their effect on energy use and the bottom line, not in terms of relative acquisition cost against alternative solutions.

A final complication resides with smaller data centers and computer rooms that may not be on separate electrical service from the rest of the building. IT power consumption can be calculated from UPS or PDU loads, so the mechanical plant, where airflow management results will actually be realized, is the missing link.

Where precision is desirable, everything that is not on the UPS could be sub-metered. Where precision is not so important, the changes enabled in the mechanical plant by good airflow management will show up in the overall operation numbers, unless the data center is such a small portion of the overall load that it is mere statistical noise.
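Once IT power is read at the UPS or PDU output and the mechanical plant is sub-metered (or estimated), the PUE baseline is simple division. As a rough sketch of that arithmetic – all loads below are hypothetical, not from the article:

```python
def pue(it_kw: float, mech_kw: float, other_kw: float = 0.0) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return (it_kw + mech_kw + other_kw) / it_kw

# Example: 400 kW IT load measured at the UPS output, 240 kW sub-metered
# mechanical plant, 60 kW lighting/distribution losses -> PUE of 1.75
print(round(pue(400, 240, 60), 2))
```

The "other" bucket is where the small-computer-room complication bites: if it cannot be separated from the rest of the building, the mechanical savings still show up in the whole-building numbers, just with more noise.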

Once the accountability lines are clearly established for expenditures that promote effective airflow management and for measuring the energy-saving effects of those expenditures (and somebody who actually has the authority to impact both numbers owns them), then we can look at establishing the baseline that tells us how low the fruit is hanging and what the ultimate opportunities might be.

Dennis Kniery suggests looking at the temperature differential at the cooling units between supply air and return air. Ideally, this ΔT should be the same as the cumulative average ΔT across the IT load. He offers 25˚F as a default if it is not known or obtaining it is perceived as too much effort. In a data center with poor airflow management, that ΔT at the cooling units will typically be in the 12-15˚F area, and it could be a lot lower. The gap between the two temperature differentials represents the opportunity.
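The reason a low cooling-unit ΔT is expensive follows from the standard-air sensible heat equation (Q = 1.08 × CFM × ΔT): for the same IT load, halving ΔT roughly doubles the airflow the cooling plant must move. A minimal sketch of that relationship, using the default 25˚F target and a hypothetical 13˚F poorly-managed room:

```python
BTU_PER_KW = 3412  # 1 kW of heat = 3,412 BTU/hr

def required_cfm(it_kw: float, delta_t_f: float) -> float:
    """Airflow needed to remove a sensible heat load at a given deltaT,
    from the standard-air sensible heat equation Q = 1.08 * CFM * dT."""
    return it_kw * BTU_PER_KW / (1.08 * delta_t_f)

it_kw = 300                        # hypothetical IT load
good = required_cfm(it_kw, 25)     # well-managed: ~37,900 CFM
poor = required_cfm(it_kw, 13)     # poor airflow management: ~72,900 CFM
print(f"excess airflow: {poor / good:.2f}x")
```

That excess airflow is fan energy (and often excess cooling capacity) being spent to compensate for bypass and re-circulation, which is why the gap between the two temperature differentials represents the opportunity.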

Dennis provides some guidelines for converting ΔT into PUE and dollarizing PUE. I really like this approach because it is simple and generally very predictive. An important caveat is that in a data center with extremely poor airflow management, there could be a very high ΔT at the cooling units because delivered air may be re-heated once or twice through re-circulation before it is consumed at the IT equipment, thereby artificially driving up that ΔT. For this reason, I find it valuable to measure IT inlet temperatures. Periodic random sampling is adequate, but permanent sensors at the bottom and top server air intakes in every rack are best. Look at the greatest differential between supply air and the highest server inlet temperature. If this ΔT is 2-3˚F, then the ΔT measured at the cooling units is meaningful. If this ΔT is high (10-20˚F), then the higher cooling unit ΔT is the result of bad re-circulation.
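The sanity check described above – compare supply air temperature against the hottest server inlet before trusting the cooling-unit ΔT – can be reduced to a few lines. This is a sketch of that heuristic with the thresholds from the text; the temperatures in the example are hypothetical:

```python
def recirculation_check(supply_f: float, inlet_temps_f: list[float]) -> str:
    """Classify the supply-to-worst-inlet deltaT: a small gap means the
    cooling-unit deltaT is trustworthy; a large gap means re-circulation
    is artificially inflating it."""
    gap = max(inlet_temps_f) - supply_f
    if gap <= 3:
        return f"gap {gap:.1f}F: cooling-unit deltaT is meaningful"
    if gap >= 10:
        return f"gap {gap:.1f}F: high re-circulation; cooling-unit deltaT inflated"
    return f"gap {gap:.1f}F: partial re-circulation; investigate further"

# 65F supply, sampled inlets up to 81F at a top-of-rack server
print(recirculation_check(65, [66.5, 68, 72, 81]))  # flags high re-circulation
```

With permanent sensors at the bottom and top intakes of every rack, the same check can run continuously rather than as periodic random sampling.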

The only time the ΔT between supply and maximum server inlet will be low (prior to implementing effective airflow management) is when supply air volume is significantly over-produced, which will then produce a lower ΔT across the cooling units. Regardless, one of those temperature differentials will provide a quantitative baseline for pegging the scope of the energy-saving opportunity for an airflow management optimization effort.

In summary, the best starting point for an airflow management optimization project is defining responsibility and authority for hardware acquisition and energy use, then establishing an accurate baseline against which to scale the size of the savings opportunity, and finally measuring your progress. You can catch me at Frog Toes to tell me how it all went.


Learn how you can improve your airflow management strategy by downloading Upsite’s free white paper: Bypass Airflow Clarified



About the Author

Ian Seaton is an independent Critical Facilities Consultant and serves as a Technical Advisor to Upsite Technologies. He recently retired as the Global Technology Manager of Chatsworth Products, Inc. (CPI).

