Airflow Management Considerations for the Colocation Landlord

by Ian Seaton | Mar 22, 2017 | Blog

Can anybody remember back to when the top priority airflow management consideration for a colocation data center landlord was the hot air emanating from the competition’s salespeople? This bluster frequently touted “tier three and a half” or “tier three plus” or “essentially” tier four robust availability, without the benefit of any certifying agency, or perhaps something like a 0.095 PUE design intent. My, how times have changed. Now, even a data center landlord’s decision to disregard airflow management is tied to important strategic considerations, ranging from SLA terms, to profits and customer costs, to risk exposure, to billing models, to the availability of hardware, educational material, and expertise.

Service level agreement (SLA) terms will determine the degree to which a landlord can both care about and enforce airflow management behavior. In general, the more precise a temperature SLA is, the more interest a landlord will likely have in establishing and enforcing good airflow management behavior. I have surveyed numerous SLAs available online, and there are plenty of examples of precision as well as plenty of examples of a general lack of concern with airflow management. I will boldly cite some of the good examples, but I hope my readers will humor me in my choice to leave the guilty nameless.

On the positive side, a good example is the standard SLA from Cavalry Data Centers, wherein they specify that agreed-on temperature thresholds will be measured at the equipment faces of the top and bottom pieces of IT equipment in any rack, and that they will not be held liable for non-conforming measurements caused by customer equipment that fails to support hot aisle – cold aisle separation or by acts of God. Lumping those terms together says about all we need to say about how important a consideration airflow management is in those data centers. Similarly, Internap specifies that they will guarantee a certain range of inlet temperatures, but they will not provide service level credits if the equipment is not aligned to support hot aisle – cold aisle separation. I particularly liked examples I saw from 365 and Verizon, which simply referred to the ASHRAE TC9.9 guidelines; these not only specify temperature and humidity ranges but also where measurements are to be made and what airflow management is required to support the provision of those environmental conditions.

Conversely, I found other SLAs with environmental specifications such as maintaining data center temperature between 65 and 76˚F, or maintaining a data center temperature of 72˚F +/- 8˚, or, even better, maintaining a space temperature of 64-78˚F that does not apply to localized conditions within a rack or cage (i.e., we’ll cool the room, but you’re on your own as far as your computer equipment is concerned!). Finally, I found one where the landlord agreed to maintain the room within a 74-77˚F range. In all of these, but most especially in this last example, the landlord’s airflow management consideration is to do whatever is required to mismanage airflow as much as possible. Otherwise, how can the room hold a 3˚F delta when the IT equipment will have deltas ranging from 15˚ to well over 30˚F? The only way to maintain that range is with something like 400% bypass airflow. (Example: 1MW at a 20˚F ΔT requires 155,000 CFM, whereas 1MW at a 3˚F ΔT requires 1,033,333 CFM of air.) So a service level agreement may demand good airflow management, or it may actually require poor or even horrendous airflow management; regardless, SLAs represent an airflow management consideration for data center landlords.
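For readers who want to check that arithmetic, here is a minimal sketch (mine, not from the article) of the standard sensible-heat airflow formula those figures imply. It assumes the common sea-level constant of 1.08; the article appears to use a slightly different rounding factor, which is why its numbers land a few percent lower:

```python
# Sensible-heat airflow formula: CFM = BTU/hr / (1.08 * delta_T_F).
# The 1.08 constant assumes sea-level air density.

BTU_PER_KW = 3412.14  # 1 kW of IT load rejects ~3,412 BTU/hr of heat

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove it_load_kw at a given air-side delta-T (F)."""
    btu_per_hr = it_load_kw * BTU_PER_KW
    return btu_per_hr / (1.08 * delta_t_f)

for dt in (20, 3):
    print(f"1 MW at {dt} F delta-T: {required_cfm(1000, dt):,.0f} CFM")
# ~158,000 CFM at 20 F and ~1,053,000 CFM at 3 F -- in line with the
# article's rounded 155,000 and 1,033,333 CFM figures.
```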

Another airflow management consideration for the data center landlord is minimizing the risk exposure associated with hosting a small number of customers. If all his capital and opex investment is spread among four tenants and he loses one, he may have a quarter of his investment uncovered by revenue. However, if he has the same infrastructure supporting six tenants and he loses one, the revenue loss is a third smaller, which could represent the difference between a red and a black balance sheet until a new tenant is found. How does airflow management contribute to such a scenario? Let’s look at the last example in the SLA discussion, where good airflow management produced a cooling plant temperature differential more or less equal to the IT equipment intake-versus-exhaust temperature differential (ΔT = 20˚F), versus the horrible excess of bypass air resulting in a minimal room temperature differential (ΔT = 3˚F). Assuming 20,000 CFM capacity per unit, it would require 8 precision CRAH units to cool the data center with good airflow management, but 52 CRAHs to cool the data center with high bypass airflow; with some minimal level of redundancy, that might be 9 CRAHs versus 55 CRAHs. With reasonably efficient CRAHs operating at a 20˚F ΔT and moving 155,000 CFM for 1MW of heat dissipation, a 1.4 PUE is a reasonable expectation, with plenty of room for improvement with higher chiller leaving water temperatures or free cooling. On the other hand, counting just the extra CRAH energy and assuming everything else is equal, the high bypass data center would have a PUE around 2.45. That means the high bypass airflow data center is consuming 1.45MW for cooling and other non-IT loads, whereas the good airflow management data center is consuming only 400kW for non-IT loads such as cooling. In other words, with a total power budget of 2.45MW, the high bypass airflow data center could host four 250kW tenants; with that same 2.45MW of available power, the good airflow management landlord could host seven 250kW tenants, thereby dramatically reducing his exposure to a lost tenant.
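The CRAH counts and tenant capacities above are easy to reproduce. A back-of-envelope sketch, using the article’s assumed 20,000 CFM per CRAH and 250kW tenant size (the helper names are mine, for illustration only):

```python
import math

CRAH_CFM = 20_000  # assumed airflow capacity per CRAH unit

def crah_count(total_cfm: float) -> int:
    """Number of CRAHs needed to deliver total_cfm, before redundancy."""
    return math.ceil(total_cfm / CRAH_CFM)

def tenants_supported(total_power_kw: float, pue: float,
                      tenant_kw: float = 250) -> int:
    """Whole tenants a fixed power budget supports at a given PUE."""
    it_power_kw = total_power_kw / pue  # PUE = total power / IT power
    return int(it_power_kw // tenant_kw)

print(crah_count(155_000))             # 8 CRAHs for the 20 F delta-T design
print(crah_count(1_033_333))           # 52 CRAHs for the 3 F delta-T design
print(tenants_supported(2_450, 2.45))  # 4 tenants at PUE 2.45
print(tenants_supported(2_450, 1.40))  # 7 tenants at PUE 1.40
```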

Profitability and/or market cost-attractiveness is another important airflow management consideration for data center landlords. The example we have been using, comparing a good airflow management data center to a high bypass airflow data center, illustrates this consideration clearly. At a $0.10 per kWh electrical cost, the good airflow management data center needs to cover about $1.23 million in annual energy costs, while the high bypass airflow data center needs to cover about $2.15 million. That roughly $900,000 difference could go straight to the bottom line as additional profit, it could be shared with customers to both increase profits and compete on price against the less efficient data center, or, in a market share push, it could go straight into an aggressive pricing model. However any particular landlord chooses to play this airflow management card, it should be good for business.
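Those energy cost figures follow directly from the two facility totals (1.4MW and 2.45MW) and an 8,760-hour year. A quick sketch under the article’s $0.10/kWh assumption:

```python
RATE_PER_KWH = 0.10   # article's assumed electrical rate
HOURS_PER_YEAR = 8_760

def annual_cost(total_load_kw: float) -> float:
    """Annual utility cost for a constant total facility load in kW."""
    return total_load_kw * HOURS_PER_YEAR * RATE_PER_KWH

print(f"${annual_cost(1_400):,.0f}")  # good airflow management: ~$1.23M
print(f"${annual_cost(2_450):,.0f}")  # high bypass airflow:     ~$2.15M
```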

Billing models represent another airflow management consideration for data center landlords. If good airflow management practices cannot be established or enforced, then, in a multi-tenant facility, it might make sense to meter at the PDU, normalize to the UPS output, and then apply the same PUE to all tenants supported by the shared infrastructure. That will work as long as there is not an outlier at either extreme. Otherwise, with by-the-kW billing, landlords should find it easy to enforce good airflow management behaviors because the results show up directly in their tenants’ fees. A contracted PUE billing model could create an adversarial relationship between landlord and tenant as the landlord tries to drive behaviors that are obviously self-serving, whereas a blended PUE billing model can allow the tenant to take advantage of lowered rates based on PUE reductions while the landlord enjoys the extra profits between contract renegotiation periods. In this billing scenario, the landlord can actually become the tenant’s partner in providing education to improve airflow management, thereby both increasing landlord profits and reducing tenant costs. If the landlord does not have the expertise to be an educational resource to the tenants, there are plenty of vendors, service or hardware, with the willingness and expertise to jump in and assist.

Finally, with such a partnership and associated educational effort, there is a variety of data center accessories that can contribute to improved airflow management, ranging from blanking panels to floor grommets to environmental sensors and associated systems, all the way to full cabinets for converting side-breathing hardware into front-to-rear airflow, rack mount boxes for redirecting the airflow patterns of specialty equipment, and even containment accessories and components. A data center landlord with a business model commitment to good airflow management practices may want to maintain an inventory of some of these airflow accessories. Depending on the billing model, accessories could be part of the overall service contract, or they could actually represent a small extra revenue opportunity for the landlord.
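As a rough illustration of the shared-PUE metering approach described above, here is a sketch of how a tenant’s bill might be computed: meter each tenant at the PDU, then gross the reading up by the facility’s blended PUE so shared cooling and overhead are allocated in proportion to IT usage. The tenant names, meter readings, and rate below are illustrative assumptions, not figures from the article:

```python
def monthly_bill(pdu_kwh: float, blended_pue: float,
                 rate_per_kwh: float = 0.10) -> float:
    """Gross the metered PDU reading up by the blended facility PUE."""
    return pdu_kwh * blended_pue * rate_per_kwh

# Hypothetical metered PDU consumption per tenant, kWh/month
tenants = {"tenant_a": 180_000, "tenant_b": 120_000}

for name, kwh in tenants.items():
    print(name, f"${monthly_bill(kwh, blended_pue=1.4):,.2f}")
```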

Airflow management is an important consideration to colocation data center landlords. In some cases, it represents a path to higher profits and in some cases, it represents a path to more competitive pricing in the marketplace. Oftentimes airflow management can contribute to a growing sense of partnership between landlords and tenants and thereby increase the perceived value of the landlord to the tenant. Then again, as we have seen with some service level agreements, airflow management may need to be abandoned entirely in order to meet contracted commitments. While we definitely see plenty of examples of this negative reinforcement, I suspect we may be looking at a looming category of Darwinian asterisks.

Ian Seaton

Data Center Consultant
