Delta T by the Dozen
On those occasions when we are fortunate enough to share a lunch or happy hour with a vendor of any sort of data center mechanical plant system, accessory or mechanical infrastructure, the only phrase we might hear more often than “ΔT” would be, “How many do you want?” Despite the ubiquity of ΔT as an explanation for the value of any mechanical product or service, I do not believe the term (aka temperature differential) is understood clearly enough in relation to the wide range of efficiency and effectiveness metrics in our data centers. I have previously addressed The Four Delta Ts of Data Center Cooling on this blog, and subsequently addressed the two primary ΔTs and then the two secondary ΔTs which define the ‘meaningfulness’ of the two biggies. Today, I spread the net a bit further and capture an even dozen, all of which play a role in either measuring or establishing the performance effectiveness of our data center mechanical plant.
Server In/Server Out
This is the base ΔT of the data center. That is to say, this temperature differential is the temperature rise through a piece of ICT equipment; in other words, the difference between the server inlet air temperature and the server exhaust air temperature. An aggregate for the data center might be referred to as the cumulative server in/server out ΔT. Theoretically, in a perfect data center, this ΔT would be the same as the average cooling coil ΔT, but that never happens, except perhaps by accident. What this cumulative server in/server out ΔT actually represents is a weighted mean for the data center, and our weighting factor is CFM. Therefore, using a simple weighted mean equation such as:
ΔT = (W1X1 + W2X2 + W3X3 + … + WnXn) / (W1 + W2 + W3 + … + Wn)
and a data pool in which we have:
Some number of servers consuming 10,000 CFM with a 16˚F ΔT
Some number of servers consuming 100,000 CFM with a 20˚F ΔT
Some number of servers consuming 500,000 CFM with a 22˚F ΔT
Some number of servers consuming 75,000 CFM with a 26˚F ΔT
Some number of servers consuming 150,000 CFM with a 30˚F ΔT
We would arrive at a weighted mean server in/server out ΔT of 23.5˚F. Of course, in a data center where all the equipment was exactly the same and all of it was running the same workloads, any individual server would be a proxy for the whole data center and this calculation would be much simpler. However, in reality, our count of servers may not be an accurate weighting factor. For example, the weighting for our 30˚F ΔT could be dramatically deflated by counting blade server chassis or dramatically inflated by counting individual blades in each chassis.
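The CFM-weighted mean above can be reproduced in a few lines. This is a minimal sketch using the example populations from the list; only the CFM and ΔT pairs come from the text, the variable names are my own.

```python
# CFM-weighted mean of server in/server out delta-T.
# (CFM, delta-T in deg F) pairs taken from the example above.
populations = [
    (10_000, 16),
    (100_000, 20),
    (500_000, 22),
    (75_000, 26),
    (150_000, 30),
]

total_cfm = sum(cfm for cfm, _ in populations)
weighted_dt = sum(cfm * dt for cfm, dt in populations) / total_cfm
print(f"Weighted mean delta-T: {weighted_dt:.1f} F")  # 23.5 F
```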
CRAC/CRAH In/Out
This ΔT is the difference between the cooling unit supply air temperature and return air temperature. Some wise data center pros will assert that if we can only track one ΔT, then it should be this one. They would frequently be right, though not always, and maybe not even usually. The thought here is that if this ΔT exceeds 20˚F (or some proxy for server in/server out), then there is some re-circulation going on in the data center; if this ΔT is less than 20˚F, then we have some bypass going on. The obvious conclusion, then, would be that a 20˚F ΔT means everything is working perfectly. Maybe. However, if the actual server in/server out ΔT is 30˚F, then that 20˚F coil ΔT indicates some serious bypass airflow problems. Conversely, if our server in/server out ΔT is 16˚F, then our 20˚F ΔT should be telling us we have some hot air re-circulation going on somewhere. Even more interestingly, it is quite possible to have CRAH/CRAC in/out ΔT exactly equal to our server in/server out ΔT and still have a mess on our hands. For example, we could have massive amounts of bypass airflow mixing with massive amounts of re-circulated (re-heated) hot return air, netting out to a ΔT pretty close to our calculated target based on the server weighted mean ΔT.
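The comparison logic above can be sketched as a small diagnostic function. The function name and the 1˚F tolerance are my own illustrative choices, not a standard; as the paragraph warns, a "balanced" result can still hide offsetting bypass and re-circulation.

```python
def diagnose_airflow(crah_dt, server_dt, tolerance=1.0):
    """Compare cooling unit (CRAC/CRAH) in/out delta-T against the
    server in/server out delta-T. Tolerance is illustrative only."""
    if crah_dt > server_dt + tolerance:
        return "re-circulation: return air re-heated by server exhaust"
    if crah_dt < server_dt - tolerance:
        return "bypass: cold supply air returning without doing work"
    return "balanced (but offsetting bypass + re-circulation can net out here)"

# 20 F coil delta-T against a 30 F server delta-T flags bypass:
print(diagnose_airflow(crah_dt=20, server_dt=30))
```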
Coil/Inlet
This ΔT is the difference between the temperature of the air leaving the cooling coils, i.e., the supply temperature, and the inlet temperature our ICT equipment is receiving. This ΔT is one of our checks on the relationship between our server in/server out ΔT and our CRAC/CRAH in/out ΔT. Historically in legacy data centers, we have seen supply temperatures less than 60˚F and associated servers ingesting air anywhere from 70˚F to over 80˚F, though today we see that ΔT frequently greatly reduced by good airflow management practices. Theoretically, we should expect:
CRAC/CRAH in/out ΔT = server in/server out ΔT + coil/inlet ΔT
Whenever the coil/inlet ΔT exceeds 0˚F, we have room for improvement; whenever it exceeds 5˚F we have a need for an airflow management project. In addition, when our statement turns out to be ≠ instead of =, then we have other airflow management problems to research. For example, if server in/server out = CRAC/CRAH in/out, drinks are on us. However, if server in/server out – CRAC/CRAH in/out + coil/inlet ΔT > 0˚F, then our original equivalent statement is misleading and the result of some bypass airflow someplace and evidence of wasted fan energy and perhaps wasted chiller energy as well.
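The bookkeeping check described above can be expressed as a residual. This is a sketch under the relationship stated in the text; the function name and the example temperatures are mine.

```python
def delta_t_residual(server_dt, crah_dt, coil_inlet_dt):
    """Residual of the expected relationship
       CRAC/CRAH in/out = server in/server out + coil/inlet.
    A positive residual points to bypass airflow (wasted fan energy)."""
    return server_dt - crah_dt + coil_inlet_dt

# Healthy room: 20 = 18 + 2, so the residual is zero.
print(delta_t_residual(server_dt=18, crah_dt=20, coil_inlet_dt=2))  # 0
# Coil delta-T depressed below server delta-T + coil/inlet: bypass.
print(delta_t_residual(server_dt=22, crah_dt=18, coil_inlet_dt=2))  # 6
```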
Top Rack/Bottom Rack
The ΔT between the inlet temperature of servers at the bottom of the rack versus servers at the top of the rack may be a fine-tuning factor of the coil/inlet ΔT, or it may just be telling us that the coil/inlet ΔT is meaningless nonsense without another variation of the weighted mean exercise. Regardless, this ΔT is a further check on the value of equilibrium between server in/server out and CRAC/CRAH in/out. For example, if top rack/bottom rack > 0 and there is equilibrium between server in/server out and CRAC/CRAH in/out, then that equilibrium has been achieved by bypass airflow and there are untapped opportunities for fan energy savings. Furthermore, if this ΔT = 0, we can only celebrate if we keep that metric in context with the previous ΔTs. For example, I have seen plenty of examples where this ΔT is pretty close to zero as a result of such a large volume blast of cold air that our hot aisles could be somebody else’s cold aisles. If server in/server out > CRAC/CRAH in/out, then our top to bottom equilibrium was achieved by oversupplying some volume of bypass airflow.
Server Out/CRAH In
This ΔT refers to the temperature of air evacuating our servers versus the temperature of air received on return at our cooling equipment. Server out > CRAH in means we have bypass airflow in our data center somewhere. Server out < CRAH in means we have some hot air re-circulation and we are probably compensating for that condition with an unnecessarily low and costly temperature set point.
Coil Approach
Coil approach is the ΔT between the chilled water or refrigerant entering the CRAH/CRAC cooling coils and the supply air temperature delivered into the data center. The higher this ΔT, the lower our fan and pump costs; the lower this ΔT, the lower our chiller plant costs and the more access we have to free cooling hours, assuming we have that capability. My readers can find tools for calculating these relationships in Cooling Efficiency Algorithms: Coil Performance and Temperature Differentials, and Cooling Efficiency Algorithms: Chiller Performance and Temperature Differentials. All things being equal, a lower coil approach ΔT will in most cases support lower operating energy costs. Conversely, with excellent data center airflow management that supports higher supply temperatures, we can sneak up on that ‘have-your-cake-and-eat-it-too’ territory by keeping our coil temperature high while allowing the supply air temperature to creep up, resulting in fan, pump, chiller and economizer savings.
Coil ΔT
Coil ΔT is the difference between the temperature of the chilled water coming into the coils from the chiller or economizer heat exchanger and the temperature of the water leaving the coils after it has picked up heat from the data center return air. A higher coil ΔT indicates more heat is being removed from the data center. That plot point can be explained by greater efficiency at heat capture, but it can also be a false marker created by hot air re-circulation in the data center. If a lower coil ΔT is driven by higher chilled water temperature, that ΔT will be indicative of better chiller efficiency and/or access to more free cooling hours. Pump energy may be increased to prevent the coil ΔT (heat removal) from getting too low, but my experience has been that chiller and economizer savings far outweigh any increases in pump energy.
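The relationship between water flow, coil ΔT, and heat removed can be quantified with the standard water-side rule of thumb, Q (BTU/hr) = 500 × GPM × ΔT˚F. The 500 multiplier approximates 8.33 lb/gal × 60 min/hr × 1 BTU/(lb·˚F); the flow rate and ΔT below are illustrative numbers, not from the text.

```python
def water_heat_btuh(gpm, delta_t_f):
    """Heat picked up by a chilled water stream, using the rule of thumb
    Q (BTU/hr) = 500 x GPM x delta-T (500 ~= 8.33 lb/gal x 60 min/hr
    x 1 BTU per lb-F)."""
    return 500 * gpm * delta_t_f

# Example: 300 GPM with a 12 F coil delta-T.
q = water_heat_btuh(gpm=300, delta_t_f=12)
print(f"{q:,.0f} BTU/hr = {q / 12_000:.0f} tons")  # 1,800,000 BTU/hr = 150 tons
```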
Chiller ΔT
Chiller ΔT, i.e., leaving water temperature (LWT) versus entering water temperature (EWT), should be the same as coil ΔT, minus any heat added by pump motors between the cooling units and the chiller. The story is the same as the coil ΔT story: a higher ΔT indicates greater efficiency in heat removal, whereas a lower ΔT (resulting from higher LWT) indicates a lower operating cost (kW per ton). Oversizing the chiller provides a path to both efficient heat removal and lower operating cost.
Economizer Approach
The economizer approach temperature is the ΔT between the scavenger water or glycol entering the heat exchanger from the cooling tower and the water leaving the heat exchanger to be returned to the data center chilled water loop. Allowing this approach to rise saves pump energy; driving it lower keeps the economizer active for more hours as a replacement for the chiller.
Tower Approach
Tower approach temperature is the ΔT between the ambient wet bulb temperature and the water temperature being provided to the condenser side of the chiller or to the scavenger side of the economizer heat exchanger. Operational manipulations to reduce this ΔT are not particularly effective. Recall for a moment what you have learned about fan and pump affinity laws. We know that if we decrease fan speed, and therefore throughput, by 25%, we get a 58% savings in fan energy (0.75³ = 0.42, and 100% − 42% = 58%), but when we apply the cube root going the other way, we do not have such cause for celebration. For example, if we want to increase our surface area for evaporation by ramping up cooling tower fans with a 25% increase in applied energy, we get less than an 8% increase in fan volume and tower performance (1.25^(1/3) = 1.077). I do not call that a great trade-off. Over-sizing the tower is generally a much more effective way to reduce the approach ΔT and thereby increase access to free cooling hours.
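The affinity-law arithmetic above is easy to check. This sketch encodes the cube law in both directions, reproducing the 58% savings and sub-8% gain figures; the function names are mine.

```python
# Fan affinity laws: flow scales linearly with speed, power with its cube.
def power_ratio(flow_ratio):
    """Fan power relative to baseline for a given flow (speed) ratio."""
    return flow_ratio ** 3

def flow_ratio(power_ratio):
    """Flow delivered relative to baseline for a given power ratio."""
    return power_ratio ** (1 / 3)

# Turning fans down 25% saves roughly 58% of fan energy...
print(f"{1 - power_ratio(0.75):.0%}")  # 58%
# ...but spending 25% more energy buys under 8% more airflow.
print(f"{flow_ratio(1.25) - 1:.1%}")   # 7.7%
```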
Return Air ΔTs
We get a two-fer with our data center return air in economizer situations. This one is pretty straightforward. When the ambient condition (minus cumulative approaches) is less than maximum supply air temperature set point, we are in 100% economization mode. When the ambient condition (minus caveats) is greater than maximum supply set point but less than return temperature, we can move into partial economization mode. Our outside air temperature or cooling tower LWT is not cool enough to cool the data center, but it is cool enough to mix with the return air (or water) and effectively reduce the load on the mechanical plant. I have provided an example tool for calculating this value in Cooling Efficiency Algorithms: Economizers and Temperature Differentials (Waterside Economizers – Series).
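The partial economization case above is a simple mixing calculation. This is a sketch of the best-case load offset when the economizer stream is warmer than the supply set point but cooler than the return; the function, temperatures, and the assumption of ideal mixing are all mine, and real systems will do worse.

```python
def partial_economizer_offset(t_return, t_supply, t_econ):
    """Best-case fraction of the cooling load an economizer stream can
    absorb by mixing, when it cannot reach the supply set point alone.
    Pure mixing arithmetic; all temperatures are illustrative (deg F)."""
    if t_econ <= t_supply:
        return 1.0   # cold enough for 100% economization
    if t_econ >= t_return:
        return 0.0   # no useful pre-cooling available
    # Mix down toward t_econ; the mechanical plant covers the remainder.
    return (t_return - t_econ) / (t_return - t_supply)

# 95 F return, 75 F supply set point, 80 F economizer water:
print(f"{partial_economizer_offset(95, 75, 80):.0%}")  # 75%
```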
Containment In/Out
By in/out ΔT I am describing the temperature difference between the inside of a containment aisle and the outside of that containment aisle. According to the textbook, various vendor assertions, and countless seminars, conference presentations and papers produced by Professor Seaton, this ΔT will essentially be the same as the baseline server in/server out ΔT. That’s the plan, and it is a good plan, though not always elegantly executed. When this pair of temperature differentials does not match, or does not at least “essentially” match, then we have cause for a detective project – find the leak. Sometimes these leaks can be identified in short order, but sometimes they can be a bit sneaky. Several years ago a research project turned into a leak-hunting detective project. It found that a major path for bypass airflow in containment aisles was through dead, comatose, or sleeping servers, particularly when cold aisles were being over-pressurized. Furthermore, hard-working servers were also found to allow bypass airflow, particularly with grossly unbalanced aisle static pressures.
Dry Bulb/Wet Bulb
This differential between dry bulb temperature and wet bulb temperature is a throw-in to make a baker’s dozen. This ΔT is unique in that it is the one item on this list over which we really do not have any control, unless we are working with a realtor to find a location for our new data center. Since wet bulb temperature is determined by evaporation and the resultant heat removal, at 100% relative humidity there is no difference between wet bulb and dry bulb temperature; consequently, the lower the relative humidity, the bigger the difference between the wet bulb and the dry bulb temperature. These differences affect cooling tower performance, waterside economization availability and indirect evaporative cooling availability.
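For readers who want to put numbers on this ΔT, the Stull (2011) empirical fit estimates wet bulb from dry bulb and relative humidity to within a few tenths of a degree over ordinary ambient conditions. The sketch below uses that published approximation; the function name and example conditions are mine.

```python
import math

def wet_bulb_c(t_dry_c, rh_pct):
    """Stull empirical wet-bulb approximation; dry bulb in deg C, RH in %."""
    return (t_dry_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_dry_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# At 100% RH the wet bulb matches the dry bulb; drier air opens a wide delta-T.
print(round(wet_bulb_c(20, 100), 1))  # ~20.0
print(round(wet_bulb_c(20, 30), 1))   # ~10.8
```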