The Secret to Successfully Raising Data Center Temperatures
In the past couple of years, you would have been hard-pressed to attend a data center conference, pick up a data center-focused magazine or journal, or subscribe to a data center newsletter or blog without being lectured on the value of turning up the thermostat in your data center. That may no longer be the case, but reaping the benefits of a warmer data center requires more than merely turning up the thermostat. The fact of the matter is that with a rather typical set point of 72°F (resulting in supply air being delivered around 54°F), many data centers still require a significant over-supply of cool air volume to keep every server inlet in the space below 80°F. While we would call this a cold data center, it would be more accurate to call it a hot-and-cold data center, with 20°F differences between servers at the bottom of some racks and those mounted at the top. But if you merely turn up the set point in a data center like this, all you will accomplish is raising the inlet temperature on some servers to 85° or even 90°F. Unless, of course, you supply three times the volume of chilled air that your electronic equipment actually consumes.
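To see why the hottest inlets track the set point, consider a back-of-the-envelope two-stream mixing sketch. The 80% recirculation fraction and the exhaust temperatures below are assumed figures for illustration only, not measurements from any real room:

```python
def inlet_temp(supply_f, recirc_fraction, exhaust_f):
    """Inlet temperature (deg F) of a server whose intake is a blend of
    cold supply air and recirculated hot exhaust (two-stream mixing)."""
    return (1 - recirc_fraction) * supply_f + recirc_fraction * exhaust_f

# Assumed illustration: 54 deg F supply, neighboring exhaust at 86 deg F,
# and a top-of-rack server drawing 80% of its intake from that exhaust.
print(round(inlet_temp(54, 0.8, 86), 1))  # 79.6 -- right at the 80 deg F limit

# Raise the set point 6 deg F and the exhaust climbs with it:
print(round(inlet_temp(60, 0.8, 92), 1))  # 85.6 -- now some inlets run hot
```

In this simplified model, whatever you add to the supply temperature shows up roughly one-for-one at the worst inlets, which is exactly the mid-80s-to-90°F outcome described above.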
Before we look at the recommended precursors to raising the temperature in your data center, let’s look at what we might get by raising the set point without taking any of those initial steps. First, assuming we can produce enough air volume, we gain access to more free cooling hours. For example, a 6°F increase in set point in my home town of Austin would result in a 29% increase in free cooling hours. That might be enough to turn a marginal payback into an attractive one. Similar set point changes would produce 19% more free cooling hours in Boston, 14% in Chicago, 16% in Denver, a whopping 110% in Los Angeles and 22% in San Jose. Obviously, results vary by location. More than likely we were already over-producing supply air by maybe 1.5X, and now, with the higher set point, we may have to up that to 2.5X or higher. The net result: we burn more air movement energy, but we have increased our hours of free cooling.
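Those percentages come from comparing how many hours of the year the outdoor air is cool enough to do the job at each set point. A minimal sketch of that counting logic follows; the synthetic temperature list and the 5°F approach temperature are assumptions for illustration, not weather data for any of the cities above:

```python
def free_cooling_hours(hourly_temps_f, supply_setpoint_f, approach_f=5.0):
    """Count hours when outdoor air alone can meet the supply target,
    i.e. the outdoor dry-bulb sits at least approach_f below it."""
    threshold = supply_setpoint_f - approach_f
    return sum(1 for t in hourly_temps_f if t <= threshold)

# Synthetic stand-in for a year of hourly dry-bulb readings:
temps = [t for t in range(30, 96)]

before = free_cooling_hours(temps, 54)  # 20 qualifying hours
after = free_cooling_hours(temps, 60)   # 26 qualifying hours
print(round(100 * (after - before) / before))  # prints 30 (a 30% increase)
```

Real economizer studies run the same comparison against typical meteorological year (TMY) data for each city, which is why the percentage gain differs so much between, say, Chicago and Los Angeles.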
The often unstated best practice, and the enabling behavior for raising the temperature in the data center, is to establish full control over airflow management. This preliminary step involves plugging all the holes between the supply air side and the return air side: everything from cable access ports in floor tiles, to unused rack mount units in cabinets, to the open space between hot aisles and cold aisles, to the gaps created by equipment mounting rail set-backs. With all these holes plugged, the variation in server inlet temperatures throughout the data center should span no more than 5°F (2-3°F is achievable with discipline and well-engineered products). With this level of effective airflow management, the traditional set point can be abandoned in favor of setting the supply temperature 3-5°F below the desired maximum server inlet temperature. If this setting were 77°F, for example, then an Austin data center would see a 222% increase in free cooling hours. A data center in Boston would see a 79% increase, while Chicago would see 74%, Denver 66%, Los Angeles 198% and San Jose 41%. In addition to all the extra free cooling, good airflow management would allow reducing the volume of supplied air from 2.5X actual demand to 1.1X actual demand, a further 56% reduction in airflow that equates to over 90% in fan energy savings based on affinity law values.
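The fan-energy figure follows from the fan affinity laws: airflow scales linearly with fan speed, while fan power scales with roughly the cube of speed. A quick sketch checking the arithmetic behind the 56% and 90% figures:

```python
def fan_energy_savings(flow_before, flow_after):
    """Fraction of fan power saved when airflow drops, per the cube
    relationship of the fan affinity laws (power ~ speed^3 ~ flow^3)."""
    return 1 - (flow_after / flow_before) ** 3

# Trimming over-supply from 2.5x IT demand to 1.1x IT demand:
print(round(1 - 1.1 / 2.5, 2))                 # 0.56 -- 56% less airflow
print(round(fan_energy_savings(2.5, 1.1), 2))  # 0.91 -- ~91% fan energy saved
```

The cube relationship is why a modest-sounding airflow reduction produces such an outsized energy saving: cutting flow a little more than half leaves barely a tenth of the original fan power.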
The beauty of approaching the warming of the data center on this intelligent path is that when you’re done and counting all the money you’ve saved, no server will be breathing air any warmer than what many of them were already getting in the cold data center.