Data Center Heat Energy Re-Use Part 3b: Hot Water Cooling (Challenges and Metrics)

by Ian Seaton | Jul 8, 2020 | Blog

Hot water or warm water data center cooling provides access to maximum free cooling hours as well as the foundation for profitably harvesting data center heat energy – essentially converting waste into resource. What could be better than lower costs and extra income? Perhaps the more important question, therefore, is why we see so few examples of such obvious and reasonable common sense.

In my previous installment in this ongoing series exploring the different ways that data center operators capture and re-use the heat energy produced by their ICT equipment, I covered two representative case studies. Both examples used some form of liquid cooling to exploit the benefits of warmer water temperatures. The IBM Zurich SuperMUC “cooled” servers with 140˚F water applied directly to chips through innovative micro channels, while the eBay Project Mercury data center was able to raise its rear door heat exchanger supply water temperature from the low-to-mid 50˚F range to the mid-to-high 60˚F range. Project Mercury achieved increased free cooling hours in Phoenix. SuperMUC produced enough heat to power absorption chillers, which delivered cooling for around 600kW of storage and network equipment heat load, in addition to some comfort heating in adjacent labs.

In the first piece of this series we showed how higher temperatures, still within the ASHRAE recommended inlet air temperature range, could eliminate the need for generator block heaters, and in the second piece we explored how effective airflow management could have reduced the lift between the Westin data center exhaust temperature and the Amazon boiler temperature by 28%. Table 1 below provides a glimpse of the economic value that can be derived from the temperatures associated with liquid cooling, particularly with more direct contact than “proximity” liquid cooling.


| Technology | Air Cooled Data Center | Water Cooled Data Center | Two Phase Immersion Cooled Data Center |
|---|---|---|---|
| Waste Heat | 113˚F | 140˚F | 167˚F |
| HVAC/domestic hot water | Yes | Yes | Yes |
| District heating | Yes (with heat pump) | Yes | Yes |
| Boiler feed water pre-heating | No | Yes | Yes |
| Absorption refrigeration | No | Yes | Yes |
| Organic Rankine cycle | Yes (with heat pump) | Yes | Yes |
| Biomass processing | Yes | Yes | Yes |

Table 1: Potential Applications for Data Center Waste Heat

Data derived from Table 1 on page 15 of “Experimental and Numerical Analysis for Potential Heat Reuse in Liquid-cooled Data Centres,” Andreu Carbó, Jaume Salom, Mauro Canuto, Mario Macías, Jordi Guitart, Energy Conversion and Management, Volume 112, March 2016, pp. 135-145

For the purposes of today’s discussion, and because of the availability of case study information, I am limiting my focus to the top four technologies from Table 1, all of which can be elements of a single data center heat energy re-use implementation, such as the Zurich SuperMUC discussed last time. A key contributor to effectively harvesting waste heat energy is the absorption chiller: much discussed in the headier happy hour debates at ASHRAE, AFCOM and ASME; widely misunderstood; and even more widely ignored in actual practice, despite a good total cost of ownership story. For example, Bruno Michel, interviewed for our previous piece on the Zurich SuperMUC, declared that despite a 3X acquisition cost penalty for absorption chillers over traditional data center cooling infrastructure, they would start to pay for themselves in anywhere from two to five years, depending on utility costs and the temperature gradient between the heat supplied from the data center and the heat required to perform absorption chilling. The OPEX TCO at SuperMUC was driven by a coefficient of performance greater than 10 for the absorption chillers versus less than 5 for traditional chilled water cooling. Even more enticing, research conducted at Villanova University found paybacks as low as 4-5 months for absorption chillers in a 10MW data center (K. Ebrahimi, G.F. Jones, A.S. Fleischer, “Thermo-Economic Analysis of Steady State Waste Heat Recovery in Data Centers Using Absorption Refrigeration,” Applied Energy, Volume 139, 2015, pp. 384-397). Nevertheless, this next great thing has not quite rolled around to “next” yet, nor donned the mantle of greatness. Besides the general conservativeness of our industry, there are a variety of more specific obstacles to data center heat energy re-use, including:

Limitation of low quality heat

Lack of demand for heat energy

Requirement for supplementary heat production

High investment costs

Inconvenient infrastructure

Conflicting financial performance expectations between data center operators and heat energy consumers

Information security and reliability

Business models and mutually beneficial goals

Dynamic system variables conflict with requirements for optimized thermodynamics

Closely held trade secrets of success stories
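Before working through those obstacles, the TCO argument above can be put in rough numbers. The sketch below is illustrative only: the 1 MW IT load and the $0.10/kWh utility rate are my assumptions, not figures from the SuperMUC case; only the COP values come from the text above.

```python
# Illustrative comparison of annual cooling electricity cost at the COP
# values cited above (absorption chillers > 10 vs. < 5 for traditional
# chilled water). The 1 MW load and electricity price are assumptions
# for illustration, not figures from the SuperMUC case.

IT_LOAD_KW = 1000        # assumed IT heat load to be rejected
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10     # assumed utility rate, USD

def annual_cooling_cost(cop):
    """Electricity cost to reject IT_LOAD_KW of heat at a given COP."""
    chiller_kw = IT_LOAD_KW / cop    # electrical input to move the heat
    return chiller_kw * HOURS_PER_YEAR * PRICE_PER_KWH

traditional = annual_cooling_cost(5)    # COP < 5 per the article
absorption = annual_cooling_cost(10)    # COP > 10 per the article

print(f"Traditional chilled water: ${traditional:,.0f}/yr")
print(f"Absorption chillers:       ${absorption:,.0f}/yr")
print(f"Annual savings:            ${traditional - absorption:,.0f}/yr")
```

Doubling the COP halves the cooling electricity bill, which is the operating-cost engine behind the payback periods discussed above.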

I have already beaten up on low quality heat, and the simple answer is pretty straightforward: some form of direct-contact liquid cooling raises the quality of waste heat energy and often eliminates the need for any heat pump boost. In discussing the Westin/Amazon project, Jeff Sloan mentioned that while they were using absorption chillers for comfort cooling, the arrangement was nowhere near as efficient as the office heating installation, primarily because of the heat pump costs to get up to an absorption operating temperature. The 28% improvement potential from an optimized data center airflow management scheme would really help on the boiler temperature gradient, but would be a pretty small bump on the absorption chiller temperature gradient. Liquid cooling, on the other hand, greatly reduces and can even eliminate the gradient between the heat produced and the temperature required to operate absorption chilling.

Lack of demand for the heat energy produced by data centers, particularly in the United States, can be a show-stopper, or at least a significant complication. Heat energy is not nearly as transportable as electric energy, so a customer/consumer of heat energy in close proximity to the data center is critical. In northern Europe there is a built-in infrastructure of district heating networks with on-line demand for heat energy. Sweden has made this market structure especially appealing to data centers by creating Stockholm Data Parks adjacent to central heat pump stations connected to a municipal district heating network. Three data centers at Stockholm Data Park Kista supply a district heating network serving 35,000 apartments. In fact, Stockholm is recruiting data center construction with a promise of built-in market demand for heat energy, with the income from resultant energy sales sweetening the overall TCO package in place of typical U.S. tax incentives. Such market demand is not so easily accessible in most U.S. municipalities. That is why the Westin/Amazon project is such a perfect model for how to proceed with a data center heat energy re-use initiative. Essentially, the Amazon office buildings represent the equivalent of a local heating district “customer” for Clise Properties (the owner of the Westin carrier hotel), and Clise Properties and McKinstry Engineering formed an entity registered as an approved utility company. Amazon will avoid some 80 million kWh of heating energy cost, and Clise Properties will avoid the expense of running evaporation towers and of the resultant water loss.

Having a good model, however, is not the same as having a foolproof road map. The Massachusetts Institute of Technology, for example, as reported in the second piece of this series, ultimately had to abandon a similar plan when it was unable to overcome permitting obstacles for using canal water for cooling, and its low grade heat energy could not get past the gatekeeper where the laws of thermodynamics and economics prevail. Conversely, the DataBank – Georgia Tech collaboration in Atlanta’s ATL1 data center faced similar obstacles and found workable paths through and around them. According to Brandon Peccoralo, DataBank’s general manager of the Atlanta data center, the plan to develop an essentially closed-loop free cooling architecture with waste heat energy re-use required hitting a couple of curve balls.

The basic idea was to use spring water as a cooling source for chillers serving both the colocation data center, with standard perimeter cooling units in a more-or-less traditional air-cooled space (albeit with industry best practice airflow management, including hot aisle containment and chimney cabinets), and the Georgia Tech HPC. The excellent airflow management allows the traditional data center to enjoy the chiller efficiencies of 13˚F warmer supply water, while the Georgia Tech supercomputer’s rear door heat exchangers further raised chiller efficiency with 10-12˚F warmer water. The 90˚F “return” water is used to reduce the lift for heating boilers in an adjacent high rise, or to leverage the hot water in the building’s beams for ambient comfort heating. That high rise is effectively the heat rejection mechanism for the data center, eliminating the need for a rooftop heat rejection mechanical plant and the resultant evaporative water loss, and returning the water to the data center to remove heat again. Nevertheless, accessing the spring water required a demonstrated commitment to not change the level of the spring, so even with a closed-loop system, replenishing some system loss required rain water capture. Resultant complexities included treating the rain water to avoid introducing air pollutants into the spring water, monitoring spring water levels to trigger replenishment, and filtering Georgia clay out of the spring water. Despite the complexities, the presence of an adjacent customer for the data center heat, along with increasing the heat energy output through excellent data center airflow management, has produced results satisfactory to all parties. Exciting research with a local micro grid will be covered in a subsequent piece.
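The chiller efficiency gains at ATL1 can be ballparked. A common industry heuristic, not a figure from the DataBank/Georgia Tech case, is roughly 1.5% chiller energy savings per ˚F of warmer chilled water supply; the minimal sketch below applies that assumed heuristic to the temperature increases described above:

```python
# Rough estimate of chiller energy savings from warmer supply water at
# ATL1. The ~1.5%-per-˚F savings factor is an assumed industry
# heuristic, not DataBank/Georgia Tech data; only the temperature
# deltas come from the case description above.

GAIN_PER_DEG_F = 0.015   # assumed fractional chiller savings per ˚F

def chiller_savings(delta_t_f):
    """Approximate fractional chiller energy savings when supply water
    temperature is raised by delta_t_f degrees Fahrenheit."""
    return GAIN_PER_DEG_F * delta_t_f

colo_floor = chiller_savings(13)  # perimeter-cooled colo, +13˚F
hpc_floor = chiller_savings(11)   # rear door heat exchangers, +10-12˚F midpoint

print(f"Colo floor: ~{colo_floor:.1%} less chiller energy")
print(f"HPC floor:  ~{hpc_floor:.1%} less chiller energy")
```

Even with a conservative savings factor, double-digit percentage reductions in chiller energy are plausible from airflow management and rear door heat exchangers alone.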

A disappointing and rather spectacular failure to closely couple data center heat energy to a built-in heat energy demand was the cancellation of Alphabet’s Sidewalk Labs’ plan for a smart city district on Toronto’s Quayside waterfront. The very thing that made this proposal so compelling was also its Achilles heel. The plan took the elements of the Westin/Amazon model and the residential and consumer elements of the Swedish model and juiced them up with digital and lifestyle integration – i.e., a holistic vision, which is just what is needed to integrate technology and human elements into a major redevelopment project with economic, cultural and environmental benefits. However, in order to protect the deployment of that vision, Alphabet apparently asserted for itself a management role incompatible with local social expectations and political hierarchies. An overlooked element of the Quayside proposal was that it was going to tap into existing data centers for heat energy. While rolling the demand for heat energy up to an existing source of that heat may not be out of the question, since that was the Westin/Amazon model, it seems to me that better results could be achieved where the source is actually designed and built to be an energy source. For example, the Zurich SuperMUC, with low resistance direct hot water cooling, can initiate a much smoother handshake with a heat energy customer.

Any means of elevating the low grade heat energy produced by a data center requires both capital investment and additional operational expense. In most of the case studies I have reviewed, that means has been heat pumps. The key, again, is reducing the ΔT as much as possible between the data center heat production and the heat energy demand threshold. According to Bruno Michel, reducing the ΔT between the chip micro channels and the absorption chillers at the Zurich SuperMUC from 167˚F to 68˚F resulted in a 50% reduction in heat pump operating costs. With current absorption chiller technology, that ΔT approaches zero and there is no low grade heat energy issue. Even warm water cooling implementations such as Westin/Amazon and ATL1 deliver attractive ROI and payback from reduced boiler lift, even where the temperatures are not profitable for absorption chillers.
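The physics behind that heat pump savings can be sketched with the ideal (Carnot) heating COP, T_hot / (T_hot − T_cold) in absolute temperature: the smaller the lift, the less electricity per unit of delivered heat. In the minimal sketch below, the waste heat grades come from Table 1, but the 160˚F absorption driving temperature is my assumption for illustration:

```python
# Ideal (Carnot) heating COP = T_hot / (T_hot - T_cold) in absolute
# temperature, so shrinking the lift between the heat a data center
# produces and the heat an absorption chiller needs directly shrinks
# the heat pump's electricity draw. Waste heat grades are from Table 1;
# the 160˚F absorption driving temperature is an assumed illustration.

def f_to_kelvin(t_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (t_f - 32) * 5 / 9 + 273.15

def carnot_heating_cop(t_source_f, t_sink_f):
    """Upper bound on heat pump COP for a given temperature lift."""
    t_hot = f_to_kelvin(t_sink_f)
    t_cold = f_to_kelvin(t_source_f)
    return t_hot / (t_hot - t_cold)

SINK_F = 160  # assumed driving temperature for absorption chilling

for source_f in (113, 140):  # air-cooled vs. water-cooled waste heat
    cop = carnot_heating_cop(source_f, SINK_F)
    print(f"{source_f}˚F waste heat -> {SINK_F}˚F: ideal COP {cop:.1f}")
```

A higher ideal COP means less heat pump electricity per unit of delivered heat, which is why warmer water-cooled waste heat dramatically cheapens the upgrade, and why 167˚F two-phase immersion heat can skip the heat pump entirely.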

High investment costs can curb enthusiasm for heat energy re-use projects, particularly where both heat pumps and absorption chillers are involved, since the heat pumps represent an additive cost and absorption chillers represent about a 3X capital increase over a conventional chiller, CRAH and tower investment. Nevertheless, because the COP for absorption chillers is more than double that of traditional alternatives, the payback at the Zurich SuperMUC was around two years. Granted, the Zurich SuperMUC took advantage of the extremely high temperatures allowed by its low resistance direct contact liquid cooling solution, and that payback period could extend to five years with lower electricity costs or lower grade heat energy.

A significant deterrent to pursuing hot water or warm water data center liquid cooling in conjunction with improved grade heat energy re-use is the lack of data on the results of different types of implementation. In general, we see regular press releases announcing plans to do something creative with liquid cooling and heat energy re-use, but then we cannot find published reports, conference papers or presentations bragging about the results. While it is conceivable that we do not see much operational performance data because projects either failed or were cancelled on further consideration, the more likely explanation is that everyone in our industry is looking for a competitive edge and is therefore not motivated to educate the competition. That being said, we are thankful for the data provided by Westin/Amazon, Zurich SuperMUC and ATL1. Research conducted at Aalto University in conjunction with the Aalto Energy Efficiency Programme makes the most significant contribution to filling that information gap. I encourage my readers to read the paper cited as the reference for Table 2. Even though the data was developed for Finland, with several boundary conditions significantly different from those we find in the United States (such as ubiquitous district heating networks, access to free cooling for all or most of the year, and financial incentives for carbon emission reductions), the study clearly delineates its methodology and assumptions, and I quixotically hope that it could inspire a similar study for U.S. assumptions and conditions. The cost of electricity used for this study is $0.045 per kWh.

| | Without Waste Heat Re-Use | With Waste Heat Re-Use |
|---|---|---|
| Electricity for ICT equipment | 55 million kWh | 55 million kWh |
| Electricity for cooling | 10 million kWh | 10 million kWh |
| Electricity for heat pumps to upgrade heat from 68˚F to 175˚F | — | 29 million kWh |
| Total data center electricity use | 65 million kWh | 101 million kWh |
| Heat recovered for district heat | — | 71 million kWh |
| District heat output from DC | — | 1200 kW |
| Investment to upgrade heat | — | $5.6 million |
| District heat production, including electricity to run heat pumps | — | 100 million kWh |
| District heat production, cogeneration, solid fuel | 450 million kWh | 380 million kWh |
| District heat production, heat-only boiler, oil | 50 million kWh | 20 million kWh |
| Electricity production, cogeneration | 225 million kWh | 190 million kWh |
| Solid fuel use | 800 million kWh | 670 million kWh |
| Solid fuel cost | $17.9 million | $15 million |
| Income from cogeneration electricity (cost avoidance) | -$10 million | -$8.5 million |
| Oil cost | $4.6 million | $1.8 million |
| Electricity cost adder for district heating | — | $3 million |
| Heat upgrade investment payback | — | 5 years |

Table 2: Economic Evaluation of Waste Heat Utilization

(Data adapted from the table “Economic and Emission Evaluation of Waste Heat Utilization” on page 28 of “Future Views on Waste Heat Utilization – Case of Data Centers in Northern Europe,” Mikko Wahlroos, Matti Pärssinen, Samuli Rinne, Sanna Syri and Jukka Manner, Renewable and Sustainable Energy Reviews, Volume 82, Part 2, February 2018, pp. 1749-1764)

The conditions studied in this project produced a five year payback to produce a useable grade of heat energy. I refer my reader to the third row of data: this is that pesky ΔT between the heat produced by the data center and the heat required to do useful work. That is the same range that Bruno Michel discussed reducing at Zurich SuperMUC and that could actually be eliminated with most current direct contact hot water liquid cooling solutions. Doing so reduces the data center electricity consumption by 28% and eliminates or dramatically reduces the $5.6 million capital investment to upgrade the heat. Suddenly the financial analysis starts to look a lot closer to the two years they talked about for the Zurich SuperMUC. Even a 20% reduction of that ΔT like we saw at Westin/Amazon and ATL1 will carve some time out of that payback window.
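For readers who want to check the arithmetic, both the five-year payback and the 28% electricity figure can be reproduced from Table 2’s own rows. The mapping of rows to annual savings and costs in the sketch below is my reading of the table, not something stated in the paper:

```python
# Reconstructing the five-year payback in Table 2 from its own rows.
# All figures are annual, in millions of dollars; the assignment of
# rows to savings vs. costs is my interpretation of the table.

investment = 5.6                 # capital to upgrade the waste heat

solid_fuel_saved = 17.9 - 15.0   # less solid fuel burned for district heat
oil_saved = 4.6 - 1.8            # less oil burned in heat-only boilers
cogen_income_lost = 10.0 - 8.5   # less electricity sold from cogeneration
electricity_adder = 3.0          # added electricity cost, mostly heat pumps

annual_benefit = solid_fuel_saved + oil_saved - cogen_income_lost - electricity_adder
payback = investment / annual_benefit

# The heat pump row is also the source of the ~28% figure in the text.
heat_pump_share = 29 / 101       # heat pump kWh over total data center kWh

print(f"Net annual benefit: ${annual_benefit:.1f}M")
print(f"Simple payback:     {payback:.1f} years")   # rounds up to ~5 years
print(f"Heat pump share:    {heat_pump_share:.1%}")
```

The rows net out to roughly $1.2 million of annual benefit against the $5.6 million investment, consistent with the study’s stated five-year payback, and the heat pump load works out to just under 29% of total data center electricity.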

Liquid cooling has been touted by vendors as well as independent industry experts as a path to improved chip performance, greater access to free cooling, lower total cost of ownership and better ability to support very high densities. To that list of benefits should be added a viable income source from heat energy.

Real-time monitoring, data-driven optimization.

Immersive software, innovative sensors and expert thermal services to monitor, manage, and maximize the power and cooling infrastructure for critical data center environments.



Ian Seaton


Data Center Consultant
