A Look at Liquid Cooling Use-Cases and How You Should Prepare

by Bill Kleyman | May 22, 2019 | Blog

First of all, although this post will cover the concept of liquid cooling, we'll take a bit of a higher-level approach initially to better understand the market. With that said, liquid cooling is a very real technology being leveraged by numerous data centers for a variety of use-cases. A recent ResearchAndMarkets report shows that the data center liquid cooling market is expected to register a CAGR of over 25.2% during the forecast period 2019-2024.

Why is this happening? Rising investment in high-density technology, high-performance computing, and smart city initiatives is pushing state and local players to develop more reliable and efficient methods of cooling their data centers. Furthermore, ever-increasing volumes of data are creating demand for data centers, and these facilities consume a considerable amount of energy. In 2016, data centers consumed 416.2 terawatt-hours of energy, accounting for about 3% of global energy consumption and nearly 40% more than the entire United Kingdom. That consumption is expected to double every four years.
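
To make those growth figures concrete, here's a minimal back-of-the-envelope sketch (Python, my own illustration, not from the report) of what "doubling every four years" from the 416.2 TWh baseline and a 25.2% CAGR over 2019-2024 actually imply. The helper names are hypothetical; only the base figure and rates come from the text above.

```python
# Back-of-the-envelope sketch of the growth claims above.
# 416.2 TWh in 2016, consumption doubling every four years,
# and a market growing at a 25.2% CAGR over 2019-2024.

BASE_YEAR = 2016
BASE_TWH = 416.2          # reported global data center consumption in 2016

def projected_consumption_twh(year: int) -> float:
    """Project consumption assuming it doubles every four years."""
    return BASE_TWH * 2 ** ((year - BASE_YEAR) / 4)

def cagr_multiple(rate: float, years: int) -> float:
    """Overall growth multiple implied by a constant annual growth rate."""
    return (1 + rate) ** years

if __name__ == "__main__":
    for year in (2020, 2024):
        print(f"{year}: ~{projected_consumption_twh(year):.0f} TWh")
    # A 25.2% CAGR over the 5-year 2019-2024 window roughly triples the market.
    print(f"Market multiple 2019-2024: ~{cagr_multiple(0.252, 5):.2f}x")
```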

So, to help operations run more smoothly, the cooling systems in data centers are being scrutinized for efficiency. As the ResearchAndMarkets report points out, data centers are complex and carry uncertainty around quantity, timing, and location. Cooling systems need to handle high-density zones, which can be an onerous task for traditional cooling mechanisms. A typical data center cooling system must be pre-engineered, standardized, and modular, and it needs to be scalable and flexible enough to meet the data center's needs. That's difficult in today's world, with companies looking to cut costs and unwilling to spend much on high-end, customized cooling systems.

Case In Point: Powering the NREL HPC Cluster with Liquid Cooling

Let’s back up just a little bit. At the latest Data Center World conference, I attended a session focusing on liquid cooling. During that session, one of Upsite’s leading engineers, Lars Strong, discussed some really interesting concepts around liquid cooling and just how long it’s been around. Let’s start here; according to Lars, early concepts of liquid cooling have actually been around since 1887. Since then, the technology has certainly come a long way.

However, it’s really important to note that liquid cooling does have its place in the data center. You just need to know what it’s great for. Check out this great chart for more details.

That said, a great use-case for liquid cooling is highly dense systems that do a massive amount of processing, so high-performance computing (HPC) clusters definitely fit this scenario. With that in mind, let me give you a very cool recent use-case.

In the first half of 2018, as part of a partnership with Sandia National Laboratories, Aquila installed its fixed cold plate, liquid-cooled Aquarius rack solution for high-performance computing (HPC) clustering at the National Renewable Energy Laboratory’s (NREL’s) Energy Systems Integration Facility (ESIF).

This new fixed cold plate, warm-water cooling technology together with a manifold design provides easy access to service nodes and eliminates the need for server auxiliary fans altogether. Aquila and Sandia National Laboratories chose NREL’s HPC Data Center for the initial installation and evaluation because the data center is configured for liquid cooling and has the required instrumentation to measure flow and temperature differences to facilitate testing.

Setting an Example in Sustainable HPC Cluster Design

In building the data center, NREL’s vision was to create a showcase facility that demonstrates best practices in data center sustainability and serves as an exemplar for the community. The innovation was realized by adopting a holistic “chips to bricks” approach to the data center, focusing on three critical aspects of data center sustainability:

  • Efficiently cool the information technology equipment using direct, component-level liquid cooling with a power usage effectiveness (PUE) design target of 1.06 or better (see the PUE sketch after this list);
  • Capture and reuse the waste heat produced; and
  • Minimize the water used as part of the cooling process. There is no compressor-based cooling system for NREL’s HPC data center. Cooling liquid is supplied indirectly from cooling towers.
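
As a quick illustration of the first bullet, power usage effectiveness is simply total facility power divided by IT equipment power, so a 1.06 target leaves very little room for cooling overhead. The sketch below (Python, my own illustration; the wattages are hypothetical, only the 1.06 target comes from NREL) shows just how tight that budget is.

```python
# Minimal PUE sketch. Only the 1.06 design target comes from the article;
# the example wattages are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

it_load_kw = 1000.0    # hypothetical 1 MW of IT load
target = 1.06          # NREL's design target

overhead_kw = it_load_kw * (target - 1.0)
print(f"At PUE {target}, a {it_load_kw:.0f} kW IT load leaves only "
      f"{overhead_kw:.0f} kW for cooling, power distribution, and lighting.")
print(f"Check: PUE = {pue(it_load_kw + overhead_kw, it_load_kw):.2f}")
```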

To make all of this work as efficiently as possible, the Aquarius system uses several innovations to provide an energy-efficient, touch-cooling-based rack system designed for commercial, off-the-shelf dual-processor server systems.

Critical technology developments of the warm-water, fixed cold plate system are as follows:

  • Horizontally mounted water-driven fixed cold plates and manifold system
  • Compliant Thermal Interface Material (TIM), specially developed to balance stiffness and compliance in order to eliminate planarity issues and transfer heat efficiently to the fixed cold plate
  • Server tray with reliable lift mechanism to make firm contact with the fixed cold plates
  • 12-VDC Hot Swap Power Board (HSPB) compensating for rise and fall times to ensure DC power bus stability during servicing
  • Intelligent Platform Management Interface (IPMI) control circuitry with an Inter-Integrated Circuit (I2C) circuit on the HSPB, allowing for remote control of the node motherboards as well as granular power measurement and monitoring of all IPMI reporting data points (see the monitoring sketch after this list)
  • System-compliant compute node faceplates with a lit power button and activity lights. (Because there are no fans, the nodes are so quiet that visual feedback is needed to know whether the system is powered up.)
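
The IPMI bullet above is worth a quick illustration. The HSPB's I2C circuitry is Aquila's own design, but any IPMI-capable node can be polled remotely for power data in a similar spirit. The sketch below (Python, assuming a standard ipmitool installation and DCMI support on the node's BMC; the host address and credentials are placeholders) shows the general idea, not Aquila's actual tooling.

```python
# Hedged sketch: polling node power over IPMI with the stock ipmitool CLI.
# This is a generic illustration, not Aquila's HSPB/I2C implementation.
# Assumes ipmitool is installed and the BMC supports DCMI power readings.
import re
import subprocess

def read_node_power_watts(host: str, user: str, password: str) -> float:
    """Return the instantaneous power reading (watts) from a node's BMC."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    if not match:
        raise RuntimeError(f"Unexpected ipmitool output from {host}")
    return float(match.group(1))

if __name__ == "__main__":
    # Placeholder BMC address and credentials.
    watts = read_node_power_watts("10.0.0.10", "admin", "password")
    print(f"Node power draw: {watts:.0f} W")
```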

The cold plates themselves are essentially a sandwich of two stainless steel plates, with weld points inside that distribute turbulent flow between them.

Understanding the Results 

The HPC cluster installation was straightforward and easily integrated directly into the data center’s existing hydronic system. A round of discovery testing was conducted to identify the range of reasonable supply temperatures to the fixed cold plates and the impact of adjusting facility flow.

The results speak for themselves. Not only was the integration streamlined and smooth, but the key takeaway is that this fixed cold plate design captures a very high percentage of heat directly to water: up to 98.3% when evaluating the compute nodes alone, dropping to 93.4% when the Powershelf is included, because the Powershelf is not direct liquid cooled.
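
For context on how a number like 98.3% is measured: heat captured to water is typically computed from the coolant flow rate and the supply/return temperature difference (the instrumentation NREL's data center already has), then compared against the measured IT power. Below is a minimal sketch of that arithmetic; the flow, temperatures, and power figures are hypothetical, and only the general method is implied by the article.

```python
# Hedged sketch of a heat-capture-to-water calculation.
# Q_water = mass_flow * c_p * delta_T; capture fraction = Q_water / IT power.
# All input numbers here are hypothetical illustrations.

CP_WATER = 4186.0    # specific heat of water, J/(kg*K)
RHO_WATER = 997.0    # density of water, kg/m^3

def heat_to_water_kw(flow_lpm: float, t_supply_c: float, t_return_c: float) -> float:
    """Heat carried away by the water loop, in kW."""
    mass_flow_kg_s = (flow_lpm / 60.0 / 1000.0) * RHO_WATER   # L/min -> kg/s
    return mass_flow_kg_s * CP_WATER * (t_return_c - t_supply_c) / 1000.0

if __name__ == "__main__":
    q_water = heat_to_water_kw(flow_lpm=60.0, t_supply_c=27.0, t_return_c=37.0)
    it_power_kw = 42.5    # hypothetical measured IT load for the rack
    print(f"Heat to water: {q_water:.1f} kW")
    print(f"Capture fraction: {q_water / it_power_kw:.1%}")
```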

As of this writing, the cluster has required zero maintenance, and no water leaks have been observed.

Looking Ahead at the Future of Data Center Design

New solutions around data center design innovation are all about the use-case and the right fit. When it comes to liquid cooling, the technology has certainly come a long way. We're even starting to see more OEM server manufacturers incorporate some kind of liquid cooling design into a few of their server options. This means use-cases are growing, the technology is maturing, and organizations are keener to adopt liquid cooling. However, it really does come down to your use-case. And, as Lars noted in his Data Center World session, in most data centers, liquid and air cooling will not be mutually exclusive. There will be a place for both technologies in the data center; it just comes down to finding the right mix.

To make your own design successful, you don’t have to navigate the liquid cooling waters alone. Work with a good design partner that’ll outline your use-case, help you architect the right solution, and ensure you see the results you need for your organization.


Bill Kleyman

Industry Analyst | Board Advisory Member | Writer/Blogger/Speaker | Contributing Editor | Executive | Millennial

Bill Kleyman is an award-winning data center, cloud, and digital infrastructure leader. He was ranked globally by an Onalytica Study as one of the leading executives in cloud computing and data security. He has spent more than 15 years specializing in the cybersecurity, virtualization, cloud, and data center industry. As an award-winning technologist, his most recent efforts with the Infrastructure Masons were recognized when he received the 2020 IM100 Award and the 2021 iMasons Education Champion Award for his work with numerous HBCUs and for helping diversify the digital infrastructure talent pool.

As an industry analyst, speaker, and author, Bill helps the digital infrastructure teams develop new ways to impact data center design, cloud architecture, security models (both physical and software), and how to work with new and emerging technologies.
