Data Center Cooling: Enterprise vs. Cloud

by Ian Seaton | Aug 17, 2022 | Blog

A data center is a data center and cooling is cooling. Are there variations and subsets? Of course. There are large data centers and there are small ones. There are Tier 1s and Tier 4s and everything in between. There are enterprise data centers and data centers that support the cloud. There are clear differences in cooling strategy between large data centers and small computer rooms, and between high availability data centers and “less critical” data centers. But what about enterprise data centers versus cloud data centers? Aren’t they just large data centers with availability addressed by other data centers out in the cloud, with cooling designed and deployed accordingly?

To a large degree that may be true, but there are some major strategic differences, and some much more subtle ones, between these subsets of data centers that call for different approaches to cooling. Today’s piece will not survey the actual differences in cooling technologies; rather, it will explain what might make the most sense given each type’s strategy and mission. Most of these differences already exist in practice, and some may not yet have been fully realized by industry participants.

Distinguishing Between Enterprise and Cloud

I am going to go out on a limb here and suggest that if you were to ask respective managers from an enterprise data center and a cloud data center what they wanted from their cooling approach, you might get pretty much the same answer from both: something along the lines of, “Keep my IT equipment cool enough for the lowest cost.” Pretty hard to argue with that. But then why should the two look any different at the end of the day?

A key distinguishing feature between the enterprise data center and the cloud data center is scale, with direct and obvious implications for the configuration of their cooling. While an individual enterprise data center may be of a relatively similar scale to an individual cloud data center, the ultimate reach of scale is going to be quite different. An enterprise data center may have a disaster recovery data center, but will still support its availability requirements with some level of redundancy on all critical elements of the mechanical plant. Historically, that protection has involved some N+1 or 2N arrangement of CRACs or CRAHs, chilled water loops, chillers, and power to all of those elements. More recently, with the greater proliferation of free cooling, an economizer system might be the primary method of cooling and the traditional mechanical plant the back-up.

While an enterprise data center needs to be protected, a cloud data center is merely one of several legs supporting the cloud. That is to say, the enterprise data center may accurately be labeled a mission critical facility, but the cloud data center is itself a redundant element among all the facilities passing workloads around. Therefore, a specific cloud data center is not going to require A and B chillers, A and B water loops, and A and B power feeds to the mechanical elements.
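To put rough numbers on that contrast, here is a minimal sketch of how redundancy schemes inflate installed cooling capacity. The 500-ton load and 250-ton chiller size are hypothetical round figures of my own, purely for illustration:

```python
# Illustrative comparison of installed cooling capacity under different
# redundancy schemes. The 500-ton load and 250-ton unit size are
# hypothetical round numbers, not data from any specific facility.

import math

def installed_units(load_tons: float, unit_tons: float, scheme: str) -> int:
    """Units required to serve the load under a given redundancy scheme."""
    n = math.ceil(load_tons / unit_tons)  # units needed just to carry the load
    if scheme == "N":       # no redundancy: the cloud-site approach
        return n
    if scheme == "N+1":     # one spare unit: a common enterprise minimum
        return n + 1
    if scheme == "2N":      # a full duplicate A/B plant
        return 2 * n
    raise ValueError(scheme)

for scheme in ("N", "N+1", "2N"):
    units = installed_units(500, 250, scheme)
    print(f"{scheme:>3}: {units} x 250-ton chillers installed for a 500-ton load")
```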

Cooling to Scale

While scale doesn’t always apply directly to individual site differences, I think we’ve seen enough examples now to affirm that, at least frequently, cloud data centers are going to be larger, and sometimes dramatically larger, than enterprise data centers. Part of that scale difference lies in the increments of growth, which should also translate into differences in cooling technologies. For example, if IT capacity grows at a rate of one or two servers at a time, then DX cooling might make a lot of sense. While it may take years to reach the capacity of a 200 ton, 500 ton, or 1000 ton chiller, 10-30 ton DX CRACs can be deployed in stages more aligned to IT load deployment, thereby delaying expenses until needed. On the other hand, if cloud data centers add a thousand or more servers at a time, much larger increments of cooling become practical. For example, cooling cells with energy recovery wheels, indirect evaporative cooling units, or air-side economizers can be deployed in increments in excess of a megawatt. It is also for such large increments of load deployment that modular solutions are likely most practical, whereby a one megawatt unit of ICT, cooling, electrical distribution, and network distribution could be delivered in one big package.
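As a back-of-the-envelope illustration of those increments, using the standard conversion of roughly 3.517 kW per ton of refrigeration (the load steps themselves are hypothetical examples, not figures from any particular site):

```python
# Rough sizing sketch: how incremental IT growth maps onto cooling
# increments. The 3.517 kW/ton conversion is standard; the load steps
# and unit sizes are hypothetical examples.

KW_PER_TON = 3.517

def tons_required(it_load_kw: float) -> float:
    """Cooling tonnage needed to reject a given IT load."""
    return it_load_kw / KW_PER_TON

# Enterprise pattern: a rack or two at a time (~10 kW steps)
print(f"10 kW rack step -> {tons_required(10):5.1f} tons (one small DX CRAC)")

# Cloud pattern: a thousand-plus servers at a time (~1 MW steps)
print(f"1 MW deployment -> {tons_required(1000):5.0f} tons (a megawatt-class cooling cell)")
```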

The Influence of Cost and Workloads

While every salesman working in the data center space has heard from both their IT and facilities contacts how important cost is to them, in the final analysis those contacts’ jobs depend on keeping the business running, and their design and operational decisions will always place cost, particularly ongoing operational cost, a distant second to the security and viability of their mission critical charter. For cloud data centers, on the other hand, the data center itself is the business and the associated profit center, so minimizing both investment and operational costs is their charter. The business model is to meet minimum service level agreements at the lowest possible cost. This strategic mission has both obvious and subtle implications for data center cooling. As discussed above, a cloud data center is not going to invest in robust systems to assure availability when the “work” can be done seamlessly in any one of numerous locations. Therefore, while we shouldn’t see redundant chillers or air handlers or economizers or towers, we also wouldn’t expect to see the complex environmental monitoring and feedback systems supporting intricate control schemes. That is not to say there is no complexity in cloud data centers; after all, it is not an easy task to determine that at a particular moment a data center 1,000 miles away has just realized an effective energy cost of $0.002 per kWh less than this data center, and therefore to immediately transfer 2 MW of IT load for the next 5 ½ hours.
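The arithmetic behind such a transfer decision is simple, even if the orchestration behind it is not. A minimal sketch, using the rate differential, load, and duration from the example above, with a PUE of 1.2 as my own assumed overhead factor:

```python
# Savings from shifting load to a cheaper site, per the example above.
# PUE of 1.2 is an assumed figure to account for cooling/distribution
# overhead riding on top of the IT load.

it_load_kw = 2000        # 2 MW of IT load to transfer
duration_h = 5.5         # hours the rate advantage is expected to hold
rate_delta = 0.002       # $/kWh advantage at the remote site
pue = 1.2                # assumed: total facility kW per IT kW

facility_kwh = it_load_kw * pue * duration_h
savings = facility_kwh * rate_delta
print(f"{facility_kwh:,.0f} kWh shifted -> ${savings:.2f} saved this event")
# Small per event, which is exactly why this only pays off when it is
# automated and repeated continuously across a large fleet.
```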

Such rapid workload transfers also affect the design and deployment of cloud data centers. For example, a 2 MW transfer of IT load would require nearly 600 tons of cooling standing by, ready to take it on. That could be one chiller and 20 CRAHs, all running but doing no work, because there is no way they could be turned on and actually delivering effective cooling at the drop of a hat. Alternatively, the cooling could be provided by large energy recovery wheel cells, indirect evaporative cooling modules, or air-side economizers, all of which are not only capable of operating at 10-20% of capacity, but which would consume less than 1% of their rated fan energy at such low rpms, and which can almost instantaneously ramp up to move much greater volumes of air. Therefore, cloud data centers would also favor chillerless cooling with the largest possible fan-capacity economizers that can be rapidly accelerated while delivering super economical performance at low utilization.
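That “less than 1%” figure falls straight out of the fan affinity laws, under the usual idealization that fan power scales with the cube of fan speed:

```python
# Fan affinity law sketch: power scales roughly with the cube of fan
# speed (an idealization that ignores motor/VFD efficiency losses).

def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of rated fan power drawn at a given fraction of rated speed."""
    return speed_fraction ** 3

for pct in (10, 20, 50, 100):
    frac = fan_power_fraction(pct / 100)
    print(f"{pct:3d}% speed -> {frac * 100:6.2f}% of rated fan power")
# At the 10-20% capacity cited above, fan energy is 0.1-0.8% of rated,
# hence the "less than 1%" figure.
```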

The cloud data center focus on lowest possible cost also has an interesting incidental result. Rather than blade systems or any kind of high performance server platform, cloud data centers typically favor the absolute cheapest, most generic 1U pizza box servers. One performance characteristic of these servers is a lower ΔT than most other server topologies; that is to say, these servers consume more air per watt and produce less of a temperature rise in the air passing through them. This lower ΔT means that these data centers will not harvest the capacity and efficiency benefits of cooling systems which derive their capacity thresholds from ΔT, which includes just about everything except straight air-side economization. Granted, we do see cloud data centers employing different types of economization, but the lower ΔTs mean they are leaving cooling capacity on the table and are not fully exploiting partial economization, which occurs when ambient conditions fall between supply and return temperatures. Enterprise data centers, on the other hand, particularly larger facilities with heavy transaction loads, may favor blade systems with their associated higher ΔTs, which means they can enjoy the capacity-stretching benefits of ΔT enabled by good airflow containment/isolation practices, and obtain the extra economies from partial free cooling delivered by such economization architectures as indirect evaporative cooling and energy recovery wheel cells.
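The “more air per watt” point can be made precise with the standard sensible-heat relation. A small sketch, where the two ΔT values are illustrative round numbers rather than measurements of any particular server:

```python
# Airflow required per kW of IT load as a function of server delta-T,
# from the sensible heat relation: airflow = power / (rho * cp * deltaT).
# The two delta-T values below are illustrative round numbers.

RHO_CP = 1.2 * 1005  # J/(m^3*K): density x specific heat of air, approx.

def airflow_m3s_per_kw(delta_t_c: float) -> float:
    """Cubic meters per second of air needed to carry away 1 kW."""
    return 1000.0 / (RHO_CP * delta_t_c)

for label, dt in (("1U pizza box (low delta-T)", 10), ("blade chassis (high delta-T)", 20)):
    m3s = airflow_m3s_per_kw(dt)
    cfm = m3s * 2118.88  # m^3/s to cubic feet per minute
    print(f"{label}: deltaT {dt} C -> {m3s:.3f} m^3/s ({cfm:.0f} CFM) per kW")
# Halving delta-T doubles the air a cooling system must move per kW,
# which is why low delta-T servers leave economizer capacity on the table.
```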

Key Takeaway

Despite both being rooms full of computers, enterprise data centers and cloud data centers differ enough in business objectives and mission that they would ideally incorporate quite different approaches to cooling. Some of those differences are clearly in place today, and some will become more apparent as the cloud matures. The “pay attention” story here, however, has to do with suggestions that the benefits of private cloud will drive more enterprise data centers to look and feel a little more like cloud data centers. If that possibility develops into a significant trend, the effect on the industries that support the data center industry could be profound.


Ian Seaton

Data Center Consultant
