Understanding Cooling Diversity within your Data Center

by Bill Kleyman | Sep 24, 2014 | Blog

Your data center is evolving to meet ever-changing user and business needs. Now here’s the reality: the pace at which this is all happening is only going to continue to accelerate. Already we are seeing higher-density environments being deployed, with a variety of gear supporting the whole ecosystem.

Consider this: the latest Cisco Global Cloud Index report points to continued, significant growth in global data center and cloud traffic (a quick sanity check of the projections follows the list below). For example:

  • Annual global data center IP traffic will reach 7.7 zettabytes by the end of 2017. By 2017, global data center IP traffic will reach 644 exabytes per month (up from 214 exabytes per month in 2012).
  • Global data center IP traffic will nearly triple over the next 5 years. Overall, data center IP traffic will grow at a compound annual growth rate (CAGR) of 25 percent from 2012 to 2017.
  • Annual global cloud IP traffic will reach 5.3 zettabytes by the end of 2017. By 2017, global cloud IP traffic will reach 443 exabytes per month (up from 98 exabytes per month in 2012).
  • Global cloud IP traffic will increase nearly 4.5-fold over the next 5 years. Overall, cloud IP traffic will grow at a CAGR of 35 percent from 2012 to 2017.
  • Global cloud IP traffic will account for more than two-thirds of total data center traffic by 2017.
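
To put those growth rates in perspective, the monthly figures follow directly from the 2012 baselines and the stated CAGRs. Below is a minimal Python sketch that reproduces the projections; the baseline and growth values come from the bullets above, and everything else is illustrative.

```python
def project_cagr(baseline_eb_per_month, cagr, years):
    """Project monthly traffic forward at a compound annual growth rate."""
    return baseline_eb_per_month * (1 + cagr) ** years

# Figures from the Cisco Global Cloud Index bullets above (2012 -> 2017).
dc_2017 = project_cagr(214, 0.25, 5)     # ~653 EB/month, in line with the cited 644
cloud_2017 = project_cagr(98, 0.35, 5)   # ~439 EB/month, in line with the cited 443

print(f"Data center traffic, 2017: ~{dc_2017:.0f} EB/month "
      f"(~{dc_2017 * 12 / 1000:.1f} ZB/year)")
print(f"Cloud traffic, 2017: ~{cloud_2017:.0f} EB/month "
      f"(~{cloud_2017 * 12 / 1000:.1f} ZB/year)")
```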

All of this means greater demands on your data center. These demands will revolve around efficiency, better power utilization, and better means of cooling the entire infrastructure. “There will likely always be great diversity in computer room cooling configurations and efficiencies,” says Lars Strong, Senior Engineer for Upsite Technologies. “Early adopters will lead innovation while others with fewer resources and less need for efficiency will trail the industry with older methods. Cooling technologies already being explored, such as dielectric fluid bath cooling, 100% free air cooling, and various forms of evaporative cooling, will continue to be explored and refined. Many of these are starting to make it into mainstream applications today.”

So what are some of the critical areas of diversity in data center cooling? There are three core principles to understand.

  • Control your rack. There are so many heterogeneous platforms out there now supporting a number of different cloud and virtualization technologies. Your racks may be unique. They may even be custom. Regardless of how they’re integrated, they’ll need to be cooled. Blanking panels and properly fitted rack cooling gear allow for diversity in rack and server deployments.
  • Control your floors/rooms. Think about sealing holes and openings: small optimizations that lead to big rewards. Cooling diversity in the data center means looking at all aspects of room and floor cooling controls. It can be as simple as deploying a brush grommet to limit the amount of bypass airflow.
  • Control your air. Does your data center need isolation? Does it need special baffles? Do you need to deploy modular cooling and airflow containment? These are next-gen questions for next-gen data center demands. Controlling cooling and airflow now requires diversity and some creativity. New modular containment designs allow for some remarkably efficient and powerful airflow management architectures (a rough airflow-balance sketch follows this list).
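
To make the airflow point concrete, a common rule of thumb relates IT load and server temperature rise to the volume of conditioned air required (CFM ≈ 3.16 × watts / ΔT in °F). The Python sketch below compares required airflow against delivered airflow to flag bypass or shortfall; the 8 kW rack and 20°F rise are illustrative assumptions, not figures from the article.

```python
def required_cfm(it_load_watts, delta_t_f=20.0):
    """Approximate airflow an IT load consumes: CFM ~= 3.16 * watts / delta-T (deg F)."""
    return 3.16 * it_load_watts / delta_t_f

def airflow_balance(it_load_watts, delivered_cfm, delta_t_f=20.0):
    """Compare delivered conditioned air against what the IT load actually consumes."""
    needed = required_cfm(it_load_watts, delta_t_f)
    if delivered_cfm >= needed:
        return f"~{delivered_cfm - needed:.0f} CFM of bypass airflow (wasted cooling)"
    return f"~{needed - delivered_cfm:.0f} CFM shortfall (recirculation and hot spots likely)"

# Illustrative example: an 8 kW rack with servers running a 20 deg F temperature rise.
print(f"Required airflow: ~{required_cfm(8000):.0f} CFM")
print(airflow_balance(8000, delivered_cfm=1500))
```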

So let’s put this all together: we know that data center demand will rise with the proliferation of cloud computing and user mobility. We also know that the data center will have to evolve to meet new demands around environmental control and efficiency. Even so, predicting data center diversity and cooling demands can still be a challenge. “The most significant wild card in trying to predict the future of cooling technology is the computers themselves, and what cooling requirements, or lack of requirements, they will have,” adds Strong.

“If the cost of ASHRAE Class A4 servers, meaning servers with allowable intake air temperatures of 41°F (5°C) to 113°F (45°C), declines significantly, then cooling configurations can change, allowing for far greater use of free cooling. If there are significant advancements in the power required for computing and storage, then the need for cooling will dramatically diminish.

“What we do know is that as the industry evolves there will be a much closer coupling of the load that needs to be cooled and the infrastructure used to cool it. Servers will talk to cooling units to control both the temperature and volume of conditioned air, or other fluid used for cooling. Systems will become much more integrated with more complex and automated controls logic.”
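
One way to picture the free-cooling upside Strong describes: if servers tolerate the full ASHRAE Class A4 allowable intake range of 5°C to 45°C, a large share of outdoor-air hours can qualify for free cooling. The sketch below counts eligible hours for a given climate; the A4 limits come from the quote above, while the synthetic temperature data is purely illustrative.

```python
import math

A4_ALLOWABLE_C = (5.0, 45.0)  # ASHRAE Class A4 allowable intake range cited above

def free_cooling_hours(hourly_temps_c, low=A4_ALLOWABLE_C[0], high=A4_ALLOWABLE_C[1]):
    """Count hours where outdoor air alone falls inside the allowable intake range."""
    return sum(1 for t in hourly_temps_c if low <= t <= high)

# Illustrative year of hourly outdoor temperatures (a crude sinusoidal climate model).
sample_year = [12 + 15 * math.sin(2 * math.pi * h / 8760) + 5 * math.sin(2 * math.pi * h / 24)
               for h in range(8760)]

eligible = free_cooling_hours(sample_year)
print(f"Free-cooling eligible: {eligible} of 8760 hours ({100 * eligible / 8760:.0f}%)")
```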

 


Bill Kleyman


CTO, MTM Technologies
