How Cloud is Redefining Cooling and Thermal Efficiency Best Practices
The components within our data centers are changing very quickly. The density of the cloud is demanding a new approach to data center design and deployment. We are seeing more data center interconnectivity, more visibility into distributed environments, and a lot more resources being shared. Through it all – data center technologies sit at the forefront of all modern IT systems. Mobility, cloud computing, and the evolving user are all driving change for data center operators. The important piece to understand is that the data center will still be the critical component for all modern technologies. And, moving forward, there will be even more demand around data center resources.
This is where the cloud platform comes in. Micro-data centers and branch locations are changing the way we deliver rich content and data. These smaller locations still need to have resources and still need to be controlled efficiently. There are a lot of new kinds of data centers to consider when working with cloud or even hybrid cloud technologies. These include:
- Large enterprise data centers
- Large government and federal data centers
- Branch locations and data centers
- Micro-data centers (often used for private cloud extension into the public cloud)
- Edge data centers for content delivery
- Dedicated disaster recovery sites
- Custom-built, container-based, data centers
- Application-specific data center sites
The list can go on. The point is that all of these data center points can (and do) now directly interact with a variety of cloud resources. When working with this many data center solutions – it’s important to look at cooling from an entirely new perspective. How are you creating better best practices around your cooling infrastructure? Are you enabling real-world thermal efficiencies around your ecosystem? Most of all – what’s holding you back from making real cooling efficiency breakthroughs?
Let’s start here – we recently saw the latest AFCOM State of the Data Center report and some of the statistics it provided around cooling, power, and your data center.
- 70% of respondents indicated that power density (per rack) has increased over the past 3 years.
- 26% indicated that this increase was significant.
This has forced managers to look at new and creative ways to power their data centers. For example, 34% have either deployed or are planning (within 18 months) to deploy a renewable energy source for their data center. Of those respondents using renewable energy, we saw the following used systems:
- Solar: 70%
- Natural Gas: 50%
- Wind: 50%
- Water: 27%
- Geo-Thermal: 10%
Because so many services now depend on the data center, redundancy and uptime are major concerns. We saw fairly steady trends around redundant power levels spanning today and the next three years. For example, at least 55% already have – and will continue to have – N+1 redundancy levels. Similarly, no more than 5% of respondents either currently have, or will have, 2(N+1) redundant power systems. For the most part, data center managers are using at least one level of redundancy for power.
Like power, cooling must be a big consideration in the cloud and digital age. Data centers are increasing density, and cooling is critical to keep operations running efficiently. When we look at cooling, more than 58% indicated that they currently run, and will continue to run, at least N+1 redundant cooling systems. Both today and three years from now – 18% will operate an N+2 cooling redundancy architecture.
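To make the N+x terminology concrete, here is a small, hypothetical sketch (not from the report) that computes the redundancy level of a cooling plant: N is the number of units needed to carry the heat load, and x is how many units can fail with the load still covered. The unit capacities and loads are illustrative assumptions.

```python
import math

def redundancy_level(unit_capacity_kw: float, num_units: int, load_kw: float) -> int:
    """Return x in 'N+x': how many units can fail while the rest still cover the load."""
    required = math.ceil(load_kw / unit_capacity_kw)  # N: units needed at full load
    return num_units - required                       # x: spare units beyond N

# Example: six 30 kW CRAC units cooling a 150 kW load -> N+1
print(redundancy_level(30, 6, 150))  # 1
# Adding a seventh unit would yield N+2
print(redundancy_level(30, 7, 150))  # 2
```

Real capacity planning also has to account for part-load efficiency and unit placement, but the arithmetic above captures what the survey's N+1 and N+2 figures mean.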
With all of this in mind, what’s actually holding you back from moving to newer and better levels of data center cooling efficiency? A recent whitepaper authored by Anixter examined five common data center challenges revolving around cooling:
- Increasing cabinet densities. Cabinet densities are on the rise for many data centers today. Although they aren’t rising as much as once thought, there are several applications that require a large investment of power. These cabinets can require a different approach to cooling than the rest of the environment.
- Operational budget cuts. Many data center managers are being asked to reduce operational expenses and think that increased thermal efficiency requires significant capital investment.
- Lack of knowledge of airflow management best practices. Just understanding the right techniques can be a challenge. The impact of deploying blanking panels, removing cabling from under the floor and using cable-sealing grommets can pay huge dividends.
- Matching cooling to IT requirements. An efficient cooling system means that the right amount of cooling is being delivered to satisfy the IT equipment’s demands. Because IT’s requirements change dynamically, the cooling system should be adjusted frequently, but the information required to do that isn’t always provided or accessible.
- Overwhelming thermal design considerations. There are a lot of options and methodologies out there to cool a data center. In addition, there are several options to separate supply and return air. In light of this, choosing the best approach can be difficult.
Fortunately, there’s good news. New ways of controlling your environment can help control costs and optimize resources. As the Anixter whitepaper outlines, due to the cost of operating a data center and the rise in cabinet densities, data center managers today are looking for a more efficient, reliable way of delivering air to the equipment in the cabinet.
Conditional environmental control is the process of delivering the exact amount of supply air at an ideal temperature and moisture content to maximize the cooling system’s efficiency and improve equipment uptime.
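To ground that definition, the "exact amount of supply air" for a given IT load can be estimated with the common sizing rule of thumb CFM ≈ 3.16 × watts / ΔT(°F), which assumes air at roughly sea-level density. The sketch below is illustrative only; real designs need vendor data and safety margins.

```python
def required_airflow_cfm(it_load_watts: float, delta_t_f: float = 20.0) -> float:
    """Approximate supply airflow (CFM) needed to remove a given IT heat load.

    Uses the rule of thumb CFM ≈ 3.16 × watts / ΔT(°F); ΔT is the
    temperature rise of the air as it passes through the equipment.
    """
    return 3.16 * it_load_watts / delta_t_f

# A 5 kW cabinet with a 20 °F (≈11 °C) temperature rise:
print(round(required_airflow_cfm(5000)))  # 790 CFM
```

Oversupplying beyond this figure wastes fan energy; undersupplying starves the equipment and creates hot spots, which is exactly the balance conditional environmental control tries to strike.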
There are several key drivers as to why a data center would want to adopt this approach in a new or existing environment:
- Reduces the likelihood of hot spots and cold spots that can lead to equipment failure and added energy usage
- Regains stranded cooling and energy capacity, which allows for additional IT growth to support the business while minimizing unnecessary capital expenditures
- Reduces operational expenses through the reduction in energy consumption by the cooling equipment
- Enhances the business’ environmental image
And, to achieve conditional environmental control, the following four best practices are recommended:
- Supply pressure. A difficult thermal management challenge that many data center operators have to face is how to deliver the proper amount of cooling that the IT load requires without oversupplying. Achieving a balance means the cooling system is using the minimum amount of energy required to deliver the supply air to where it needs to go, thereby reducing cost.
- Supply temperature. Achieving the right supply temperature helps maximize the efficiency of the cooling system. It has been well documented that increasing the operating temperature will reduce the energy required to operate the cooling equipment, thus providing substantial operational savings.
- Airflow segregation. The main purpose of any airflow segregation strategy is to isolate the cool supply air from the hot exhaust air, which prevents airflow recirculation. The less airflow recirculation there is in the room, the better temperature control the data center operator will have and the lower the likelihood of data center hot spots. Airflow segregation, along with the other best practices of conditional environmental control, allows for a number of data center airflow control benefits and cost savings.
- Airflow control. Airflow control can be defined as the process of directing airflow from the cooling unit to the IT equipment and back. Air flows much like water and will generally take the path of least resistance, which is why it is so critical to guide the air to where it needs to go. Making sure the supply pressure is balanced, the temperature is at an ideal level, and the supply and return air are segregated will improve overall efficiency and make for a more reliable environment for the IT equipment. Control over the airflow will ultimately provide data center managers the stability they need for the entire thermal management system. Compared to the other three best practices, this area of evaluation is probably the simplest and the most overlooked.
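The supply pressure and airflow control practices above are often implemented as a simple feedback loop: measure the underfloor (or containment) pressure differential and nudge fan speed toward a small positive setpoint. This is a hypothetical, minimal proportional-control sketch; the setpoint, gain, and clamping limits are assumptions, not vendor values.

```python
def adjust_fan_speed(current_speed_pct: float, measured_pa: float,
                     setpoint_pa: float = 5.0, gain: float = 2.0) -> float:
    """Return a new fan speed (%) that moves measured pressure toward the setpoint.

    Speed is clamped to a 20-100% operating range so the fan never stalls
    or overspeeds; a real system would also rate-limit changes.
    """
    error = setpoint_pa - measured_pa            # positive -> not enough supply pressure
    new_speed = current_speed_pct + gain * error
    return max(20.0, min(100.0, new_speed))

# Pressure below setpoint -> speed up; above setpoint -> slow down.
print(adjust_fan_speed(60, 3.0))  # 64.0
print(adjust_fan_speed(60, 7.5))  # 55.0
```

Running just enough fan speed to hold the setpoint is what avoids the oversupply (and wasted energy) the supply-pressure best practice warns about.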
Today, there are a lot of great ways to deploy powerful cooling and thermal control options. However, it’s critical to understand that future systems are going to be different as cloud, your data center, and business requirements evolve. Concepts around liquid and free cooling are already catching on. Furthermore, greater levels of consolidation and multi-tenancy are forcing administrators to rethink their environmental control best practices. With this in mind – there are already some great ways to get ahead of the cooling conundrum and create data center thermal efficiencies.
CTO, MTM Technologies