Then vs. Now: The Evolution of Cooling Efficiency
The proliferation of cloud technologies is evident in today's market. Organizations are seeing direct benefits from a distributed, robust cloud ecosystem, and many are finding powerful ways to leverage cloud services as direct competitive advantages. We're seeing more use cases around data analytics, business intelligence, and big data deployments. Most of all, we're seeing organizations increase their spend on critical data center and cloud systems.
“Over the past several years, the software industry has been shifting to a cloud-first (SaaS) development and deployment model. By 2018, most software vendors will have fully shifted to a SaaS/PaaS code base,” said Frank Gens, Senior Vice President & Chief Analyst at IDC. “This means that many enterprise software customers, as they reach their next major software upgrade decisions, will be offered SaaS as the preferred option. Put together, new solutions born on the cloud and traditional solutions migrating to the cloud will steadily pull more customers and their data to the cloud.”
IDC pointed out that worldwide spending on public cloud services will grow at a 19.4% compound annual growth rate (CAGR) — almost six times the rate of overall IT spending growth – from nearly $70 billion in 2015 to more than $141 billion in 2019.
All of this has translated into greater utilization of the modern data center. A recent Data Center Knowledge article, citing a new study, pointed out that US data centers consumed about 70 billion kilowatt-hours of electricity in 2014 (the most recent year examined), representing 2 percent of the country's total energy consumption. That's equivalent to the amount consumed by about 6.4 million average American homes that year. Total data center energy consumption grew just 4 percent from 2010 to 2014, a marked slowdown from the preceding five years, during which it grew by 24 percent, and an even bigger change from the first half of the last decade, when it grew nearly 90 percent.
Here's the remarkable part: the numbers could have been even higher if not for improved data center efficiency. The DCK article discusses how efficiency improvements have played an enormous role in taming the growth rate of the data center industry's energy consumption. Had data centers stayed at 2010 efficiency levels, they would have consumed close to 40 billion kWh more than they did in 2014 to do the same amount of work, according to the study, conducted by the US Department of Energy in collaboration with researchers from Stanford University, Northwestern University, and Carnegie Mellon University.
Energy efficiency improvements will have saved 620 billion kWh between 2010 and 2020, according to the results of this US government study of data center energy use. The researchers expect total US data center energy consumption to grow by 4 percent between now and 2020 (the same growth rate as over the previous five years), reaching about 73 billion kWh.
Like power, cooling must be a big consideration in the digital-era data center. Data centers are increasing density, and cooling is critical to keep operations running efficiently. According to the AFCOM State of the Data Center report, more than 58% of respondents indicated that they currently run, and will continue to run, at least N+1 redundant cooling systems. Both today and three years from now, 18% will operate an N+2 cooling redundancy architecture.
Creating Cooling Efficiencies (Then vs. Now)
During the 1990s and mid-2000s, designers and operators worried about the ability of air-cooling technologies to cool increasingly power-hungry servers. With design densities approaching or exceeding 5 kilowatts (kW) per cabinet, some believed that operators would have to resort to technologies such as rear-door heat exchangers and other kinds of in-row cooling to keep up with the increasing densities.
Still, for decades, computer rooms and data centers utilized raised floor systems to deliver cold air to servers. Cold air from a computer room air conditioner (CRAC) or computer room air handler (CRAH) pressurized the space below the raised floor. Perforated tiles provided a means for the cold air to leave the plenum and enter the main space—ideally in front of server intakes. After passing through the server, the heated air returned to the CRAC/CRAH to be cooled, usually after mixing with the cold air.
For many years, this system was the most common design for computer room, data center, and server cooling. In fact, it is still employed today. But how effective is it for next-generation workloads and server designs?
When it comes to server cooling, the concept is very simple: heat must be removed from the vicinity of the server and IT equipment electrical components to avoid overheating them. Simply put, if a server gets too hot, onboard logic will shut it down to avoid damage.
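To give a rough sense of the heat-removal math, the sketch below uses the standard sensible-heat HVAC relation (q in BTU/hr ≈ 1.08 × CFM × ΔT°F) to estimate how much airflow a given IT load needs. The specific load and temperature rise are illustrative assumptions, not figures from this article.

```python
def required_airflow_cfm(it_load_watts, delta_t_f=25.0):
    """Approximate airflow (CFM) needed to remove a sensible heat load.

    Uses the standard HVAC relation q[BTU/hr] ~= 1.08 * CFM * dT[F].
    delta_t_f is the assumed temperature rise across the server
    (a 25 F rise is a common planning figure, used here as an example).
    """
    btu_per_hr = it_load_watts * 3.412      # convert watts to BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

# A hypothetical 5 kW cabinet with a 25 F rise needs roughly 630 CFM.
print(round(required_airflow_cfm(5000)))
```

If the racks can't draw that much air through the cold aisle, the servers recirculate their own exhaust, which is exactly the failure mode the containment strategies below are designed to prevent.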
But it's not just heat you have to worry about. Some big data and analytics servers are highly sensitive and can actually be impacted by particle contamination. In addition to the threats posed by physical particulate contamination, there are threats related to gaseous contamination; certain gases can be corrosive to electronic components.
These types of “legacy” cooling systems will certainly still have their place in the data center. However, it goes without saying that new types of workloads simply require a new way to cool the servers they operate on.
Data Center Cooling Efficiency – Designing for Tomorrow
New types of cooling systems have helped create greater levels of stability and efficiency within the modern data center. When designing your own infrastructure, remember that each environment is truly unique, requiring specific architectural considerations for your business. With that in mind, let's examine some cooling concepts that can help revolutionize data center server and cooling efficiency.
- Modular Data Center Containment. Hot aisle containment and cold aisle containment have been used in computer rooms for years to improve efficiency, increase rack densities, and improve overall utilization of the computer room. To date, the data center industry has mainly used hard wall containment and soft curtain containment to accomplish these goals. A white paper on the effectiveness and implementation of a new cooling and containment architecture shows a simpler, more cost-effective, and easier-to-implement solution called Modular Containment, designed to support next-gen data center operations.
- Utilizing the Cooling Capacity Factor (CCF). The average computer room today has cooling capacity that is nearly four times the IT heat load. Using data from 45 sites reviewed by Upsite Technologies, this paper shows how you can calculate, benchmark, interpret, and benefit from a simple and practical metric called the CCF. By understanding this data, you can lower your PUE and reduce energy costs, increase workload density without adding more cooling units, and reduce your capital expenditure by optimizing data center operations.
- Sealing Gaps in the Data Center and Under IT Racks. Computational fluid dynamics (CFD) analysis can reveal a lot about your data center; specifically, it can reveal real-world savings potential. For example, the often-overlooked small space between the bottom of an IT rack or cabinet and the raised floor or slab can have a significant impact on IT inlet temperatures. Such spaces are common, as casters and leveling feet under IT cabinets create gaps from half an inch to two or more inches between the floor and the bottom of the cabinet. This space allows hot exhaust air from the rear of the rack to flow under the rack and back into the IT equipment air inlets at the front. Give this documentation a review to see how you can use a CFD analysis to find your own data center savings opportunities.
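As a rough illustration of the CCF idea from the list above, here is a minimal sketch. The 10% overhead adjustment (covering lights, people, and building envelope heat gains) and the sample figures are assumptions modeled on Upsite's published methodology; consult the paper itself for the exact definition.

```python
def cooling_capacity_factor(running_cooling_kw, it_load_kw, overhead=0.10):
    """CCF: total running rated cooling capacity relative to the heat load.

    Assumes an Upsite-style definition: capacity divided by the IT load
    plus ~10% for non-IT heat sources (an assumption, not a standard).
    """
    return running_cooling_kw / (it_load_kw * (1 + overhead))

# Example matching the article's observation that cooling capacity is
# often nearly four times the IT heat load: 400 kW cooling, 100 kW IT.
ccf = cooling_capacity_factor(400, 100)
print(round(ccf, 2))  # a CCF far above ~1.2 suggests stranded capacity
```

A high CCF doesn't mean cooling units should simply be switched off; it means airflow management (containment, sealing gaps) may let the same IT load run on fewer active units.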
To create cooling ready for the future – you’ll need to think “outside” of the data center. Basically, get creative when it comes to cooling. I’ve often discussed how you should look at cooling as a science. This is certainly still the case. It’s important to approach cooling and data center requirements from a holistic perspective and be ready for new types of business requirements. In designing your own data center – look for efficiencies and metrics which can help guide the way. This can be a CFD analysis or even a cooling requirements analysis around existing infrastructure components. The more information you have about your current and future needs – the better (and more efficient) your cooling design will be.
CTO, MTM Technologies