Frequently Asked Questions (FAQs)

Q: I have plenty of cooling capacity, so why is my file-server area unevenly hot?

A: Uneven cooling is a common problem that is likely to get much worse as server-farm power consumption rises due to taller racks, more equipment within each rack, and higher power consumption per piece of equipment. Depending on the heat density and computer-room configuration, forensic mechanical engineering may be required to identify all of the contributing problems holistically. With that information in hand, solutions are then straightforward to specify.

Q: Does any particular arrangement of the equipment racks have a positive or negative effect on equipment reliability?

A: Look at how the equipment racks are physically oriented on the raised floor. In more than 75 percent of sites, racks are arranged so that they all face the same direction. This arrangement is a significant cause of cooling-capacity problems because the hot-air exhaust from one row of racks becomes the cooling-air intake for the next row. This is not a good environment for computer-hardware reliability and will eventually result in premature and seemingly unexplainable failures.

Q: Should I put perforated floor tiles at the exhaust side of my computer equipment to help cool the room? The temperature of the air out of the racks is too hot.

A: No. Perforated tiles do their best job when placed on the intake side of the computer equipment. This position provides the equipment with the best operating environment. The exhaust air temperature is not a problem if the air going into the computer’s intake is the correct temperature. Placing perforated tiles on the exhaust side of equipment pre-cools the air returning to the air-conditioning units. When the colder return air is received back at the air-conditioning unit, the controls will throttle back on the amount of cooling provided. This adjustment often results in hotspots in the highest heat-load areas of the computer room, which can adversely affect the long-term reliability of the computer equipment.
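To see why pre-cooled return air fools the cooling unit, here is a minimal mixed-air sketch in Python. All of the temperatures and the mixing fraction are assumed example values, not figures from this FAQ; the sketch only illustrates the feedback effect described above.

```python
# Illustrative mixed-return-air calculation. Every number here is an
# assumed example value, not a measurement from the FAQ.

hot_exhaust_f = 95.0     # server exhaust temperature, deg F (assumed)
precooled_f = 60.0       # cold air rising through exhaust-side tiles (assumed)
bypass_fraction = 0.4    # share of return air that is pre-cooled (assumed)

# Temperature the air-conditioning unit actually "sees" at its return.
mixed_return_f = (bypass_fraction * precooled_f
                  + (1 - bypass_fraction) * hot_exhaust_f)
print(f"Mixed return air: {mixed_return_f:.1f} F")   # 81.0 F instead of 95.0 F

# A return-air-controlled unit compares the return temperature with its
# setpoint and throttles back when the return looks cool, even though the
# racks are still rejecting the same amount of heat.
setpoint_f = 72.0
apparent_excess = mixed_return_f - setpoint_f   #  9 F above setpoint
true_excess = hot_exhaust_f - setpoint_f        # 23 F above setpoint
print(f"Apparent load: {apparent_excess:.0f} F over setpoint "
      f"(actual exhaust is {true_excess:.0f} F over)")
```

The colder the artificially mixed return air looks, the less cooling the unit delivers, which is exactly how exhaust-side tiles end up starving the hottest parts of the room.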

Q: Why are many data centers having problems with increasing heat load density?

A: Seventy percent of the available cooling in a typical computer room is wasted due to bypass airflow. In rooms with high levels of bypass airflow, cooling occurs through the un-engineered mixing of hot and cold air. This is extremely energy- and capacity-inefficient, but at low heat densities it can still succeed. As heat density increases, vertical and zone hotspots will develop. The typical response is to install more cooling capacity, but this may further aggravate the situation. Instead of adding more cooling capacity, data center managers should first tune up what they already own. Not only will this be more cost-effective, but compared with the construction required to install additional cooling, it can also reduce the associated risks. Tune-up actions start with determining how much capacity the cooling equipment is actually delivering; typical field measurements indicate that most cooling units are producing significantly less than their rated capacity, often 50 percent or less. Recovering this lost capacity is a significant first step. Other steps include reducing bypass airflow to 10 percent or less, installing blanking panels, and matching the number of perforated tiles or grates to the actual heat load. Each tune-up action buys back additional capability to deliver the available supply of cold air in a directed manner.

The deeper cause of rising heat density is the technology itself. Moore's law states that semiconductor processing power will double every 18 to 24 months, and actual semiconductor performance has held firmly to this 1965 prediction by Gordon Moore (one of the founders of Fairchild Semiconductor and, subsequently, of Intel). According to this principle, a high-end processor in the year 2005 was one million times more powerful than its 35-year-old predecessor. This exponential increase in processor capacity has allowed the Wintel processing architecture to develop into a full-fledged, viable competitor to mainframe computing.

One side effect of continuously increasing processor capability has been the shrinkage of the computer hardware required to perform a fixed volume of work. Over the last 35 years, this rate of footprint reduction has reached 30 percent annually. This shrinkage (called technology compaction) means the amount of physical space required to accomplish a set volume of IT work (measured in constant units of processing and storage) has been declining continuously. If a site upgraded its technology with state-of-the-art equipment every year, it would see this 30 percent decrease in floor space annually; however, most facilities do not replace hardware this frequently. Instead, the floor space consumed remains constant or grows until an entire generation of technology is replaced every two to five years. When this replacement occurs, a dramatic amount of white space can result if business volumes and new applications have not grown enough to require more boxes of processing equipment.

Technology compaction has not been accompanied by a parallel reduction in electrical power consumption. Instead, power consumption per processor has remained the same or even increased as computing ability has grown. Over the last two years, the power consumption of the most powerful chip available has practically doubled, rising from 60 Watts/chip to 118 Watts/chip, and it is expected to reach 150 Watts/chip in the next several years.

If space consumption is falling at a rate of 30 percent annually while power consumption per processor is continually rising, then power consumption over the product footprint, in Watts/ft², must also be rising. And indeed this increase is taking place: over the last decade, the combined effect of Moore's law and technology compaction has been a 17 percent annual increase in the density of power consumed and heat dissipated by IT products².
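To put that 17 percent figure in perspective, here is a short compounding sketch in Python. Only the 17 percent annual growth rate comes from the text above; the starting density of 40 Watts/ft² is an assumed example value.

```python
# Compounding of heat density at the 17% annual growth rate cited above.
# The starting density (40 W/ft^2) is an assumed example value.
import math

growth_rate = 0.17       # 17% annual increase in W/ft^2 (from the text)
start_density = 40.0     # assumed starting density, W/ft^2

# Years for density to double at this growth rate: ln(2) / ln(1.17).
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time: {doubling_time:.1f} years")   # ~4.4 years

# Projected density over a ten-year horizon.
for year in range(0, 11, 2):
    density = start_density * (1 + growth_rate) ** year
    print(f"Year {year:2d}: {density:6.1f} W/ft^2")
```

At roughly a doubling every four and a half years, a cooling design sized for one generation of equipment can be overtaken within a single technology refresh.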

Q: How can I perform an initial test to check whether the air velocity and pressure in the plenum under the raised floor are adequate?

A: You can identify the presence of an air-velocity problem in minutes using the Upsite™ Hotspot Troubleshooter Card. This simple card is designed and calibrated to reveal which perforated tiles are supplying sufficient air and which are not. If the underfloor air is moving too fast, you cannot get enough cooling air to come up through the perforated tiles. In fact, if the air is moving fast enough, air from above the raised floor will be sucked down into the plenum. In many instances, no cooling is available within 50 feet (15 meters) of the computer room cooling units due to high underfloor air velocities. These areas are obviously going to be very hot.

Q: Blade servers consume unprecedented amounts of power, all of which is converted to heat and must be managed. What impact will this new generation of products have on my data center?

A: The power consumption and density for this generation of servers are much higher than for previous generations. All major manufacturers have similar products, with power consumption ranging from 8 kW to 20 kW per rack and 30+ kW products on the drawing board. Many sites will have severe problems cooling even small quantities of these computers. The volume of air delivered to the rack from the underfloor will become even more critical. Activities such as eliminating bypass airflow, placing the right number of perforated tiles or grates in the cold aisle only, installing blanking panels in rack openings, and matching cooling airflow with server cooling needs are all essential to directing the available supply of cold air accurately.
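To see why the volume of delivered air becomes so critical, here is a rough sensible-heat estimate in Python. It uses the standard rule-of-thumb relation Q ≈ 1.08 × CFM × ΔT (BTU/hr and °F); the 25 °F temperature rise across the servers is an assumed value, and only the 8 kW to 30 kW rack powers come from the answer above.

```python
# Rough airflow needed to remove a rack's heat load, using the standard
# sensible-heat relation Q[BTU/hr] ~= 1.08 * CFM * delta_T[deg F].
# The 25 F rise across the servers is an assumed example value.

WATTS_TO_BTU_HR = 3.412

def required_cfm(rack_kw: float, delta_t_f: float = 25.0) -> float:
    """Approximate airflow (CFM) needed to carry away rack_kw of heat."""
    btu_per_hr = rack_kw * 1000 * WATTS_TO_BTU_HR
    return btu_per_hr / (1.08 * delta_t_f)

for kw in (8, 20, 30):
    print(f"{kw:2d} kW rack: ~{required_cfm(kw):,.0f} CFM")
#  8 kW -> ~1,011 CFM
# 20 kW -> ~2,527 CFM
# 30 kW -> ~3,791 CFM
```

Since a standard perforated tile typically passes only a few hundred CFM, a high-density rack needs several tiles or a high-flow grate dedicated to it, which is why tile count, blanking panels, and bypass control all have to work together.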

Q: I have computer equipment right in front of an air-conditioning unit, and the equipment is still running hot. How can this be?

A: It seems counterintuitive, but equipment placed too close to an air-conditioning unit can run hot because no cooling air is available at that point. Static pressure in front of cooling units is very low because the velocity of the air coming off the fan is very high. In fact, the speed may be so high that hot return air from above the raised floor is being sucked down into the sub-floor plenum. High heat load equipment should be placed far enough away from the source of cooling to ensure proper airflow through the perforated tiles. Sometimes this spacing can be as much as 30 feet (9 meters).
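The underlying physics is that a fan's total pressure splits between velocity pressure and static pressure: the faster the air moves, the less static pressure remains to push it up through the tiles. Here is a minimal sketch using the standard approximation VP = (V / 4005)² inches of water for air at standard density, with velocity in feet per minute; the velocities are assumed examples, not measurements from this FAQ.

```python
# Velocity pressure of underfloor air, using the standard approximation
# VP[in. w.g.] = (V / 4005)^2 for air at standard density, V in ft/min.
# The velocities below are assumed example values.

def velocity_pressure_in_wg(velocity_fpm: float) -> float:
    """Dynamic (velocity) pressure in inches of water gauge."""
    return (velocity_fpm / 4005.0) ** 2

for v in (600, 1200, 2400):
    vp = velocity_pressure_in_wg(v)
    print(f"{v:5d} fpm -> velocity pressure {vp:.3f} in. w.g.")
#   600 fpm -> 0.022 in. w.g.
#  1200 fpm -> 0.090 in. w.g.
#  2400 fpm -> 0.359 in. w.g.
```

Doubling the velocity quadruples the velocity pressure, and close to the unit the static pressure can even go slightly negative, which is how room air ends up being pulled down through nearby tiles into the plenum.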

Q: Should I have a computational fluid dynamics (CFD) model done of my data center?

A: CFD modeling may be very useful, but only in conjunction with a cooling-system tune-up. A competent tune-up will recover capacity and, in many cases, will substantially reduce the severity of hotspots. CFD modeling is no substitute for the hard work of crawling around on and under the floor to identify and fix existing problems that are causing poor performance. In fact, modeling can give false information if the person creating the model does not spend a significant amount of time on site making actual field measurements instead of capturing a few random samples. CFD-predicted flows and temperatures should be compared against measured values, as the two often differ.

Q: For maximum cooling, what is the best location of the perforated floor tiles relative to the equipment racks?

A: If the only cooling air available is through the cable openings at the back of the rack, cooling inside the rack will be marginal, and the ambient temperature will increase significantly from bottom to top, especially if solid-skin front covers are installed. Perforated tiles need to be located in front of the rack (on the cold aisle) where they can provide cooling to the air-intake side of the servers installed in the rack.

Cooling Capacity Factor (CCF) Reveals Data Center Savings

Learn the importance of calculating your computer room’s CCF by downloading our free Cooling Capacity Factor white paper.
