Form vs. Function: How Aesthetics Play a Role in the Data Center

by Ian Seaton | Oct 28, 2015 | Blog

Spoiler alert: data center design and maintenance do not pit form against function; rather, form and function conspire together, so that aesthetics can serve as a reasonably reliable first pass on the functional viability of a space. Granted, appearances cannot absolutely rule out a truly ugly hidden underbelly; in general, however, readily apparent aesthetics translate into functional benefits that extend beyond first blush.

Aisle Layout

Aisle layout may be one of the first attributes noticed on entering a data center, and here there is a strong correlation between appearance and function. The uniformity of racks arranged in hot and cold aisles, together with regularly and properly spaced perforated floor tiles, always creates a good first impression. Those aesthetics also serve to manage the effective separation of supply and return air in the data center and thereby contribute to the overall thermal health and efficiency of the space. Containment aisles, especially when built around a single platform of server racks, further enhance the aesthetics of the space while even more effectively promoting the thermal health of the data center, maximizing the efficiency of computer room cooling equipment, and enabling greater energy savings from associated technologies such as economizers and warm water cooling. We have covered these benefits in some detail in earlier pieces in this space.

Cabling

Cabling, on the other hand, is one of those data center design elements where form and function may not always pull in the same direction. For example: out of sight, out of mind. You can't see any cabling in the data center? That's a good thing, right? The overall aesthetics are clean when all that jumble of cable is hidden away, but the typical hiding places are not particularly healthy for the data center. Cabling under the floor can impede effective cool air distribution and, in extreme but not rare cases, can prevent tiles from resting safely and securely on pedestal stringers. Cabling above the ceiling can likewise impede effective return air movement, and copper can carry a price penalty for plenum ratings and a performance penalty for temperature de-ratings. In both cases, hidden-away cables can easily be ignored and fall into disarray. While that disarray remains out of sight, it may seem to be no concern, until rack number C-12 is scheduled to be decommissioned and 80 network cables from various sources are found running into it.

Best practices therefore indicate cabling distribution should be overhead, between the racks and the ceiling. If all that cable is going to be exposed as part of the first impression of the data center, it can be a rather meaningful first impression: it can convey a sense that the space is out of control, or it can convey that all elements are known and controlled. Neatly dressed trays of cabling not only contribute to the overall positive appearance of the data center and facilitate managing moves, adds, and changes; when properly arrayed, they also contribute to the overall thermal management health of the data center. The vertical location of tray or ladder runs and the dressing of drops into racks show a very close correlation between aesthetics and the effectiveness of the barrier between hot aisles and cold aisles.

Inside the Racks

Cabling inside the racks, however, can often belie everything that a wonderful first impression of organization and control had tried to communicate. I cannot count the number of times I have opened the back door of a server cabinet and had my positive first impression of a space and its management strangled in a chaotic tangle of copper and fiber. Granted, the whipped spaghetti may very well be the result of a semi-honest faux pas. Perhaps the initial deployment was driven by a terribly unrealistic deadline and everyone agreed that the resultant mess would get tidied up later. Perhaps the initial installation was a BICSI course showpiece, and then management decided to do a mass platform change from top-of-rack switches to end-of-row switches, and everyone headed out on recovery vacations before the cable management got dressed. Regardless, these all-too-common aesthetic offenses are not just eyesores and barometers of management laxness.

Experimental studies have shown that cabling in the rear of racks can have a deleterious effect on the thermal health of the electronic equipment. Way back in 2002, a Dell white paper reported that dense rear cabling reduces airflow rates through the cabinet and that these reduced flow rates could raise component temperatures by as much as 9°F.1 Today's servers are not likely to respond in the same way to such constraints. Rather than overheating, because most servers today come with variable-speed fans, they will respond to the additional pressure head of jumbled cabling by working harder to move the same amount of air.

We have talked at length in this space about fan affinity laws and the non-linear, cubed energy savings resulting from lower fan speeds. Pressure head is another cause of non-linear energy use for fans; in this case it is a squared function, so a 10% increase in pressure head will result in a 21% increase in fan energy to move the same amount of air. More recent research has measured the actual pressure differentials against which server fans operate when cables are present in the rear of the cabinet. That experiment looked at only one server at a time, comparing the impedance produced by cabinet doors alone to the doors plus two power cables, four Ethernet cables, and one KVM cable. At 40 CFM the cables increased the system impedance from 0.02" H2O to 0.042", and at 55 CFM from 0.04" to 0.078".2 In real life, it is not likely there would be only one server with four Ethernet cables and two power cables in a 42-45U rack. Instead, 40U's worth of equipment might mean 160 Ethernet cables and 80 power cables, creating a dramatically higher pressure head against which those server fans must operate.
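The arithmetic above can be sketched in a few lines. This is a minimal illustration, not a fan-selection tool: the squared pressure-to-energy relationship and the single-server impedance figures come from the text, while the function names and the doubling comparison are my own framing.

```python
# Sketch of the fan-energy arithmetic described above. The squared
# relationship and the Dell impedance measurements are from the text;
# everything else is illustrative.

def fan_energy_increase_pct(pressure_increase_pct):
    """Extra fan energy (percent) needed to move the same airflow
    against a higher pressure head, per the squared relationship."""
    ratio = 1.0 + pressure_increase_pct / 100.0
    return (ratio ** 2 - 1.0) * 100.0

# A 10% rise in pressure head -> about a 21% rise in fan energy.
print(round(fan_energy_increase_pct(10), 1))  # 21.0

# Cited single-server measurements: CFM -> (doors only, doors + cables), in " H2O.
impedance = {40: (0.020, 0.042), 55: (0.040, 0.078)}
for cfm, (bare, cabled) in impedance.items():
    print(f"{cfm} CFM: impedance up {100 * (cabled / bare - 1):.0f}%")
```

Even the lightly cabled test rig roughly doubled the impedance the fans had to work against; a fully populated rack would push that figure far higher.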

Thermal, Airflow, and Pressure Considerations

There is not much to be done about the quantity of cables, especially with all the economic incentives to deploy ever higher density racks; however, dressing out the rack with tidy cable management helps address the thermal issue as well. The most obvious aspect of this part of cable aesthetics is to deploy the shortest cables possible. Power cables can be particularly cumbersome and problematic, but if cables are selected that are just long enough to reach from the server to the in-rack PDU, you eliminate the congestion along the rear sides of the cabinet and minimize airflow impediments directly in the path of the fan outlets. With dual power supplies, live servicing, though rare and not recommended, can be accomplished by swapping out one of the short cords for a long temporary power cord, disconnecting the remaining power cord, and then extending the equipment on its slider. With cables dressed tightly, as far out of the airflow area in the rear of the rack as possible, you are much more likely to reduce turbulent airflow through the rack and thereby avoid adding extra pressure load on the server fans.

Of the three basic terms in the Darcy-Weisbach equation for calculating ΔP, one is a friction coefficient that depends on whether the flow is laminar or turbulent, and it multiplies the other two terms. The coefficient is calculated from a Reynolds number, which will be less than 2320 for laminar flow and anywhere from 4,000 to over 100,000 for turbulent flow. The neater the cable dressing, and the more effectively the bundles are kept out of the airflow path, the lower that Reynolds number will be, resulting in a lower friction coefficient and therefore lower total pressure head and lower server fan energy.
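The Darcy-Weisbach relationship referred to above can be sketched as follows. The 2320 laminar threshold is the figure cited in the text; the Blasius turbulent approximation, the air properties, and the example passage dimensions are my own illustrative assumptions, not values from the article.

```python
# Rough sketch of Darcy-Weisbach for air moving through the rear of a
# rack. Air properties and dimensions are illustrative assumptions.

AIR_KINEMATIC_VISCOSITY = 1.5e-5  # m^2/s, air near room temperature
AIR_DENSITY = 1.2                 # kg/m^3

def reynolds_number(velocity, hydraulic_diameter):
    """Re = V * D / nu; below ~2320 the flow is laminar."""
    return velocity * hydraulic_diameter / AIR_KINEMATIC_VISCOSITY

def friction_factor(re):
    """Darcy friction factor: 64/Re for laminar flow, Blasius
    approximation (smooth passages) for turbulent flow."""
    return 64.0 / re if re < 2320 else 0.3164 / re ** 0.25

def pressure_drop(velocity, hydraulic_diameter, length):
    """Darcy-Weisbach: dP = f * (L/D) * (rho * V^2 / 2), in pascals."""
    f = friction_factor(reynolds_number(velocity, hydraulic_diameter))
    return f * (length / hydraulic_diameter) * AIR_DENSITY * velocity ** 2 / 2.0

# Same airflow, two rear-of-rack conditions: tidy dressing leaves a
# wide, slow air path; congestion narrows the path and speeds the air.
print(pressure_drop(2.0, 0.10, 0.8))  # tidy: wide passage, low velocity
print(pressure_drop(8.0, 0.05, 0.8))  # congested: narrow passage, high velocity
```

The point of the sketch is the direction of the effect, not the absolute numbers: squeezing the same airflow through a cable-congested passage raises velocity and the length-to-diameter ratio, and the pressure drop climbs steeply with both.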

Conclusion

Yes, a pristine data center will always bring a soothing, warm inner glow to most industry old-timers. Done well, however, these beauties are much more than a pretty face: they tell us a lot about how a business is managed and how profits can be generated without raising prices or squeezing wages.

1 Artmar, Paul; Moss, David; and Bennett, Greg, "Rack Impacts on Cooling for High Density Servers," Dell White Paper, August 2002.
2 Coxe, K.C., "Rack Infrastructure Effects on Thermal Performance of a Server," Dell White Paper, May 2009.
Ian Seaton

Data Center Consultant
