Overcoming Data Center Slab Floors with No Path for Return Air

by Ian Seaton | Oct 18, 2017

Even though every data center standard identifies airflow containment as a fundamental best practice, and it is mandated by some state and municipal building and energy codes, approximately one-third of our industry remains a hold-out, still not realizing the efficiency and performance benefits of containment. The question remains: why?

While there are a variety of reasons why some data centers are without good airflow separation, high cost and low expected results are two of the wrong ones. Airflow containment will always provide a data center environment that can support higher power densities and lower energy costs, and it will always pay for itself quickly. Sometimes these reasons may just be excuses lying over a general antipathy toward change, effort or knowledge acquisition. More often, however, I suspect they are cited when people are confronted with cold, hard reality, which may not be as tidy as the textbook installations illustrated in conference presentations, data center certification courses, magazine articles, or blogger pontifications. Reality often includes apparent barriers to optimum airflow management, such as physical, mechanical or hardware complexities and obstacles. The fact is, unless we’re talking about a new design that has had active involvement from IT, facilities, architectural engineering and strategic planning from the very beginning of the project, there are going to be complexities and obstacles not accounted for in the theoretical vision of containment. It doesn’t have to be a project to convert a central office, a warehouse or a big box store into a data center; complexities and obstacles can just as readily arise on a new build if anyone from facilities, IT, architecture or business planning makes a unilateral decision about anything.

Nevertheless, there are some real obstacles to data center airflow containment. In two blogs last month I addressed obstacles to effective airflow management presented by fire suppression systems and their associated codes, as well as by installed mechanical infrastructure not normally associated with reaping the economic and performance benefits of containment – i.e., single-speed fans and DX coolers. Earlier this month I explored the confrontation with overhead physical obstacles such as ductwork, ladder rack, power busway, basket tray, or any variety of hanging Unistrut-type configurations. In all three cases, I showed how data center airflow containment could be deployed and provided some guidance on how to estimate financial paybacks. A more complex obstacle to containment arises from spaces that appear to have been designed specifically to impede intelligent airflow management – for example, a slab floor with no suspended ceiling for capturing return air. While such circumstances will not be found in a new purpose-built design, they can be encountered when trying to retrofit a space built for some other purpose (retail, warehouse, jai-alai, laser tag, Federal Reserve vault, etc.) or when trying to modernize a dinosaur. Implementing effective airflow management in these environments can require a little more creativity.

For most retrofit containment projects, cold air containment offers the most practical path to maximizing effective airflow management. I have worked in a couple of good examples. One was a colocation data center in Amsterdam using the KyotoCooling air-to-air heat exchanger for primary cooling. In this data center, one end of each row of cabinets abutted the wall behind which the cooling cells were located. Fan walls delivered cooling air directly into the contained cold aisles, and waste air was returned through the room via return air fans on the same fan wall, located above the contained cold aisle ceiling. This architecture of separation integrated elegantly with the KyotoCooling cells, in which the heat exchanger wheels’ horizontal orientation accepted return air from above and delivered supply air below. The same general layout could be effectively served by downflow precision cooling units mounted on pedestals, with some form of grate or mesh wall at one end of the cold aisles and a large common duct on the service bay side of that wall, above the contained cold aisle ceiling, connected to the return air intakes of all the cooling units. Likewise, roof-mounted indirect evaporative cooling units could supply ducted cooling air into contained cold aisles, and return air could be drawn up directly into the return side of the economizer or into a roof-mounted duct serving multiple economizers.

Another variation in which I worked was a colocation data center in the Pacific Northwest, where upflow cooling units were arrayed in a service bay outside the white space. The cooling air was accumulated into a master duct and delivered by ducts over the cold aisles, with containment provided by plastic curtains bridging the ducts and the server cabinet roofs. Return air was pulled out of the room by fans on the service bay wall and then out of the service bay by the cooling unit return air intakes. This approach could also accommodate roof-mounted economizers and could be further enhanced by end-of-row doors to more fully realize the density and/or efficiency benefits of containment.

“The fact is … there are going to be complexities and obstacles not accounted for in the theoretical vision of containment.”

Another elegantly simple variation is one in which a single row of server cabinets separates the supply side of the data center from the return side. I had included this model in my courses for years as a way to realize containment benefits in a small computer room in a non-purpose-built space, wherein a short group of 3-6 server cabinets arrayed in a single row faces away from upflow cooling units. Supply air is accumulated into a duct that then splits in front of the cabinets, where cooling air is delivered. Return air is then drawn back by the cooling units, which are located a short three to five feet behind the cabinets. In such a layout, external containment may not even be required, as long as good blanking panel discipline is maintained, along with effective sealing between cabinets and, within cabinets, between the equipment mounting area and the side panels. While I had always envisioned this single-row “containment” solution as a no-brainer for small computer rooms in non-purpose-built spaces, I have been proven short-sighted by at least a couple of more robust variations. One of these was a financial institution data center in the northeast with a long row of high-density cabinets (24-26 kW per cabinet). This data center operates exclusively on free air cooling, pulled into the cold aisle through an outside-facing wall and exhausted out of the hot aisle back into the atmosphere. There are a few fans in the wall separating the cold side of the building from the hot side, facing the cold side, to recirculate some return air during the winter when the free cooling supply temperature would otherwise be too cold. Another such example was a municipal government data center in the Pacific Northwest where cooling was provided by roof-mounted indirect evaporative coolers; a wall was built separating the cold side from the return side, with the cabinets integrated into that wall.
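To put a number on that winter tempering trick, the recirculation fans are essentially solving a simple two-stream mixing problem. The Python sketch below is a minimal illustration of that calculation, assuming straightforward mass-weighted mixing of two air streams at similar density; the 30˚F outside air, 95˚F return air, and 65˚F target supply temperatures are my own illustrative values, not figures from the facility described above.

```python
def mixed_supply_temp_f(outside_temp_f, return_temp_f, recirc_fraction):
    """Estimate the blended supply temperature when a fraction of hot-side
    return air is recirculated into the incoming free-cooling air stream.
    Assumes simple mass-weighted mixing of two streams at similar density.
    """
    return (recirc_fraction * return_temp_f
            + (1.0 - recirc_fraction) * outside_temp_f)


def recirc_fraction_needed(outside_temp_f, return_temp_f, target_supply_f):
    """Solve the same mixing equation for the recirculation fraction required
    to hit a target supply temperature on a cold day."""
    return (target_supply_f - outside_temp_f) / (return_temp_f - outside_temp_f)


if __name__ == "__main__":
    # Illustrative winter condition (assumed values): 30F outside air,
    # 95F hot-side return air, 65F target supply temperature.
    fraction = recirc_fraction_needed(30.0, 95.0, 65.0)
    print(f"Recirculate roughly {fraction:.0%} of return air")
    print(f"Check: mixed supply = {mixed_supply_temp_f(30.0, 95.0, fraction):.1f} F")
```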

While all the examples I have personally worked in or observed where containment was incorporated on slabs without suspended ceilings for return air separation have been cold air containment deployments in some fashion or other, I can easily enough envision a hot air containment array of rows abutting a fan wall, with return fans pulling air out of the contained hot aisle and supply fans delivering cooling air into the overall space. Such a design could involve precision cooling units in a service bay or external economizers. The containment effectiveness would likely be enhanced by end-of-row doors on the ends of hot aisles opposite the fan walls. Return fans might be slightly over-spec’d to avoid any pressure build-up in the contained hot aisle that might push some re-circulation back through the cabinets into the data center. If you’re paying attention, by now you have probably busted my chops and noticed that I am basically just talking about your everyday cold air containment architecture, plumbed backward. I cannot see any reason why such a variation would be attractive unless, perhaps, a data center was planning to exploit high supply temperatures, as suggested by the pieces I wrote here this summer, and wanted to avoid creating a general work area with temperatures above 120˚F.
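As a rough check on how much air those return fans need to move, the airflow a contained hot aisle has to evacuate can be estimated from the IT load and the temperature rise across the IT equipment. The sketch below uses the common sensible-heat approximation for air at roughly standard density (CFM ≈ 3.412 × W ÷ (1.08 × ΔT˚F)); the 25˚F temperature rise and 5% surplus margin are assumptions of mine for illustration, not values from any of the sites described here.

```python
def required_return_cfm(it_load_kw, delta_t_f=25.0, margin=1.05):
    """Estimate the return-fan airflow (CFM) needed to evacuate a contained
    hot aisle without pressurizing it.

    Uses the sensible-heat approximation for air at roughly standard density:
        CFM = (watts * 3.412 BTU/hr per W) / (1.08 * delta_T_F)
    and applies a small surplus margin so the hot aisle stays at or below
    room pressure. delta_t_f and margin are assumed, illustrative values.
    """
    watts = it_load_kw * 1000.0
    base_cfm = (watts * 3.412) / (1.08 * delta_t_f)
    return base_cfm * margin


if __name__ == "__main__":
    # Example: a 200 kW contained hot aisle with a 25F equipment temperature rise.
    print(f"Return fans should move about {required_return_cfm(200):,.0f} CFM")
```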

Finally, for better clarity, I would like to add a couple of caveats. Several of the above scenarios require the use of upflow precision cooling units, which appear to be inherently less efficient than downflow units. Rather than float out a particular vendor’s performance data (they all have salespeople to deliver those stories), industry standards are indicative of the differences. I pulled Table 1 below from the Electronic Code of Federal Regulations, Title 10 (Energy), Part 431 (Energy Efficiency Program for Certain Commercial and Industrial Equipment), Subpart F (Commercial Air Conditioners and Heat Pumps); it is effective and current as of October 5, 2017. The data for computer room air conditioners is the same as you will find in Table 6.8.1K in ASHRAE 90.1 and in the minimum requirements in ASHRAE 127.

Table 1: MINIMUM EFFICIENCY STANDARDS FOR COMPUTER ROOM AIR CONDITIONERS

| Equipment type | Net sensible cooling capacity | Minimum SCOP, downflow unit | Minimum SCOP, upflow unit | Compliance date (products manufactured on and after) |
|---|---|---|---|---|
| Computer Room Air Conditioners, Air-Cooled | <65,000 Btu/h | 2.20 | 2.09 | October 29, 2012 |
| | ≥65,000 Btu/h and <240,000 Btu/h | 2.10 | 1.99 | October 29, 2013 |
| | ≥240,000 Btu/h and <760,000 Btu/h | 1.90 | 1.79 | October 29, 2013 |
| Computer Room Air Conditioners, Water-Cooled | <65,000 Btu/h | 2.60 | 2.49 | October 29, 2012 |
| | ≥65,000 Btu/h and <240,000 Btu/h | 2.50 | 2.39 | October 29, 2013 |
| | ≥240,000 Btu/h and <760,000 Btu/h | 2.40 | 2.29 | October 29, 2013 |
| Computer Room Air Conditioners, Water-Cooled with a Fluid Economizer | <65,000 Btu/h | 2.55 | 2.44 | October 29, 2012 |
| | ≥65,000 Btu/h and <240,000 Btu/h | 2.45 | 2.34 | October 29, 2013 |
| | ≥240,000 Btu/h and <760,000 Btu/h | 2.35 | 2.24 | October 29, 2013 |
| Computer Room Air Conditioners, Glycol-Cooled | <65,000 Btu/h | 2.50 | 2.39 | October 29, 2012 |
| | ≥65,000 Btu/h and <240,000 Btu/h | 2.15 | 2.04 | October 29, 2013 |
| | ≥240,000 Btu/h and <760,000 Btu/h | 2.10 | 1.99 | October 29, 2013 |
| Computer Room Air Conditioners, Glycol-Cooled with a Fluid Economizer | <65,000 Btu/h | 2.45 | 2.34 | October 29, 2012 |
| | ≥65,000 Btu/h and <240,000 Btu/h | 2.10 | 1.99 | October 29, 2013 |
| | ≥240,000 Btu/h and <760,000 Btu/h | 2.05 | 1.94 | October 29, 2013 |

The first point of clarification I want to make about standards in general is that they are written by industry participants (vendors, wink wink), so it is relatively safe to assume they document requirements the vendors can meet. That being said, if we consider a 1 MW IT load with zero cooling redundancy and zero bypass or re-circulation (dream on), we would need fifteen twenty-ton precision cooling units. Applying the minimum seasonal coefficient of performance values, each downflow unit would net an average power consumption of 29.4 kW versus 30.8 kW for the equivalent upflow unit. For fifteen units operating all year without any free cooling, the upflow units would consume 183,960 kWh of electricity more than the downflow units, or just over $18,000 per year at $0.10 per kWh. I assert that the penalty for upflow cooling units would be mere statistical noise compared to the typical savings realized as a result of containment (see my previous three blogs), but a meticulous project manager ought to at least have this line item on his or her IRR budget work-up.
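For readers who want to rerun that arithmetic with their own load, unit size, utility rate, or SCOP row from Table 1, here is a minimal Python sketch of the calculation. It restates the simplifications above (no redundancy, bypass, re-circulation, or free cooling); my choice of the water-cooled 2.40/2.29 SCOP minimums is for illustration only and lands within rounding of the figures quoted in the paragraph above.

```python
import math

KW_PER_TON = 3.517          # refrigeration tons to kW of cooling delivered
HOURS_PER_YEAR = 8760


def annual_upflow_penalty(it_load_kw, unit_tons, scop_down, scop_up,
                          rate_per_kwh=0.10):
    """Compare annual energy cost of upflow vs. downflow units at minimum SCOP.

    Assumes every unit runs fully loaded all year with no redundancy, bypass,
    re-circulation, or free cooling -- the same simplifications used above.
    """
    unit_capacity_kw = unit_tons * KW_PER_TON
    units_needed = math.ceil(it_load_kw / unit_capacity_kw)

    # Electrical draw per unit = cooling delivered / coefficient of performance
    kw_down = unit_capacity_kw / scop_down
    kw_up = unit_capacity_kw / scop_up

    extra_kwh = (kw_up - kw_down) * units_needed * HOURS_PER_YEAR
    return units_needed, extra_kwh, extra_kwh * rate_per_kwh


if __name__ == "__main__":
    # 1 MW IT load, 20-ton units, Table 1 water-cooled minimums (2.40 / 2.29)
    units, kwh, dollars = annual_upflow_penalty(1000, 20, 2.40, 2.29)
    print(f"{units} units; upflow penalty ~{kwh:,.0f} kWh/yr (~${dollars:,.0f}/yr)")
```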

The second caveat I want to note here is that several of the data center examples with which I have had previous experience incorporated cold aisles abutting fan walls. Some of these examples had end-of-row doors at the ends of the aisles opposite the fan walls and some did not, though I would suggest that such doors always be considered a best practice to optimize the effectiveness of the containment separation. While I have never seen it in actual practice, if there is a larger room in which longer rows of cabinets could be deployed and cooling could be available at both ends of the rows, I do not see any reason why the rows of cabinets could not abut opposing fan walls, with access provided by in-row doors rather than end-of-row doors.

At first glance, data centers on slab floors without a suspended ceiling or alternative return plenum appear to present significant challenges to deploying effective containment airflow management. I have first-hand experience with several different approaches to utilizing containment within these constraints, and those experiences have suggested some additional possible design alternatives. I suspect I have merely scratched the surface here, so I look forward to this conversation continuing with additional experiences or brainstorms from my readers.

Ian Seaton

Data Center Consultant
