Overcoming Overhead Obstacles in Data Center Containment

by Ian Seaton | Oct 4, 2017 | Blog

Somewhere around one-third of our industry is not yet realizing the efficiency and performance benefits of containment. This is the case despite data center standards that identify airflow containment as a fundamental best practice. The question remains: Why?

There are a variety of reasons why data centers have not implemented containment in some form. “It costs too much” and “it doesn’t really work” are two of the wrong answers. Airflow containment will always provide a data center environment that can support higher power densities and lower energy costs, and it will always pay for itself quickly. Often the reasons not to improve containment are just excuses covering a general antipathy toward change, effort or knowledge acquisition.

The fact is, unless we’re talking about a new design that has had active involvement from IT, facilities, architectural engineering and strategic planning from the very beginning of project conception, there are going to be complexities and obstacles not accounted for in the theoretical vision of data center containment. It doesn’t have to be a project to convert a central office, a warehouse or a big box store into a data center; complexities and obstacles can just as readily arise on a new build if anyone from facilities, IT, architecture or business planning makes a unilateral decision about anything.

Nevertheless, there are some real obstacles to data center airflow containment, and in two blogs last month I addressed obstacles to effective airflow management presented by fire suppression systems and associated codes, as well as by installed mechanical infrastructure not normally associated with reaping the economic and performance benefits of containment, i.e., single speed fans and DX coolers. In both cases, I showed how data center airflow containment could be deployed and provided some guidance on how to estimate financial paybacks.

Other obstacles frequently cited as reasons for not being able to design and execute a complete airflow management solution include an array of overhead structures such as ductwork, ladder rack, power busway, basket tray, or any variety of hanging Unistrut-type configurations. These design elements may have come with a room into which the data center is being inserted, or they may have arrived a day before you are ready to place your server cabinet and containment purchase order, at the hands of an enthusiastic but stealthy infrastructure team. In either case, these structures may add some complexity to the project, but they will not prevent it from implementing effective airflow management (i.e., containment) and reaping the benefits and payback resulting from this recognized best practice.

“there are going to be complexities and obstacles not accounted for in the theoretical vision of data center containment”

Cold air containment is usually the most effective solution when retrofitting a non-purpose-built space, upgrading an existing space, or working around design elements that became stealth elements in a fragmented design-build project. In its simplest form, cold air containment is a retrofit of an existing space on a raised floor. In this situation, the containment architecture can be installed below all the overhead obstacles and the data center reaps the full benefits of containment.

Without the raised floor, it can get a little more interesting. One approach to retrofitting cold air containment on a slab floor that I have seen work quite effectively included a fan wall with rows of cabinets abutting that wall. The cold aisles aligned to the fans, effectively capturing the supply air, and the containment was built below all the overhead structures.

I first saw this approach of capturing cold air on a slab floor in Amsterdam some ten years ago with cooling provided by cells with re-purposed energy recovery wheels. However, it could work equally well with any kind of cooling equipment located in a service area outside the actual computer room white space.

If there is some reason why hot air containment is preferable, such as a design specification for 95˚F supply air resulting in 120˚F or higher return air (which would effectively be the data center ambient temperature in a cold air containment architecture), then there are still a couple of options. For complete hot air containment, and therefore the highest energy efficiency and quickest infrastructure payback, chimney cabinets should be considered if there is a reasonable opportunity to align rows so the chimneys dodge the overhead morass. If maneuvering rows of chimney cabinets is impossible (remember: with chimneys you do not need perfectly shaped rows), then partial containment with either short chimneys or aisle doors, accompanied by partial overhead airflow barriers and some means of contained overhead return air path, will still provide access to significant energy savings and the ability to support higher densities, albeit not as extreme as what could be achieved by full containment.
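To see why that temperature spread matters, consider the airflow it implies. Below is a minimal sketch using the common sea-level approximation CFM ≈ 3,160 × kW ÷ ΔT (˚F); the 25˚F delta-T comes from the 95˚F supply / 120˚F return example above, while the 10 kW cabinet density is an illustrative assumption, not a figure from the original example.

```python
# Rough airflow check for the hot air containment scenario above.
# Uses the common sea-level approximation: CFM ~= 3160 * kW / delta_T (deg F).
# The 25 deg F delta-T (95F supply, 120F return) comes from the example in
# the text; the 10 kW cabinet load is an illustrative assumption.

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Approximate airflow (CFM) needed to remove it_load_kw at delta_t_f."""
    return 3160 * it_load_kw / delta_t_f

cabinet_kw = 10.0        # assumed cabinet density
delta_t = 120.0 - 95.0   # return minus supply, from the example above

print(f"~{required_cfm(cabinet_kw, delta_t):.0f} CFM per {cabinet_kw:.0f} kW cabinet")
# ~1264 CFM: the wider the contained delta-T, the less air each fan must move.
```

The point of the arithmetic is simply that a well-sealed hot air path lets you run a wide delta-T, which in turn cuts the fan energy needed per kilowatt of IT load.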

In an August blog titled “Partial Containment: The Pursuit of Imperfection,” I walked through an example for a 1 MW IT load where partial containment produced annual energy savings of around 1.5 million kWh without economization, plus anywhere from 1,000 to over 3,000 hours a year of extra free cooling, depending on geographic location and type of economization. While these partial containment numbers may not be as grandiose as much containment sales literature would suggest, they will still pass the sniff test of most accounting payback and ROI goals, and they will also release stranded capacity if constraints to growth are an issue.
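As a quick sanity check on payback, those kilowatt-hours translate directly into dollars. The sketch below assumes an illustrative blended utility rate of $0.10/kWh and an assumed installed cost of $150,000 for partial containment; only the 1.5 million kWh/year figure comes from the example above.

```python
# Back-of-envelope payback for the partial containment example above.
# The 1.5 million kWh/year figure comes from the text; the electricity
# rate and containment cost are illustrative assumptions, not quoted prices.

annual_savings_kwh = 1_500_000   # from the 1 MW partial containment example
rate_per_kwh = 0.10              # assumed blended utility rate, $/kWh
containment_cost = 150_000       # assumed installed cost, $

annual_savings_usd = annual_savings_kwh * rate_per_kwh
payback_years = containment_cost / annual_savings_usd

print(f"${annual_savings_usd:,.0f}/year -> payback in {payback_years:.1f} years")
# $150,000/year -> payback in 1.0 years (under these assumed figures)
```

Even if your local rate or installed cost differs by a factor of two in either direction, the payback stays well within the horizon most accounting departments will accept.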

There is no need for overhead congestion to be an obstacle to realizing the benefits of data center airflow containment. Whether you are stuck with a previous owner’s folly or trying to survive stealth undermining of your own best plans, airflow containment can be built underneath all variety of overhead infrastructure, providing a path to lower operating costs, freed stranded capacity and healthier IT hardware.

Ian Seaton

Data Center Consultant
