How to Manage Airflow in an Open Compute Environment

by Ian Seaton | Dec 2, 2015 | Blog

How do you manage airflow in an Open Compute environment? The short answer is: the same as you would in any data center. The long answer is: the same as you would in any data center. The philosophy, guiding principles, and strategies for maintaining maximum separation between supply and return air masses, and the cooling energy efficiency and effectiveness benefits that follow, apply to any data center, except perhaps one cooled exclusively by direct-contact liquid cooling. Nevertheless, despite the stated objective of reducing costs through commoditization, there are a few idiosyncratic particulars, most likely just a reflection of the youth of the Open Compute standard, that should at least be on the radar of anyone planning to actually deploy a compliant space.

Containment Options

The foundation of data center airflow management is maintaining segregation between cold aisles and hot aisles, or between cool spaces and warm spaces. Containment aisles have become a de facto best practice for maintaining this segregation, and the Open Compute standard incorporates containment, beginning with the hot aisle containment at the Facebook launch site. The only current standard part number for a hot aisle containment partition, 06-0000-60 version 2, specifies that the partition must reach from the top of the server cabinet to the interface with the data center ceiling, 2206.4mm above the floor. Version 2, however, also increased the height of the cabinet from 2200mm to 2210mm, which puts the cabinet top above the ceiling interface. The v2 standard is also specified as backward compatible with v1, so cabinets conforming to different revision levels could conceivably be deployed side by side, leaving nearly a half-inch difference in cabinet heights that the containment partitions would need to absorb. Such adjustability is well within reach of high-end containment partitions available from some vendors today, but it is not yet available in a clearly commoditized standard partition.

In addition, there is not yet a specification for hot aisle containment doors. Dimensional aspects may be derived from other specifications covering aisle width, cabinet depth, and cabinet height, but functionality, mechanical interface, and leakage performance requirements have not been established. None of these observations can reasonably be cited as faults or shortcomings of the Open Compute standards; rather, they merely represent areas that have not yet been addressed and will therefore likely require engineered solutions, much as a legacy data center with EIA-compliant hardware does.
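As a rough illustration of the dimensional mismatch described above, here is a small Python sketch (not part of any Open Compute specification) that computes the vertical span a partition must close for each cabinet revision, using only the heights quoted in this section; the constant and function names are my own.

```python
# Illustrative sketch only: given the cabinet heights and ceiling-interface
# height quoted above, compute the span a hot aisle containment partition
# must cover for each cabinet revision in a mixed v1/v2 row.

CEILING_INTERFACE_MM = 2206.4   # top of partition per part no. 06-0000-60 v2
CABINET_HEIGHT_MM = {"v1": 2200.0, "v2": 2210.0}

def partition_span_mm(cabinet_rev: str) -> float:
    """Vertical distance a partition must close between the cabinet top and
    the ceiling interface. A negative value means the cabinet top already
    sits above the interface."""
    return CEILING_INTERFACE_MM - CABINET_HEIGHT_MM[cabinet_rev]

for rev in ("v1", "v2"):
    span = partition_span_mm(rev)
    print(f"{rev}: partition must span {span:+.1f} mm ({span / 25.4:+.2f} in)")

# Difference in cabinet heights a single adjustable partition would need to absorb:
delta = CABINET_HEIGHT_MM["v2"] - CABINET_HEIGHT_MM["v1"]
print(f"v1/v2 cabinet height difference: {delta:.1f} mm ({delta / 25.4:.2f} in)")
```

The arithmetic shows a v1 cabinet top sitting below the ceiling interface and a v2 cabinet top sitting above it, which is exactly the adjustability a commoditized standard partition does not yet offer.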

Use of Filler Panels

The ubiquitous filler panel has defined data center airflow management since before “data center airflow management” was a coined phrase. While the only filler panel mentioned in any Open Compute standard is for filling spaces vacated by missing PDUs, I think it is safe to assume the standard best practice of blanking unused rack mount spaces applies, whether those spaces are U’s (EIA-310 U’s) or OU’s (Open U’s), particularly since there is a basic requirement for containment and the resulting air segregation. The blanking panels obviously just need to be a different size, and the standard adequately addresses the requirement by defining the 48mm OU pitch, the hole size, and the 540mm opening between rack columns; while not directly called out in any standard, the panel dimensions are easily derived. The one application that may still need some fine-tuning is the variation of the v2 Open Rack in which an OU-to-EIA adaptor is allowed in one power zone (10 OU’s in v1 or 14 OU’s in v2) while the rest of the rack remains at OU spacing. The interface between the EIA power zone and the adjacent OU power zone will not correspond exactly to the dimensional boundaries of either a standard EIA blanking panel or a standard OU blanking panel, which leads to a couple of accommodations (see the sketch after this paragraph). First, there will likely need to be a separate interface blanking panel that is either taller or shorter than a standard EIA or OU blanking panel. Second, that interface blanking panel may require a parallel flange extending beyond the EIA portion of the EIA U-to-OU interface to close an approximately 1 7/8” wide gap, spanning some portion of the U-to-OU height, which could otherwise be a path for bypass or recirculation airflow. Such a blanking panel would be easy to design and produce; it just needs to be accounted for in the total rack layout.
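To make the height mismatch at the EIA/OU interface concrete, here is a bit of illustrative Python arithmetic. The 44.45mm EIA U pitch is the standard EIA-310 value, the 48mm OU pitch comes from the paragraph above, and the function name and example counts are hypothetical rather than drawn from any Open Compute drawing.

```python
# Illustrative arithmetic only: how much vertical space is left over when
# EIA-pitch equipment (44.45 mm per U) occupies part of an OU-pitched power
# zone (48 mm per OU). The leftover is the odd height a custom interface
# blanking panel would have to cover.

EIA_U_MM = 44.45   # standard EIA-310 rack unit pitch
OU_MM = 48.0       # Open Rack OU pitch per the Open Compute spec

def interface_gap_mm(eia_units: int, ou_slots_consumed: int) -> float:
    """Height left uncovered when `eia_units` of EIA equipment occupy a
    whole number of OU slots (`ou_slots_consumed`) in the adaptor zone."""
    return ou_slots_consumed * OU_MM - eia_units * EIA_U_MM

# Hypothetical example: 10 EIA U's of gear dropped into 10 OU's of a power zone.
gap = interface_gap_mm(eia_units=10, ou_slots_consumed=10)
print(f"Residual gap: {gap:.1f} mm ({gap / 25.4:.2f} in)")  # 35.5 mm, about 1.4 in
```

The point is simply that the residual is not a whole number of either U’s or OU’s, so neither standard panel closes it; only a purpose-sized interface panel will.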

Front-to-Rear Breathing Equipment

The Open Compute standards are all based on front-to-rear breathing equipment, which accounts for almost all servers. However, a significant percentage of communication switches breathe along a variety of alternative paths. Best practice in legacy data centers is that all such equipment, if it cannot be avoided, be racked in ways that convert its macro breathing pattern to front-to-rear. I have not been inside an Open Compute data center for a couple of years now, but the last one I visited had a large core switch area with large side-breathing switches in open (lower case “o”) racks, basically undermining everything else that had been achieved in that space. In that particular space the cooling plant and fans were extremely efficient, so the waste caused by unsegregated switch exhaust air still left a brag-worthy PUE. Nevertheless, as we have demonstrated in prior discussions in this space, an open entrance room with only a handful of uncontained switch racks can force very significant reductions in supply temperature to guarantee some specified maximum server inlet temperature. Since these data centers are intended to operate without mechanical cooling, the uncontained switches will not affect chiller plant operating efficiencies; they could, however, make it impossible to hold the specified server inlet temperatures at all. Therefore, particularly when such a deployment is planned for a less than fully cooperative climate zone, all recognized best practices for managing the airflow of equipment that does not breathe front-to-rear could be critical to the success of the design and mission. These practices range from deploying larger switches in cabinets specially equipped with baffles and ducts that make the cabinet behave front-to-rear, to deploying smaller switches with rack mount boxes and accessories that create front-to-rear behavior.
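To see why a few uncontained switch racks can force the supply setpoint down, here is a back-of-the-envelope mixing calculation in Python, not a CFD result. The recirculation fraction and exhaust temperature rise are assumed values for illustration only, not measurements from any particular facility.

```python
# Back-of-the-envelope sketch: if a fraction of the air reaching server inlets
# is recirculated switch exhaust, the supply setpoint must drop to keep the
# worst-case inlet at or below the specified maximum. All fractions and
# delta-Ts below are assumed values for illustration.

def required_supply_temp_c(max_inlet_c: float,
                           recirc_fraction: float,
                           exhaust_delta_t_c: float) -> float:
    """Simple mass-weighted mixing:
       inlet = (1 - f) * supply + f * (supply + dT)  =>  supply = inlet - f * dT
    where f is the recirculated fraction and dT is the switch exhaust rise."""
    return max_inlet_c - recirc_fraction * exhaust_delta_t_c

# Hypothetical example: hold a 27 C max inlet with 15% recirculation of
# exhaust running 20 C hotter than supply.
print(f"{required_supply_temp_c(27.0, 0.15, 20.0):.1f} C supply required")  # 24.0 C
```

In a facility designed around free cooling, that extra supply-temperature headroom may simply not be available on a hot day, which is the sense in which uncontained switch exhaust can threaten the mission rather than just the PUE.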

Conclusion

The data center design and Open Rack portions of the Open Compute Project clearly rely on standard airflow management best practices, which happen to deliver state-of-the-industry performance because of the way all the various elements are integrated. While the various standards and projects have not yet fully addressed all the implementation issues associated with the departure from the EIA universe, those issues are easily enough addressed by either standard EIA solutions or easily engineered sheet metal accessories. So how do you manage airflow in an Open Compute environment? The same way you do in an EIA environment.


Ian Seaton

Data Center Consultant
