The Convergence of Tech: Why IT and Facilities are Coming Together

by Ian Seaton | Sep 14, 2022 | Blog

There is still only one generally applicable explanation for the convergence of tech that brings IT and facilities together: someone upstairs dictated it.

Granted, there are some variations on this theme. Where the core business is turning a profit from providing space, utilities and infrastructure for data center operations, facilities may direct much of IT’s behavior. Likewise, where the core business is transacting data, as at trading companies or scientific labs, IT may direct much of facilities’ behavior. For the rest of the industry, there are plenty of good reasons why IT and facilities should be coming together, mostly having to do with their very real interdependence.

When heroes converge?

There is an interdependence between IT and facilities on the selection of IT equipment and design of the mechanical plant that can make both parties look like heroes or lead both to dust off resumes, depending on how that interdependence is understood and managed. One aspect of that interdependence involves specifying high ΔT servers versus low ΔT servers.

Just a few years ago, that distinction would have been characterized as blade servers versus pizza box servers, but today that will look more like premium servers versus (dare we say it?) cheap servers. High ΔT servers can get by with as little as 80 CFM per kW, while low ΔT servers can gobble up as much as 165 CFM per kW.
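To put rough numbers behind that spread, the standard sensible-heat relationship ties airflow per kW directly to server ΔT. Here is a minimal sketch in Python; the 300 kW deployment is a made-up illustration, not a figure from this article.

```python
# Airflow vs. server delta-T, using the standard sensible-heat relationship:
# Q [BTU/hr] ~= 1.08 * CFM * delta-T [deg F], with 1 kW = 3,412 BTU/hr.

def cfm_per_kw(delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove 1 kW at a given server delta-T (deg F)."""
    return 3412.0 / (1.08 * delta_t_f)

def implied_delta_t_f(cfm_per_kw_value: float) -> float:
    """Server delta-T (deg F) implied by an airflow intensity (CFM per kW)."""
    return 3412.0 / (1.08 * cfm_per_kw_value)

for cfm in (80, 165):
    dt = implied_delta_t_f(cfm)
    print(f"{cfm:>3} CFM/kW -> delta-T of roughly {dt:.0f} F ({dt * 5 / 9:.0f} C)")

# A hypothetical 300 kW room shows how far apart the airflow demands land:
for cfm in (80, 165):
    print(f"300 kW at {cfm} CFM/kW needs about {300 * cfm:,} CFM of supply air")
```

Roughly 24,000 CFM versus nearly 50,000 CFM for the same IT load is exactly the kind of gap that can leave facilities unable to deliver enough air even when the chiller tonnage looks fine on paper.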

If, therefore, IT decides to cheap out so they can get more servers for their allotted budget, without sharing this understanding with facilities, they could very easily find themselves with a few boxes of servers they will never be able to use because there is not enough cooling capacity. The chiller likely has plenty of capacity in tons, but facilities cannot deliver enough airflow.

What about free cooling?

Another implication of server ΔT resides with free cooling. In liquid free cooling, an economizer in series with the chiller can provide partial free cooling when ambient conditions, minus all the requisite approach temperatures, are less than return temperatures; whereas a parallel economizer is either in or out of free cooling mode. Other economizer systems such as indirect evaporative cooling or energy recovery wheel cells deliver partial free cooling when outside air is cooler than data center return air.

If facilities is planning for low ΔT servers, they may see no value in series water-side economization or in any of the indirect air-to-air heat exchanger cooling systems. If higher ΔT servers are actually deployed, the result is more than merely missed opportunities to save operating expenses with more partial free cooling. Higher ΔT servers could actually avoid full mechanical cooling all year in many areas, meaning significant capital was wasted on an oversized mechanical plant.
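Here is a minimal sketch, in Python, of the hour-by-hour decision described above for a series water-side economizer. The supply setpoint, the 7 °F approach stack-up, and the sample ambient hour are assumed, illustrative values, not numbers from this article.

```python
# How server delta-T changes the economizer decision for a single hour.

def cooling_mode(ambient_f: float, supply_f: float, server_dt_f: float,
                 approach_f: float = 7.0) -> str:
    """Classify one hour for a series water-side economizer."""
    return_f = supply_f + server_dt_f      # return air temp rises with server delta-T
    effective_f = ambient_f + approach_f   # ambient plus the stack-up of approach temps
    if effective_f <= supply_f:
        return "full free cooling"         # economizer alone meets the supply setpoint
    if effective_f < return_f:
        return "partial free cooling"      # economizer pre-cools; the chiller trims the rest
    return "mechanical cooling only"       # economizer offers nothing this hour

# A warm hour where only the higher delta-T servers still see any free cooling:
for dt in (20, 40):                        # low vs. high delta-T servers (deg F)
    mode = cooling_mode(ambient_f=85.0, supply_f=68.0, server_dt_f=dt)
    print(f"server delta-T {dt} F -> {mode}")
```

Run the same comparison across a full year of site weather data and the case for (or against) series economization at a given server ΔT becomes plain.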

There is another interdependence between facilities and IT on the subject of server selection and mechanical plant design. Unless facilities belongs to ASHRAE and attends the TC9.9 sessions, they very likely have no idea about the different server classes and their operating temperature envelopes. IT, on the other hand, has surely bumped into the information, but likely didn’t realize the relevance and glossed over it. More than any other element of data center design, this area of interdependence is where IT and facilities should start drinking out of the same coffee pot, i.e., where facilities plays an active role in server selection and IT plays an active role in mechanical plant design.

Navigated correctly, this process could easily result in IT increasing expenditures on IT equipment by 5 percent or maybe even 20 percent, while facilities reduces capital expenditures by 50-75 percent and annual operating costs by 50 percent or more. Conversely, if bungled, IT may come in on budget and facilities may come in on budget, and the cost to the company will be 25-50 percent wasted capital plus a staggering annual operating waste – the gift that keeps on taking.

Done correctly, this conversation will have IT contributing the investment premiums for classes of servers that can operate at extended temperature thresholds, along with how many hours at those thresholds can be allowed while still meeting their IT equipment reliability requirements, countered by facilities’ input on the capital required for downsized or eliminated chillers and the annual operating cost of maintaining supply temperatures under the specified thresholds.

A long, serious talk

This conversation should be a long, serious one that chases down multiple scenarios until the two parties agree on a combined lowest total cost of ownership concept that meets the organization’s service and reliability expectations. Their recommendation should then be presented to ownership as a collaborative pitch to be sure that no line-item budget veto dooms the project to failure, and, oh yes, to assure that high fives and bonus checks are liberally and fairly distributed.
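As one illustration of that scenario-chasing, the lowest combined total cost of ownership only reveals itself when the two budgets are added together. The dollar figures in this Python sketch are invented placeholders chosen to mirror the percentages above, not numbers from this article.

```python
# Combined TCO for two ways of spending the same two budgets (hypothetical figures).

def combined_tco(it_capex: float, facilities_capex: float,
                 annual_opex: float, years: int = 10) -> float:
    """Capital plus operating cost over the planning horizon (no discounting)."""
    return it_capex + facilities_capex + annual_opex * years

# Scenario A: cheap low delta-T servers, full-sized chiller plant, unilateral budgets.
scenario_a = combined_tco(it_capex=4_000_000, facilities_capex=3_000_000, annual_opex=600_000)

# Scenario B: ~15% server premium, mechanical plant capital cut ~60%, operating cost cut ~50%.
scenario_b = combined_tco(it_capex=4_600_000, facilities_capex=1_200_000, annual_opex=300_000)

print(f"Scenario A (separate decisions): ${scenario_a:,.0f}")
print(f"Scenario B (joint design):       ${scenario_b:,.0f}")
print(f"Ten-year difference:             ${scenario_a - scenario_b:,.0f}")
```

Note that in Scenario B both line-item budgets look worse or merely adequate in isolation; the savings only show up when someone is looking at the combined number.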

The interdependence of server selection and mechanical plant design should drive the convergence of these two disciplines, but there are other elements that should be part of the discussion to be sure no one is making unilateral decisions that could undo all the promises we have made to ownership in our self-congratulatory smugness. For example, facilities and IT share responsibility for designing and maintaining a space that maintains good separation between supply air and return air.

If that breaks down or is conceived poorly, then all the promises of partial free cooling and reduced mechanical plant sizing and fan energy savings turn into lies. That’s a bit harsh. They just turn into bone-headed mistakes that will cost you credibility for years to come. Likewise, introducing any ICT equipment that does not breathe front-to-rear, without some accommodation to redirect its airflow, will create what look like air-separation holes to the mechanical plant. That’s another story with an unhappy ending.

Finally, tape storage and solid state storage have very different thresholds for temperature rate of change, so the cost differences between these two technologies, along with the worst-case possible hourly rate of temperature change, need to be another part of this conversation between IT and facilities.
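A minimal sketch of the rate-of-change check that conversation should include, assuming commonly cited ASHRAE-style limits of roughly 5 °C per hour for tape and 20 °C per hour for other IT equipment; confirm the actual limits against your own equipment specifications.

```python
# Check a supply-air temperature series against per-media rate-of-change limits.
# The limits are assumptions based on commonly cited ASHRAE-style guidance; verify
# them against the deployed equipment's specifications.

RATE_LIMITS_C_PER_HR = {"tape": 5.0, "solid state": 20.0}

def worst_hourly_swing_c(hourly_temps_c: list[float]) -> float:
    """Largest hour-to-hour temperature change (deg C) in the series."""
    return max(abs(b - a) for a, b in zip(hourly_temps_c, hourly_temps_c[1:]))

def media_at_risk(hourly_temps_c: list[float]) -> list[str]:
    """Storage media whose rate-of-change limit this series would exceed."""
    swing = worst_hourly_swing_c(hourly_temps_c)
    return [media for media, limit in RATE_LIMITS_C_PER_HR.items() if swing > limit]

# Hypothetical supply temperatures during an aggressive economizer transition (deg C):
temps = [22.0, 23.0, 30.0, 29.0, 24.0]
print(f"Worst hourly swing: {worst_hourly_swing_c(temps):.1f} C")
print(f"Media at risk: {media_at_risk(temps) or 'none'}")
```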

Talk – or go extinct

In conclusion, it is probably safe to say that the status quo does not depend on any convergence of IT and facilities. It is probably also safe to say that today’s status quo is tomorrow’s dinosaur. So who survives? There are always going to be all sorts of market drivers to separate winners from losers, but ultimately one of those elements is going to be where mechanical systems are designed around a knowledge of ICT equipment differences and where ICT equipment is specified and deployed with an understanding of mechanical options.


Ian Seaton

Data Center Consultant
