Data centers’ intricate design: HVAC

Data centers are important structures that hold vital information for businesses, schools, public agencies, and private individuals. HVAC systems must be designed with efficiency in mind.

By Consulting-Specifying Engineer April 29, 2016

Respondents

Tim Chadwick, PE, LEED AP, President, AlfaTech Consulting Engineers, San Jose, Calif.

Robert C. Eichelman, PE, LEED AP, ATD, DCEP, Technical Director, EYP Architecture & Engineering, Albany, N.Y.

Barton Hogge, PE, ATD, LEED AP, Principal, Affiliated Engineers Inc., Chapel Hill, N.C.

Bill Kosik, PE, CEM, LEED AP, BEMP, Building Energy Technologist, Chicago

Keith Lane, PE, RCDD, NTS, RTPM, LC, LEED AP BD+C, President/Chief Engineer, Lane Coburn & Associates LLC, Seattle

Robert Sty, PE, SCPM, LEED AP, Principal, Technologies Studio Leader, SmithGroupJJR, Phoenix

Debra Vieira, PE, ATD, LEED AP, Senior Electrical Engineer, CH2M, Portland, Ore.


CSE: Have you specified unique HVAC systems to cool data center projects? This may include liquid cooling, natural ventilation, etc.

Hogge: We see high-performance computing (HPC) pushing the envelope for cooling density. Containment has been superseded by rack-level cooling for extreme power-density applications. We see various approaches to bringing the heat-exchange process as close as possible to the IT hardware. HPC manufacturers are integrating cooling coils with the rack chassis, creating a closed loop that can facilitate near-compressorless cooling, depending on location.

Sty: The mechanical design for the HPC lab at NREL’s Energy Systems Integration Facility uses direct water cooling to the cabinet (75°F supply) with the return water (95°F) waste heat used to heat the adjacent lab facilities and offices. Due to the elevated supply-water temperature required, the data center cooling system uses indirect evaporative cooling in lieu of mechanical refrigeration. This approach contributes significantly to the 1.06-PUE target directed to us by NREL. As the data center scales up from the initial install to the full 10-MW build-out, the potential for reuse of the waste heat grows beyond the ESIF facility to other buildings on campus.
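
To put Sty's figures in rough perspective, the sketch below works back from the cited 10-MW build-out, 1.06 PUE target, and 75°F supply/95°F return water loop to the implied total facility power and the order-of-magnitude water flow carrying the waste heat. The heat-capture fraction and the gpm rule of thumb are illustrative assumptions, not NREL's design calculations.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not NREL's model):
# relate the cited IT load, PUE target, and 75°F/95°F water loop to total
# facility power and the water flow needed to carry the captured waste heat.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power implied by a PUE figure (PUE = total / IT)."""
    return it_load_kw * pue

def waste_heat_flow_gpm(heat_kw: float, delta_t_f: float) -> float:
    """Water flow (gpm) that carries `heat_kw` at a given temperature rise,
    using the common rule of thumb Q[Btu/h] = 500 * gpm * dT[°F]."""
    return (heat_kw * 3412.0) / (500.0 * delta_t_f)

it_load_kw = 10_000        # full 10-MW build-out cited above
pue_target = 1.06          # ESIF PUE target cited above
capture_fraction = 0.90    # assumed share of IT heat captured by the water loop

total_kw = facility_power_kw(it_load_kw, pue_target)
captured_kw = it_load_kw * capture_fraction
flow_gpm = waste_heat_flow_gpm(captured_kw, delta_t_f=95.0 - 75.0)

print(f"Total facility power: {total_kw:,.0f} kW")
print(f"Captured waste heat:  {captured_kw:,.0f} kW (~{flow_gpm:,.0f} gpm at 20°F rise)")
```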

Chadwick: Whenever possible, our recommendation is direct evaporative cooling. We have completed refrigerant-free data centers in hot and humid climates using this approach (and liberal cold-aisle design conditions). Where local air quality or other factors prohibit this, we have used indirect evaporative-cooling solutions such as polymer or metal air-to-air heat exchangers or enthalpy wheels. We have used relatively high-temperature water-cooled system designs to take advantage of significant water-side economizer hours. While we have explored and investigated immersion cooling and other unique cooling systems, the cost of these systems to date has not been justified. However, for higher-density loads or where water-conservation measures are particularly important, these types of systems should be considered.
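
The feasibility check behind Chadwick's refrigerant-free approach is whether an evaporative cooler can hold a liberal cold-aisle setpoint on the design day. The sketch below applies the standard direct-evaporative effectiveness relation; the 90% effectiveness, the 105°F/70°F design condition, and the 90°F cold-aisle limit are illustrative assumptions rather than project data.

```python
# Minimal feasibility check (illustrative assumptions, not project data): can a
# direct evaporative cooler hold a liberal cold-aisle setpoint on the design day?
# Uses the standard effectiveness relation T_supply = T_db - eff * (T_db - T_wb).

def direct_evap_supply_temp(dry_bulb_f: float, wet_bulb_f: float,
                            effectiveness: float = 0.90) -> float:
    """Supply-air temperature leaving a direct evaporative cooler."""
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

dry_bulb_f, wet_bulb_f = 105.0, 70.0   # assumed design-day conditions
cold_aisle_limit_f = 90.0              # liberal cold-aisle limit discussed above

supply_f = direct_evap_supply_temp(dry_bulb_f, wet_bulb_f)
verdict = "holds the setpoint" if supply_f <= cold_aisle_limit_f else "needs trim cooling"
print(f"Evaporative supply air: {supply_f:.1f}°F -> {verdict}")
```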

CSE: What unique HVAC requirements do data center building projects have that you wouldn’t encounter in other buildings?

Kosik: In the past 20 years, there has been a monumental shift in thinking about how data centers are cooled and about their temperature and moisture-content requirements. This shift was brought on by two major developments: data center energy consumption can be significantly lowered by increasing the internal temperature, and computer equipment has become more robust and is no longer as vulnerable to elevated temperatures and wide temperature swings. The uniqueness of this comes from the allowable temperature range of a data center (granted, in the most extreme case) spanning from a low of 41°F to a high of 113°F. I am not aware of any data center that operates under these conditions, but it is a testament to how far computer equipment has evolved over the years.

Chadwick: The most distinctive aspect of data center design is the required supply-air temperature range. Optimal data center designs separate the hot and cold sides of the data center. However, it is somewhat ironic to be referring to "cold-aisle" temperatures as high as 90°F and 90% relative humidity. The high limits that some data centers are using also mean hot-aisle temperatures as high as 110°F. In these spaces, employee concerns related to exposure to harsh environments must be considered. You also must consider derating electrical feeders and even whether light fixtures are UL-listed for operation above the typical 104°F rating. These unique design temperature ranges, however, also allow for unique economization designs that can dramatically improve efficiency.

Hogge: The IT thermal environment presents unique opportunities, including the industry trend to aggressively expand the data center operating range. Operating at higher temperatures and lower humidity levels has expanded engineers’ options for free cooling. We see traditional enterprise users getting onboard and raising operating temperatures to pursue maximum energy efficiency as well. Airflow management is another unique facet of data centers. To drive energy efficiency, airflow management is treated like a process application, with great care taken to effectively match the cooling system to the operating IT load.

CSE: When retrofitting an existing data center building, what challenges have you faced and how have you overcome them?

Sty: One enterprise client engaged us to study options for modifying an empty raised-floor data hall designed well over 15 years ago, when projected IT loads were in the 2 to 4 kW/cabinet range. The 12-in.-high raised floor would not support the new projected loads of 8 to 10 kW/cabinet, and a deeper raised floor was not an option due to slab-to-slab clearances. To overcome this challenge, we investigated in-row and back-of-cabinet cooling technologies. The existing raised floor was used as the piping chase for routing to each coil. The end result was a reduction in the data-hall footprint due to increased power density, higher projected energy efficiencies than traditional computer room air conditioning (CRAC)/computer room air handling (CRAH) unit perimeter cooling, and the ability to capture the remaining data-hall space for desperately needed office space.

Chadwick: Some common challenges with retrofit projects include space constraints and working in critical spaces. Many existing data centers being remodeled, or other spaces being converted to data centers, do not have sufficient space for the typical data center infrastructure. In some cases, this has limited the possible design options, such as air-side economizers or raised-floor cooling. For example, retrofitting an existing 12-in. raised-floor data center presents challenges because the 12-in. floor does not allow sufficient airflow for current design densities. In these cases, the existing floor must be removed or the space must be converted to overhead air distribution. For construction work in active data centers, significant thought must be given to how the retrofit can be completed without impacting ongoing operations; design and construction must not disrupt the existing data center.
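
The airflow arithmetic behind the raised-floor limitation that Sty and Chadwick describe can be sketched with the standard sensible-heat relation; the 20°F air-side temperature rise assumed below is illustrative, not a project value.

```python
# Rough sensible-heat arithmetic (illustrative, not project calculations) showing
# why a shallow 12-in. raised floor struggles at 8 to 10 kW/cabinet:
# CFM = q[Btu/h] / (1.08 * dT[°F]), assuming a 20°F air-side temperature rise.

def cabinet_airflow_cfm(load_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow required to remove a cabinet's sensible load at a given delta-T."""
    return (load_kw * 3412.0) / (1.08 * delta_t_f)

for kw in (4, 8, 10):
    print(f"{kw:>2} kW/cabinet -> ~{cabinet_airflow_cfm(kw):,.0f} cfm")
# Roughly 630 cfm at 4 kW versus ~1,580 cfm at 10 kW: more than double the air
# that the underfloor plenum would have to deliver to each cabinet.
```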

Hogge: In retrofitting legacy data centers, providing a cooling system that can support significantly increased power density in the same footprint can be challenging. The challenge extends beyond the IT environment to the mechanical systems that support critical power systems. Owners’ requirements for increased fuel-storage volumes, added redundancy, and operational enhancements to fuel systems can add complexity to systems that must remain robust and simple to operate. Cooling indoor generators of increased size is another design challenge, as airflow management becomes critical to maintaining an acceptable environment for the generator. We are increasingly using computational fluid dynamics (CFD) software to inform our designs for complex airflow challenges.