Data centers’ intricate design: HVAC

Data centers are important structures that hold vital information for businesses, schools, public agencies, and private individuals. HVAC systems must be designed with efficiency in mind.




Tim Chadwick, PE, LEED AP, President, AlfaTech Consulting Engineers, San Jose, Calif.

Robert C. Eichelman, PE, LEED AP, ATD, DCEP, Technical Director, EYP Architecture & Engineering, Albany, N.Y.

Barton Hogge, PE, ATD, LEED AP, Principal, Affiliated Engineers Inc., Chapel Hill, N.C.

Bill Kosik, PE, CEM, LEED AP, BEMP, Building Energy Technologist, Chicago

Keith Lane, PE, RCDD, NTS, RTPM, LC, LEED AP BD+C, President/Chief Engineer, Lane Coburn & Associates LLC, Seattle

Robert Sty, PE, SCPM, LEED AP, Principal, Technologies Studio Leader, SmithGroupJJR, Phoenix

Debra Vieira, PE, ATD, LEED AP, Senior Electrical Engineer, CH2M, Portland, Ore.



The design of this 7th-floor data center, handled by engineers from AlfaTech Consulting Engineers, includes a built-up cooling system. Courtesy: AlfaTech Consulting Engineers

CSE: Have you specified unique HVAC systems to cool data center projects? This may include liquid cooling, natural ventilation, etc.

Hogge: We see high-performance computing (HPC) pushing the envelope for cooling density. Containment has been superseded by rack-level cooling for extreme power-density applications. We see various approaches to bringing the heat-exchange process as close as possible to the IT hardware. HPC manufacturers are integrating cooling coils with the rack chassis, creating a closed loop that can facilitate near-compressorless cooling, depending on location.

Sty: The mechanical design for the HPC lab at NREL's Energy Systems Integration Facility uses direct water cooling to the cabinet (75°F supply) with the return water (95°F) waste heat used to heat the adjacent lab facilities and offices. Due to the elevated supply-water temperature required, the data center cooling system uses indirect evaporative cooling in lieu of mechanical refrigeration. This approach contributes significantly to the 1.06-PUE target directed to us by NREL. As the data center scales up from the initial install to the full 10-MW build-out, the potential for reuse of the waste heat grows beyond the ESIF facility to other buildings on campus.
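The 1.06 target Sty cites is a power usage effectiveness (PUE) figure: the ratio of total facility power to IT power, so 1.06 means only about 6% overhead for cooling, power distribution, and lighting. A minimal sketch of the arithmetic (the 10-MW figures below simply echo the build-out mentioned above; the overhead split is illustrative, not project data):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# At a 10-MW IT build-out, a 1.06 PUE implies only ~600 kW of
# non-IT overhead (cooling, distribution, lighting):
overhead_kw = (1.06 - 1.0) * 10_000   # 600 kW
print(pue(10_600, 10_000))             # 1.06
```

The waste-heat reuse Sty describes does not lower PUE directly (the heat is rejected either way), but it offsets heating energy elsewhere on campus.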

Chadwick: Whenever possible, our recommendation is direct evaporative cooling. We have completed refrigerant-free data centers in hot and humid climates using this approach (and liberal cold-aisle design conditions). Where local air quality or other factors prohibit this, we have used indirect evaporative-cooling solutions, such as air-to-air polymer or metal heat exchangers or enthalpy wheels. We have used relatively high-temperature water-cooled system designs to take advantage of significant water-side economizer hours. While we have explored and investigated immersion cooling and other unique cooling systems, the cost of these systems to date has not been justified. However, for higher-density loads or where water-conservation measures are particularly important, these types of systems should be considered.
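Direct evaporative cooling works because the supply air can approach the outdoor wet-bulb temperature. A common first-pass estimate uses the wet-bulb effectiveness of the media; the sketch below assumes a typical 0.85 to 0.95 effectiveness for rigid media and illustrative design-day temperatures, not figures from Chadwick's projects:

```python
def direct_evap_supply_temp(t_db: float, t_wb: float,
                            effectiveness: float = 0.9) -> float:
    """Leaving dry-bulb temperature (deg F) of a direct evaporative
    cooler: supply approaches wet bulb in proportion to media
    effectiveness (assumed ~0.85-0.95 for rigid media)."""
    return t_db - effectiveness * (t_db - t_wb)

# 105 deg F dry bulb / 70 deg F wet bulb design day, 90% effective media:
print(direct_evap_supply_temp(105, 70, 0.9))  # 73.5 deg F
```

With the liberal cold-aisle setpoints discussed below (80 to 90°F), even a hot design day can yield compliant supply air, which is what makes refrigerant-free designs feasible.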

CSE: What unique HVAC requirements do data center building projects have that you wouldn't encounter in other buildings?

Kosik: In the past 20 years, there has been a monumental shift in thinking on how data centers are cooled and in their temperature and moisture-content requirements. This shift was brought on by two major developments: data center energy consumption can be significantly lowered by increasing the internal temperature, and computer equipment became more robust and is no longer as vulnerable to elevated temperatures and wide temperature swings. The uniqueness of this comes from the allowable temperature range of a data center (granted, in the most extreme case) swinging from a low of 41°F to a high of 113°F. I am not aware of any data center that operates under these conditions, but this is a testament to how computer equipment has evolved over the years.

Chadwick: The most distinctive aspect of data center designs is the required supply-air temperature range. Optimal data center designs use a separation between the hot and cold sides of the data center. However, it is somewhat ironic to be referring to "cold-aisle" temperatures as high as 90°F and 90% relative humidity. These high limits that some data centers are using also mean hot-aisle temperatures as high as 110°F. In these spaces, employee concerns related to exposure to harsh environments must be considered. You also must consider derating electrical feeders and even whether light fixtures are UL-listed for operation above the typical 104°F rating. These unique design temperature ranges, however, also allow for unique economization designs that can dramatically improve efficiency.

Hogge: The IT thermal environment presents unique opportunities, including the industry trend to aggressively expand the data center operating range. Operating at higher temperatures and lower humidity levels has expanded engineers' options for free cooling. We see traditional enterprise users getting onboard and raising operating temperatures to pursue maximum energy efficiency as well. Airflow management is another unique facet of data centers. To drive energy efficiency, airflow management is treated like a process application, with great care taken to effectively match the cooling system to the operating IT load.
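Matching airflow to IT load, as Hogge describes, comes down to the standard-air sensible heat relation Q[Btu/h] = 1.08 × CFM × ΔT[°F]. A minimal sketch using an assumed cabinet load and temperature rise (illustrative numbers, not from a specific project):

```python
def required_airflow_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow needed to remove a sensible IT load, using the
    standard-air relation Q[Btu/h] = 1.08 * CFM * dT[deg F]."""
    btu_per_hr = it_load_kw * 3412.14  # kW -> Btu/h
    return btu_per_hr / (1.08 * delta_t_f)

# An 8-kW cabinet with a 20 deg F rise across the servers:
print(round(required_airflow_cfm(8, 20)))  # ~1264 CFM
```

The relation also shows why wider hot/cold-aisle temperature splits pay off: doubling ΔT halves the airflow, and fan power drops faster than linearly with flow.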

CSE: When retrofitting an existing data center building, what challenges have you faced and how have you overcome them?

Sty: One particular enterprise client engaged us to provide a study and options to modify an empty raised-floor data hall designed well over 15 years ago when projected IT loads were in the 2 to 4 kW/cabinet range. The 12-in.-high raised floor would not support the new projected loads of 8 to 10 kW/cabinet, and a deeper raised floor was not an option due to slab-to-slab clearances. To overcome this challenge, we investigated in-row and back-of-cabinet cooling technologies. The existing raised floor was used as the piping chase for routing to each coil. The end result was a reduction in the data-hall footprint due to increased power density, higher projected energy efficiencies over traditional CRAC/computer room air handler unit perimeter cooling, and the ability to capture the remaining data-hall space for desperately needed office space.
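The footprint reduction Sty mentions falls directly out of the density arithmetic: the same IT load at 8 to 10 kW/cabinet needs far fewer cabinets than at 2 to 4 kW/cabinet. A trivial sketch with a hypothetical hall load (the 400-kW figure is assumed for illustration):

```python
import math

def cabinets_needed(total_it_kw: float, kw_per_cabinet: float) -> int:
    """Cabinet count for a given IT load at a given power density."""
    return math.ceil(total_it_kw / kw_per_cabinet)

# Hypothetical 400-kW data hall, before and after the density increase:
print(cabinets_needed(400, 4))   # 100 cabinets
print(cabinets_needed(400, 10))  # 40 cabinets
```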

Chadwick: Some common challenges with retrofit projects include space constraints and working in critical spaces. Many existing data centers to be remodeled, or other spaces being converted to data centers, do not have sufficient space to allow for the typical data center infrastructure. In some cases, this has limited the possible design options, such as air-side economizers or raised-floor cooling. For example, retrofitting an existing 12-in. raised-floor data center space presents challenges because the 12-in. floor space does not allow sufficient airflow for current design densities. In these cases, the existing floor must be removed or the space must be converted to overhead air distribution. In the case of construction work in active data centers, significant thought must be given to how the retrofit work can be completed without impacting ongoing operations; design and construction must not disrupt the existing data center.

Hogge: In retrofitting legacy data centers, providing a cooling system that can support significantly increased power density in the same footprint can be challenging. The challenge extends beyond the IT environment and includes supporting mechanical systems for critical power systems. Owners' requirements for increased fuel-storage volumes, added redundancy, and operational enhancements to fuel systems can create complexity that must remain robust and simple to operate. Cooling for indoor generators of increased size is another design challenge, as airflow management becomes critical to maintain an acceptable environment for the generator. We are increasingly using CFD software to inform our designs for complex airflow challenges.


Ryan, United States, 04/29/16 12:24 PM:

Nice article and interviews. A trend we have seen at MonMan over the past 10 years has been end users and cooling manufacturers leading the way, with the engineers playing catch-up. No offense intended, especially to Barton Hogge, who we've worked with before on electrical and HVAC work for data centers in the Southeast. He's a great guy and a great engineer.

I remember a seminar I attended... must have been back in the 2007 time frame, where a 'highly prominent' figure in the Data Center industry was speaking. Without mentioning names, let's just say this person was with an 'institute' that focused on 'uptime'.

He claimed that more engineering goes into designing a strip mall than goes into designing a data center.

He further claimed, and predicted, that in the coming years, consulting engineers would be pushed out of the data center space altogether, as new technologies came into play that required little to no real engineering.

Were I a consulting engineer, I would be highly offended by these comments! Heck, I was still highly offended!

In a way, it seems this has come to pass. The esteemed engineers interviewed for this piece spoke of new, closed-loop systems with the heat transfer happening not just in the rack, but in the server itself. We've already moved beyond hot and cold aisle containment, people - a technology that, a few years ago, was the latest and greatest.

A data center customer can now easily tabulate their server load, call a cooling solutions manufacturer, and in a day or so, have a complete proposal, price included, for a cooling system that is nearly 'plug and play'.

Who is driving this? Is it the big end users with mega data centers? Is it the manufacturers who are always pushing for 'the next best thing'?

I think it's a mixture of both.

As someone involved in data centers for 15+ years, I've seen a lot of trends develop. Perhaps the most obvious trend is the desire of manufacturers to work directly with the end users, and to push their proprietary systems into the overall building design. Many consultants are brought on board only after some of these major decisions have been made.

I hope that prediction from 2007 doesn't come to pass, and consulting engineers aren't pushed out of the process altogether. It seems if some had their way, that's exactly what they would hope for.

A construction project in any industry, without the guidance of a seasoned consulting engineer, is doomed to fail.

A word of caution to end users: don't cut consultants out of the process. A word of caution to manufacturers: don't fancy yourself as the consultant.

Stay cool this year,

Ryan Hulland, MonMan