Closed-Loop Water Cooling for IT Racks

A water-chilled, closed-loop cooling system mounted on an IT equipment rack makes it possible to achieve hardware densities and power consumption levels that have been difficult—if not impossible—to support with conventional HVAC systems. In particular, the system allows data centers to resolve specific hot spot occurrences without revamping the overall infrastructure.



The deployment of high-density racks is creating power and cooling challenges for data centers worldwide. Server densification is intended to create efficiencies in floor space, cabling and systems management. However, the growth in power density (watts per U) with each new server generation is causing data centers to limit rack utilization based on present cooling capacity.

But several new cooling solutions are coming on the market that include highly efficient rack enclosures capable of supporting high power and heat loads. One such solution is a water-chilled, closed-loop cooling system.

This type of system incorporates modular fans and air-to-liquid heat exchangers to remove high levels of heat generated by advanced server and mass storage systems. A water-chilled, closed-loop cooling system allows a data center to add computing power with minimal impact on the facility's heat load, thus extending the life of the data center.

Originally, data centers were designed to support large, water-cooled mainframes that consumed lots of power and generated intense heat in concentrated areas. As enterprise computers evolved, data centers changed to support racks of multi-processor servers and storage systems that spread the power and cooling requirements over a larger area. Although this trend allowed data centers to scale easier, it created power distribution, cabling and system management challenges. The emergence of 1U servers and blade servers allowed organizations to consolidate their data center infrastructures, decrease cable clutter and streamline server management. However, most data centers are having difficulty adjusting to the effect of high-density racks on power and cooling resources.

A fully loaded 42U rack with dual-processor (2P) 1U servers and storage drives requires more than 12 kilowatts (kW) of power. A 42U rack with 96 half-height BL p-Class blade servers, plus six 1U BL p-Class power enclosures, requires 28 kW of power. As data centers try to accommodate more of these high-density racks, they are moving toward high-amperage, three-phase infrastructures. Three-phase distribution is typically more efficient than single-phase because, at the same line voltage and current, it delivers √3 (about 1.73) times the power, or more than 150% of the maximum available single-phase power.
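The √3 relationship behind that figure can be checked directly. A minimal sketch; the 208 V line-to-line voltage and 30 A current below are illustrative assumptions, not values from the article:

```python
import math

V_LL = 208.0   # line-to-line voltage in volts (illustrative assumption)
I = 30.0       # line current in amps (illustrative assumption)

single_phase_watts = V_LL * I                 # P = V * I
three_phase_watts = math.sqrt(3) * V_LL * I   # P = sqrt(3) * V_LL * I

ratio = three_phase_watts / single_phase_watts
print(f"three-phase delivers {ratio:.0%} of single-phase power")  # ~173%
```

At the same conductor current, the three-phase feed carries roughly 73% more power, which is why it exceeds the "more than 150%" threshold cited above.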

More power means more heat. Virtually all power consumed by rack-mounted equipment is converted to sensible heat, which increases the temperature of the environment. The sensible heat load is typically expressed in BTU/hr, where 1 watt equals 3.413 BTU/hr. Therefore, the heat load of each rack can be calculated as follows:

Heat Load = Power [watts] × 3.413 BTU/hr per watt

For example, the heat load for a two-processor 1U server is:

577 watts × 3.413 BTU/hr/watt = 1,969 BTU/hr

This means that the heat load of a fully loaded 42U rack of servers is 82,710 BTU/hr. In the United States, cooling capacity is often expressed in “tons” of refrigeration, derived by dividing the sensible heat load by 12,000 BTU/hr per ton. The cooling capacity needed for a fully loaded rack of two-processor servers is:

82,710 BTU/hr ÷ 12,000 BTU/hr per ton = 6.9 tons
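The conversion chain above can be expressed as a short calculation, using the article's figures of 577 W per 1U server and 42 servers per rack:

```python
WATTS_TO_BTU_HR = 3.413   # 1 watt of load = 3.413 BTU/hr of sensible heat
BTU_HR_PER_TON = 12_000   # 1 ton of refrigeration = 12,000 BTU/hr

def heat_load_btu_hr(watts):
    """Sensible heat load produced by equipment drawing `watts`."""
    return watts * WATTS_TO_BTU_HR

def cooling_tons(btu_hr):
    """Refrigeration capacity needed for a given sensible heat load."""
    return btu_hr / BTU_HR_PER_TON

server_load = heat_load_btu_hr(577)      # ~1,969 BTU/hr per 1U server
rack_load = heat_load_btu_hr(577 * 42)   # ~82,710 BTU/hr per 42U rack
print(f"{cooling_tons(rack_load):.1f} tons")  # 6.9 tons
```

Note that the rack total is computed from watts first (577 × 42 × 3.413), matching the article's 82,710 BTU/hr figure.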

Few existing data centers were designed to provide this amount of cooling capacity for a single rack, and few are capable of distributing adequate airflow directly to rows of such racks.

Many data centers limit power consumption and cooling requirements by limiting rack density (utilization). The reasonable limit of rack power and cooling capacity for a conventional forced-air (HVAC) cooled data center is 8 kW per rack, or 27,300 BTU/hr per rack. For power densities approaching 15 kW per rack, facility planners can use advanced thermal modeling technologies to help determine the best layout of computing rooms and provisioning of cooling resources. For racks requiring more than 15 kW, the latest cooling techniques use a proven medium—water. Water can remove 3,500 times the amount of heat as an equivalent volume of air.

A water-chilled, closed-loop cooling system is designed for data centers that have reached the limit of their cooling capability or that need to reduce the effect of high-density racks on their facility. This type of cooling technology supports fully populated high-density racks while eliminating the need to add more facility air-conditioning capacity.

The rack enclosure contains three fan modules and three heat exchanger modules that slide into a cabinet mounted on the left side of the rack. Each fan module contains a variable-speed circulation fan, and each heat exchanger (HEX) module contains an air-to-water heat transfer device. Each HEX module discharges cold air to the front of the rack via a side portal. Chilled water for the heat exchangers is provided by the facility's chilled water system or by a dedicated unit.

Most server designs use a front-to-back cooling principle. The water-chilled, closed-loop cooling system evenly distributes cold supply air at the front of the rack of equipment. Each server receives an adequate supply of air, regardless of its position within the rack or the density of the rack. The servers expel warm exhaust air in the rear of the rack. The fan modules redirect the warm air from the rear of the rack into the heat exchanger modules, where the air is re-cooled and then re-circulated to the front of the rack. Any condensation that forms is collected in each heat exchanger module and is carried through a discharge tube to a condensation tray integrated in the base assembly.

For controlled airflow, the rack enclosure must be closed during normal operation. The enclosure has solid front and rear doors, sidewalls, and top and bottom covers. The front and back doors must be kept closed to ensure that the maximum amount of the cool air is retained within the system. All rack space must be filled by equipment or enclosed by blanking panels so that the cool air is routed exclusively through the equipment and cannot bypass through or around the rack.

Chilled water for the heat exchanger is regulated by the water group controller, a module that contains a magnetic solenoid valve, check valve, flow meter and condensate pump. The water group is connected to the facility's chilled water system (or to a dedicated chiller unit) with flexible 33.8-in. long inlet and outlet hoses. The condensate drain hose, overflow hose and main inlet and outlet hoses can be routed through the back of the cabinet or downward into a raised tile floor. The inlet and outlet hoses are terminated with quick-connect couplings.

This type of system requires approximately 1.5 times the width and 1.25 times the depth of a standard server rack (to allow for the fan and heat exchanger modules and front and rear airflow). However, this type of enclosure has enough cooling capacity to support the heat load of a rack of equipment consuming 30 kW. This heat load is equivalent to that generated by three 10-kW racks, yet the water-chilled, closed-loop cooling system rack occupies 40% less floor space than three standard racks. Likewise, the system supports a heat load equivalent to 3.75 8-kW racks (30 kW/8 kW per rack = 3.75 racks) while occupying 65% less floor space and reducing the overall heat load on the facility.
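The floor-space comparison can be sketched with the stated dimensions (1.5× width, 1.25× depth), using one standard rack's footprint as the unit of area. This counts cabinet area only; the article's 40% and 65% figures presumably also credit the aisle and service clearances the displaced racks would require:

```python
# Footprints measured in units of one standard rack's floor area.
closed_loop_footprint = 1.5 * 1.25   # 1.875 standard-rack footprints
replaced_racks = 3                   # three 10-kW racks (30 kW total)

savings = 1 - closed_loop_footprint / replaced_racks
print(f"floor-space savings vs. three racks: {savings:.1%}")  # 37.5%
```

Strict cabinet-to-cabinet arithmetic gives 37.5%, in line with the article's roughly 40% claim once clearances are considered.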

The water-chilled, closed-loop cooling system can extend the life and capacity of data centers with limited cooling resources. It can integrate with existing and future server cabinets and does not affect how servers are currently deployed, operated and maintained. The water-chilled, closed-loop cooling system:

  • Provides a path to increase power density up to 30 kW per rack

  • Supports fully populated high-density racks while reducing the overall heat load on the facility

  • Saves valuable floor space and cooling resources that would be required for under-utilized racks
