Using warm water for data center cooling

There are many ways to cool a data center. Engineers should explore the various cooling options and apply the solution that’s appropriate for the application.


This article is peer-reviewed.

Learning objectives:

  • Apply the various ways to cool a data center.
  • Decide which data center cooling options work best in each part of the world.
  • Model different ways to design a data center by making use of various cooling technologies.

The Slinky and IT systems. Can there really be common attributes between a spiral metal toy invented in the 1940s and a revolutionary technology that impacts most people on this planet? This author thinks so. If you have ever played with a Slinky or seen one in action on a staircase, you'll notice that as it travels down the stairs, the mass of the spring shifts from the trailing end to the leading end. This shifting of the mass causes the trailing end to eventually overtake the leading end, jumping over it and landing on the next stair. If the staircase went on forever, so would the Slinky.

The meaning behind this visual metaphor is that as soon as things seem to settle out in the information technology (IT) sector, some disruptive force changes the rules and leapfrogs right over the status quo into completely new territory. This is also the vision this author has in mind when thinking about cooling systems for data centers, which are tightly coupled to the IT systems they serve. By the time the HVAC equipment manufacturers have it figured out, the IT industry throws them a curveball and creates a new platform that challenges the status quo in maintaining appropriate temperature and moisture levels inside the data center. But this is not new: the evolution of powerful computers has continually pushed the limits of the power-delivery and cooling systems required to maintain their reliability goals.

A brief history of (computer) time

Early in the development of large, mainframe computers used in the defense and business sectors, one of the major problems that the scientists and engineers faced (and still do today) was the heat generated by the innards of the computer. Some resorted to removing windows, leaving exterior doors open, and other rudimentary approaches that used ambient air for keeping things cool. But others had already realized that water could be a more practical and operationally viable solution to keep the computers at an acceptable temperature. Different generations of computers demonstrate this point:

  • UNIVAC, released in 1951, demanded 120 kW and required 52 tons of chilled water cooling. It had an electrical density of 100 W/sq ft.
  • Control Data Corp.'s CDC 7600 debuted in the early 1970s and consumed 150 kW. (The primary designer of the CDC 7600 went on to found Cray Research.) It was benchmarked at 10 megaflops and used an internal refrigerant cooling system that rejected heat to an external water loop. (FLOPS, or flops, is an acronym for floating-point operations per second.) It had an electrical density of 200 W/sq ft.
  • In 1990, IBM introduced the ES/9000 mainframe computer, which consumed 166 kW. It is interesting to note that 80% of the machine's heat was dissipated using chilled-water cooling; the remainder was rejected to the air. This system had an electrical density of 210 W/sq ft.
  • The first part of the 21st century ushered in some pretty amazing advances in computing. In 2014, Hewlett-Packard announced a new high-performance computer—also water-cooled—that is capable of 1.2 quadrillion calculations/sec peak performance ("petascale" computing capability, defined as 10^15 flops). Computing power of this scale, designed for energy and space efficiency, yields a power density of more than 1,000 W/sq ft.

Moving toward today's technology

One can glean from this information that it wasn't until the late 20th/early 21st century that computing technology really took off. New processor, memory, storage, and interconnection technologies resulted in more powerful computers that use less energy on a per-instruction basis. But one thing remained constant: All of this computationally intensive technology, enclosed in ever-smaller packages, produced heat—a lot of heat.

As the computer designers and engineers honed their craft and continued to develop unbelievably powerful computers, the thermal engineering teams responsible for keeping the processors, memory modules, graphics cards, and other internal components at an optimal temperature had to develop innovative and reliable cooling solutions to keep pace. For example, modern-day computational science may require a computer rack that houses close to 3,000 cores, roughly the equivalent of 375 servers, in a single rack. This equates to an electrical demand (and corresponding cooling load) of 90 kW per rack, which yields a data center with an electrical density of considerably more than 1,000 W/sq ft, depending on the data center layout and the amount of other equipment in the room. With numbers like these, one thing is clear: Conventional air cooling will not work in this type of environment.
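To make that arithmetic concrete, the short sketch below works through a rack-density estimate of this kind. The core count, per-core draw, and floor-area allocation are illustrative assumptions, not figures from any specific installation.

```python
# Illustrative rack-density arithmetic; all inputs are assumptions,
# not measurements from a specific installation.

cores_per_rack = 3000            # roughly 375 servers at 8 cores each (assumed split)
watts_per_core = 30.0            # assumed average draw per core, W

rack_kw = cores_per_rack * watts_per_core / 1000.0   # ~90 kW per rack

# Room-level density depends on layout: each rack's footprint plus its
# share of aisles, power distribution, and cooling equipment.
area_share_per_rack_sqft = 60.0  # assumed floor area attributed to one rack

room_density_w_per_sqft = rack_kw * 1000.0 / area_share_per_rack_sqft

print(f"Rack load: {rack_kw:.0f} kW")
print(f"Room-level density: {room_density_w_per_sqft:,.0f} W/sq ft")
```

With these assumed numbers, the rack lands at 90 kW and the room at roughly 1,500 W/sq ft; a tighter layout or heavier racks pushes the density higher still.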

Current state of data center cooling

Data center cooling systems in current, common use range from split-system, refrigerant-based equipment to more complex (and sometimes exotic) arrangements, such as liquid immersion, in which modified servers are submerged in a mineral oil-like fluid; the circulating fluid becomes the conduit for heat rejection, eliminating heat transfer to the ambient air. Other complex systems, such as pumped or thermosiphon carbon-dioxide cooling, also offer very high efficiencies in terms of the volume of heat-rejection medium needed: 1 kg of carbon dioxide absorbs the same amount of heat as 7 kg of water. This can reduce piping and equipment sizes as well as energy costs.
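To put the 7-to-1 figure in context, the sketch below compares the mass flow needed to reject the same load with pumped CO2 (using its latent heat of vaporization) and with chilled water (using sensible heat). The latent-heat value and water temperature rise are assumed typical numbers, not design data.

```python
# Mass of cooling medium needed to absorb a given heat load.
# The CO2 figure uses an assumed latent heat near typical evaporator
# temperatures; water uses sensible heat at an assumed temperature rise.

heat_load_kw = 90.0               # example rack load

co2_latent_kj_per_kg = 200.0      # assumed latent heat of CO2, kJ/kg
water_cp_kj_per_kg_k = 4.19       # specific heat of water, kJ/(kg*K)
water_delta_t_k = 7.0             # assumed chilled-water temperature rise, K

co2_kg_per_s = heat_load_kw / co2_latent_kj_per_kg
water_kg_per_s = heat_load_kw / (water_cp_kj_per_kg_k * water_delta_t_k)

print(f"CO2 mass flow:   {co2_kg_per_s:.2f} kg/s")
print(f"Water mass flow: {water_kg_per_s:.2f} kg/s")
print(f"Water-to-CO2 mass ratio: {water_kg_per_s / co2_kg_per_s:.1f} : 1")
```

Under these assumptions the water loop must move roughly seven times the mass of the CO2 loop for the same load, which is where the smaller pipe and pump sizes come from.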

Figure 1: Rear-door heat exchangers attach to the back of the server cabinet. The goal of the RDHX is to cool the air from the servers back to room temperature so there is no additional cooling load that would ordinarily be handled by the computer-room air conditioning units.

Water-based cooling in data centers falls somewhere between the basic (although tried-and-true) air-cooled direct expansion (DX) systems and the more complex methods with high degrees of sophistication. Because water-based data center cooling systems have been in use in some form or another for more than 60 yr, there is a lot of analytical and historical data on how these systems perform and where their strengths and weaknesses lie. The most common water-based approaches today can be grouped into three primary classifications: near-coupled, close-coupled, and direct-coupled.

Near-coupled: Near-coupled systems include rear-door heat exchangers (RDHX), in which the cooling water is pumped to a large coil built into the rear door of the IT cabinet (see Figure 1). Depending on the design, airflow-assist fans may be integrated into the RDHX. This heat-removal design reduces the temperature of the exhaust air coming from the IT equipment (typically cabinet-mounted servers). The temperature reduction varies with parameters such as water temperature, water flow, and airflow, but the goal is to bring the exhaust air back as close to ambient as possible. For example, with an inlet air temperature of 75 F and a water flow of 1.3 gpm/ton, 66 F chilled water will neutralize 85% of the heat in the server cabinet; with 59 F chilled water, 100% of the heat will be neutralized and the exhaust air returned to room temperature. It is assumed that the cooling water for the RDHX is a secondary or tertiary loop running 2 F warmer than the chilled-water temperature. This solution has been in use for several years and tends to work best for high-density applications with uniform IT-cabinet-row distribution. Because the RDHX units are typically mounted on the back of the IT cabinets, they do not impact the floor space.
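A quick water-side check of an RDHX selection can be made with the standard heat-balance relationship BTU/hr = 500 x gpm x ΔT(F). The sketch below applies it at the 1.3 gpm/ton flow cited above; the coil load is an assumed value for illustration, not a manufacturer's selection.

```python
# Water-side check for a rear-door heat exchanger (RDHX).
# Standard heat balance: BTU/hr = 500 * gpm * dT(F).
# The coil load is an assumed value for illustration.

coil_load_kw = 30.0                        # assumed heat removed by the RDHX
coil_load_btuh = coil_load_kw * 3412.14    # kW to BTU/hr
coil_load_tons = coil_load_btuh / 12000.0  # BTU/hr to tons of refrigeration

gpm_per_ton = 1.3                          # flow rate cited in the example above
water_gpm = coil_load_tons * gpm_per_ton

delta_t_f = coil_load_btuh / (500.0 * water_gpm)   # implied water temperature rise

entering_water_f = 66.0                    # chilled-water temperature from the example
leaving_water_f = entering_water_f + delta_t_f

print(f"Coil load:  {coil_load_tons:.1f} tons, {water_gpm:.1f} gpm at {gpm_per_ton} gpm/ton")
print(f"Water rise: {delta_t_f:.1f} F ({entering_water_f:.0f} F in, ~{leaving_water_f:.0f} F out)")
```

Note that a flow of 1.3 gpm/ton implies a water temperature rise of roughly 18 F across the coil, which is workable only because the coil sees the hot server exhaust rather than room-temperature air.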

Figure 2: Another way to isolate the cooling load of the computers from the data center is to use enclosed, water-cooled cabinets. Essentially a fan-coil unit attached to the computer cabinet, its fans draw the hot air from the computers over a water coil.

Close-coupled: Close-coupled water-based cooling solutions include water-cooled IT cabinets that have coils and circulation fans built into the cabinet (see Figure 2). This arrangement allows for totally enclosed and cooled IT equipment in which the heat load is almost completely contained, with only a small amount (~5%) of the heat released into the data center. This system has limitations similar to those of the RDHX, but it is generally more feasible to neutralize the entire cooling load. The cabinet is larger than a standard IT cabinet; however, because the equipment is enclosed, it is possible to mount higher densities of IT equipment in the cabinet because the enclosure maintains a much more uniform temperature, essentially eliminating any chance of a high-temperature cutout event. This solution also has been in use for several years and tends to work best for high-density applications, especially when the equipment is located in an existing low-density data center.
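Because roughly 95% of the heat stays inside the cabinets, the residual load left for the room's air-side cooling is easy to estimate. The sketch below runs that arithmetic for an assumed cabinet count and per-cabinet load.

```python
# Residual room load from enclosed, water-cooled cabinets, using the
# ~5% heat-leakage figure cited above. Cabinet count and per-cabinet
# load are assumed for illustration.

cabinets = 20
load_per_cabinet_kw = 45.0      # assumed IT load per enclosed cabinet
leakage_fraction = 0.05         # ~5% of the heat escapes to the room

total_it_kw = cabinets * load_per_cabinet_kw
cabinet_coil_kw = total_it_kw * (1 - leakage_fraction)   # handled by cabinet coils
room_kw = total_it_kw * leakage_fraction                 # left for room air cooling

print(f"Total IT load:            {total_it_kw:.0f} kW")
print(f"Removed by cabinet coils: {cabinet_coil_kw:.0f} kW")
print(f"Residual room load:       {room_kw:.0f} kW")
```

In this assumed case, 900 kW of IT load leaves only about 45 kW for the room-level cooling system, which is why the approach works well when retrofitting dense racks into an existing low-density data center.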

Direct-coupled: One of the primary challenges when cooling a data center is controlling how effectively the cooling load can be neutralized. A data center that uses cold air supplied to the room via a raised floor or ductwork comes with inherent difficulties, such as uncontrolled bypass air, imbalanced air delivery, and re-entrainment of hot exhaust air into the intakes of the computer equipment, all of which make it harder to keep the IT equipment at allowable temperatures.

Most of these complications stem from proximity and physical containment; if the hot air escapes into the room before the cold air can mix with it and reduce its temperature, the hot air becomes a fugitive and the cold air becomes an inefficiency in the system. In air-cooled data centers, a highly effective method for reducing these difficulties is to use a partition system as part of an overall containment strategy that physically separates the hot air from the cold air, allowing for a fairly precise cooling solution. On a macro scale, it is possible to carefully control how much air goes into and out of the containment system and to predict general supply and return temperatures. What cannot be done in this type of system is to ensure that the required airflow and temperature across the internal components of each computer are being met. Because individual server fans vary their speed to control internal temperatures based on workload, it is possible to starve the servers of air, especially when the workload is small and the internal fans drop to minimum speed.
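The sensible-heat relationship behind this balancing act, CFM = 3.16 x watts / ΔT(F), shows how widely a rack's airflow demand can swing as server fans respond to workload. The loads and temperature rises in the sketch below are assumptions for illustration, not measured values.

```python
# Airflow a rack needs, from the sensible-heat relationship
# CFM = 3.16 * watts / dT(F). Loads and temperature rises are assumed
# for illustration; actual values come from the IT equipment itself.

def required_cfm(it_load_w: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to carry it_load_w watts at a delta_t_f air temperature rise."""
    return 3.16 * it_load_w / delta_t_f

# The same rack at a heavy and a light workload: as load drops, server
# fans slow down and the air temperature rise widens, so the airflow the
# containment system must deliver swings over a wide range.
for label, load_w, dt_f in [("busy rack", 20000, 20.0), ("idle rack", 4000, 30.0)]:
    print(f"{label}: {required_cfm(load_w, dt_f):,.0f} CFM at {dt_f:.0f} F rise")
```

With these assumed figures, the same rack can demand anywhere from a few hundred to a few thousand CFM, which is why a containment system tuned only at the macro level cannot guarantee airflow at the component level.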

