Driving data center PUE, efficiency

When developing data center energy-use estimations, engineers must account for all sources of energy use in the facility.

By Bill Kosik, PE, CEM, BEMP, LEED AP BD+C, HP Data Center Facilities Consulting June 9, 2015

Learning objectives

  • Understand how to measure energy efficiency in a data center.
  • Learn which systems affect power usage effectiveness (PUE).
  • Know how to determine data center reliability.

For the last decade, power usage effectiveness (PUE) has been the primary metric in judging how efficiently energy is used in powering a data center. PUE is a simple energy-use ratio where the total energy of the data center facility is the numerator, and the energy use of the information technology (IT) systems is the denominator. PUE values theoretically run from 1 to infinity. But in real-life operations, well-designed, operated, and maintained data centers typically have PUE values between 1.20 and 1.60. Extremely low energy-use data centers can have a PUE of 1.10. Keep in mind that PUE can never be less than 1.0.
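
To make the ratio concrete, the short calculation below (a minimal sketch in Python, with entirely hypothetical annual energy figures) shows how the facility total is assembled from the IT load plus cooling, electrical losses, and lighting, and how the resulting PUE lands in the typical range.

```python
# Minimal PUE calculation with assumed (illustrative) annual energy figures.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT energy."""
    if it_kwh <= 0:
        raise ValueError("IT energy must be positive")
    if total_facility_kwh < it_kwh:
        raise ValueError("Total facility energy cannot be less than IT energy")
    return total_facility_kwh / it_kwh

# Hypothetical annual figures (kWh): IT load plus cooling, electrical losses, lighting.
it_energy = 8_760_000           # 1 MW average IT load for a full year
cooling_energy = 2_100_000
electrical_losses = 600_000
lighting_and_misc = 150_000

total = it_energy + cooling_energy + electrical_losses + lighting_and_misc
print(f"PUE = {pue(total, it_energy):.2f}")   # about 1.33 for these assumed numbers
```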

Future flexibility and scalability will keep long-term ownership costs low. This is especially important because IT systems evolve on a lifecycle of 12 to 18 months. This, however, can lead to short-term over-provisioning of power and cooling systems until the IT systems are fully built out. And even at a fully built-out stage, the computers, storage, and networking equipment will experience hourly, daily, weekly, and monthly variations depending on the type of computing performed. This dual challenge of increasing power use over time plus ongoing fluctuations in power use can make the design and operation of these types of facilities difficult to optimize.

The concept behind calculating PUE is straightforward. Putting the concept into practice, however, requires a detailed approach that considers all elements affecting data center energy use. In addition, when conducting an energy-use simulation and analysis to determine PUE for a data center, it is important to include all available relevant information (at least what is known at the time of the study) in the simulation (see Figure 1). If specific input parameters are not known, industry standard values can be used, such as the minimum energy-efficiency ratings defined in ASHRAE 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings. Examples (not a complete list) include the following; a sketch showing one way to organize these inputs appears after the list:

1. Overall system design requirements: These requirements generally describe a mode of operation or sequence of events needed to minimize energy use while maintaining the prerequisite conditions for the IT equipment.

  • a. Type of economizer cycle
  • b. If water, describe the control sequence and parameters to be measured and controlled for successful execution of the sequence (maximum/minimum outdoor temperatures and humidity levels).
  • c. If air, describe the control sequence and parameters to be measured and controlled for successful execution of the sequence (maximum/minimum outdoor temperatures and humidity levels).

2. Indoor environmental conditions: Depending on the indoor temperature and humidity parameters, significant amounts of energy can be saved by increasing the supply air temperature and lowering the humidity level. Determining the data center environmental conditions is an important step in the process:

  • a. Supply air temperature
  • b. Return air temperature
  • c. Minimum and maximum moisture content (grains of water per pound of dry air, or grams per kilogram).

3. Power and efficiency parameters for systems and equipment

  • a. Air-handling unit fans
  • b. Compressors
  • c. Cooling system pumps
  • d. Heat-rejection system pumps
  • e. Heat-rejection fans
  • f. Lighting
  • g. Other miscellaneous electrical loads.

4. Efficiency of power-delivery systems

  • a. Incoming electricity transformers losses
  • b. Uninterruptible power supply (UPS) losses
  • c. Power distribution unit (PDU) losses
  • d. Wiring losses

5. IT load

  • a. What is the IT system operational load compared to design load?
  • b. Most power and cooling systems run less efficiently at partial load.
  • c. Most data centers never reach full power-use potential, so the facility will run at partial load virtually the entire life of the facility.

6. Building envelope

  • a. Increased/decreased internal moisture due to vapor migration
  • b. Heating of non-data center spaces (loading docks, exit doors, vestibule)

7. Climate

  • a. Analyze a full 8,760 hr (the number of hours in a year) using ASHRAE international weather data (IWEC2)
  • b. Must consider a full year of weather data, hour-by-hour, to see trends in energy use
  • c. Extreme weather data, n-year return period values of extreme dry-bulb temperature where n = 5, 10, 20, 50 years.

8. Reliability requirements

  • a. System efficiency drops with higher reliability (generally).
  • b. Cooling and power systems must be carefully designed to optimize reliability requirements with partial load performance.
  • c. If multiple modules are needed for reliability, the equipment can also be used as a way to keep energy use to a minimum during partial load.

9. Operating schedules

  • a. Data center facilities will have variable use for systems like lighting and miscellaneous power.
  • b. Based on actual use, the energy use of different systems will vary from facility to facility, especially in a lights-out facility.

10.  IT systems and equipment

  • a. Arguably one of the most important factors that control the energy-use outcome
  • b. IT systems: traditional air-cooled, water-cooled, rear-door heat exchanger, fan-powered chimney, high-temperature air/water
  • c. High-temperature air/water will often lead to a chiller-less cooling system, which uses only heat-rejection equipment to cool the IT equipment
  • d. Density (in watts per square foot/meter) of equipment will drive cooling and power solutions and ultimately energy use.
  • e. Efficiency and turn-down ratio of servers
  • f. Efficiency solutions such as virtualization and cloud.
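
One way to keep track of the inputs listed above is to collect them into a single structured object that the energy-use simulation reads. The sketch below is illustrative only; the field names and default values are assumptions for this example, not an industry schema, and unknown values would be replaced with code-minimum ratings (e.g., ASHRAE 90.1) before the study is run.

```python
# Illustrative container for PUE-simulation inputs; field names and defaults are
# assumptions for this sketch, not an industry standard schema.
from dataclasses import dataclass, field

@dataclass
class SimulationInputs:
    economizer_type: str = "air"            # "air", "water", or "none"
    supply_air_temp_c: float = 24.0         # data center supply air setpoint
    return_air_temp_c: float = 36.0
    min_dew_point_c: float = 5.0
    max_dew_point_c: float = 15.0
    fan_efficiency: float = 0.65            # combined fan/motor/drive efficiency
    chiller_cop: float = 5.5                # kW of cooling per kW of input power
    ups_efficiency_curve: dict = field(
        default_factory=lambda: {0.25: 0.91, 0.50: 0.94, 0.75: 0.95, 1.00: 0.95})
    design_it_load_kw: float = 1000.0
    expected_it_load_fraction: float = 0.6  # partial load typical of real operation
    envelope_leakage_cfm_per_ft2: float = 0.3
    weather_file: str = "IWEC2_station.epw" # hypothetical file name
    redundancy: str = "N+1"
    lights_out_operation: bool = True

inputs = SimulationInputs()
print(inputs.economizer_type, inputs.expected_it_load_fraction)
```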

Major cooling-system equipment types

Next to the IT systems themselves, the cooling system, along with the inefficiencies in the electrical distribution system, claims the most energy in a data center. While a PUE calculation assumes the energy use of the IT systems remains constant, the building-services engineering team has many opportunities to explore effective approaches to optimizing cooling-system energy use. Each design scheme will result in different annual energy use, but must also conform to other project requirements, such as reliability, first cost, and maintenance costs. Each system has strengths and weaknesses and must be analyzed in a logical way to ensure an objective outcome.

Data centers are often complex, with myriad systems and subsystems. Each of these systems has intrinsic operational characteristics that must be choreographed with other big-building systems:

  • Central cooling plants: In general, a central plant consists of primary equipment, such as chillers (air- or water-cooled), heat-rejection equipment, piping, pumps, heat exchangers, and water-treatment systems. Central plants are best-suited for large data centers and have the capability for future expansion.
  • Air-cooled versus water-cooled chillers: Depending on the climate, air-cooled chillers will use more energy annually than a comparably sized water-cooled chiller. To address this, manufacturers offer economizer modules built into the chiller that use cold outside air to extract heat from the chilled water without using compressors. Dry coolers or evaporative coolers can also be used to pre-cool the return water back to the chiller.
  • Direct expansion (DX) equipment: DX systems have the least amount of moving parts because the condenser and evaporator use air—not water—as the heat-transfer medium. This reduces the complexity, but it also can reduce the efficiency. A variation on this system is to water-cool the condenser, which improves the efficiency. Water-cooled computer-room air conditioning units fall into this category.
  • Evaporative cooling systems: Evaporative cooling uses the principle that when air is exposed to water spray, the dry-bulb temperature of the air will be reduced to a level close to the wet-bulb temperature of the air. The difference between the air’s dry bulb and wet bulb is known as the wet-bulb depression. In dry climates, evaporative cooling works well because the wet-bulb depression is large, which enables the evaporative process to lower the dry-bulb temperature significantly. Evaporative cooling can be used in conjunction with any of the cooling techniques outlined above.
  • Water economization: Water can be used for many purposes in cooling a data center. It can be chilled via a vapor-compression cycle and sent out to the terminal cooling equipment. It can also be cooled in an atmospheric cooling tower using the same principles of evaporation described above; or, if it is cold enough, it can be sent directly to the terminal cooling devices. The goal of a water-economization strategy is to use mechanical cooling as little as possible, and to rely on outdoor air conditions to cool the water sufficiently to generate the required supply air temperature. When the system is in economizer mode, only air-handling unit fans, chilled water pumps, and condenser water pumps will run. The energy required to run these pieces of equipment should be examined carefully to ensure the savings of using a water economizer will not be diminished by excessively high motor energy consumption.
  • Direct economization: Direct economization typically means the use of outside air directly, without the use of heat exchangers. Direct outside air economizer systems mix the outdoor air with the return air to maintain the required supply air temperature. When the outdoor air temperature falls between the supply air temperature and the return air temperature, partial economization is achievable, but supplemental mechanical cooling is necessary. Evaporative cooling can be used at this point to extend the ability to use outside air by reducing the dry-bulb temperature, especially in drier climates. When the supply air temperature can no longer be maintained, mechanical cooling will start up and cool the load. After the outdoor dry-bulb and moisture levels return to acceptable limits, the supplemental cooling equipment will stop and the outdoor air dampers will open to maintain the temperature. For many climates, it is possible to run direct air economization year-round with little or no supplemental cooling. There are climates where the outdoor dry-bulb temperature is suitable for economization, but the outdoor moisture level is too high. In this case, a control strategy must be in place to take advantage of the acceptable dry-bulb temperature without risking condensation or unintentionally incurring higher energy costs. (A simple mode-selection sketch follows this list.)
  • Indirect economization: Indirect economization is used when it is not advantageous to use air directly from the outdoors for economization. Indirect economization uses the same control principles as the direct outdoor air systems. In direct systems, the outdoor air is used to cool the return air by physically mixing the two air streams. When indirect economization is used, the outdoor air cools one side of a heat exchanger, which indirectly cools the return air on the other side with no contact between the two air streams. In indirect evaporative systems, water is sprayed on the portion of the heat exchanger through which the outdoor air passes. The evaporative effect lowers the temperature of the outdoor air and the heat-exchanger surface, thereby reducing the temperature of the return air on the other side. These systems are effective in a number of climates, even humid climates. Because an indirect heat exchanger is used, a fan—sometimes known as a scavenger fan—is required to draw the outside air across the heat exchanger. This fan motor power is not trivial and must be accounted for in estimating energy use.
  • Economization options: There are several different approaches and technologies available when designing an economization system. For indirect economizer designs, heat-exchanger technology varies widely:
    • It can consist of a rotary heat exchanger, also known as a heat wheel, which uses thermal mass to cool down the return air by using outdoor air.
    • Another approach is to use a cross-flow heat exchanger.
    • Heat pipe technology can also be incorporated in an indirect economization strategy.

Within these options, there are several sub-options that are driven by the specific application, which ultimately will define the design strategy for the entire cooling system.
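
As a simple illustration of how the economizer concepts above interact, the sketch below picks an operating mode for a direct air-side system with evaporative assist based on outdoor dry-bulb and wet-bulb temperatures. The setpoints, thresholds, and the 0.8 evaporative-media effectiveness are assumptions for the example, not design values.

```python
# Simplified economizer mode selection for a direct air-side system with
# evaporative assist. Setpoints and the 0.8 media effectiveness are assumptions.

def evap_leaving_temp(t_db: float, t_wb: float, effectiveness: float = 0.8) -> float:
    """Direct evaporative cooling: leaving dry-bulb approaches the wet-bulb
    by the media effectiveness times the wet-bulb depression."""
    return t_db - effectiveness * (t_db - t_wb)

def select_mode(t_db: float, t_wb: float,
                t_supply: float = 24.0, t_return: float = 36.0) -> str:
    """Pick an operating mode from outdoor dry-bulb/wet-bulb temperatures (deg C)."""
    if t_db <= t_supply:
        return "full air-side economizer (mix outdoor and return air)"
    if evap_leaving_temp(t_db, t_wb) <= t_supply:
        return "economizer with evaporative assist"
    if t_db < t_return:
        return "partial economizer plus mechanical cooling"
    return "mechanical cooling only"

for t_db, t_wb in [(18.0, 14.0), (28.0, 16.0), (32.0, 24.0), (40.0, 30.0)]:
    print(f"{t_db:>4.1f} C db / {t_wb:>4.1f} C wb -> {select_mode(t_db, t_wb)}")
```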

Electrical system efficiency

Electrical systems have components and equipment of various efficiency levels. Including these system losses in a PUE calculation is essential, because the losses are dissipated as heat and require even more energy from the cooling system to ensure the proper internal environmental conditions are met. Electrical-system energy consumption must include all the power losses, starting from the utility through the building transformers, switchgear, UPS, PDUs, and remote power panels, ultimately ending at the IT equipment. Some of these components have a linear response to the percent of total load they are designed to handle, while others exhibit a very nonlinear behavior, which is important to understand when estimating overall energy consumption in a data center with varying IT loads. Having multiple concurrently energized power-distribution paths can increase the availability (reliability) of IT operations. However, running multiple electrical systems at partial load can also decrease the overall system efficiency.
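
The nonlinear partial-load behavior can be approximated with an efficiency-versus-load curve. The sketch below interpolates over an assumed UPS curve (the points are illustrative, not manufacturer data) to show how losses per kilowatt of IT load grow as the load fraction drops.

```python
# Illustrative UPS partial-load behavior: efficiency is nonlinear in load
# fraction, so losses per kW of IT load grow as the load fraction drops.
# The curve points below are assumptions, not manufacturer data.

CURVE = [(0.10, 0.80), (0.25, 0.90), (0.50, 0.94), (0.75, 0.95), (1.00, 0.95)]

def ups_efficiency(load_fraction: float) -> float:
    """Linear interpolation over the assumed efficiency-vs-load curve."""
    pts = sorted(CURVE)
    lf = min(max(load_fraction, pts[0][0]), pts[-1][0])
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= lf <= x1:
            return y0 + (y1 - y0) * (lf - x0) / (x1 - x0)
    return pts[-1][1]

def ups_loss_kw(it_load_kw: float, ups_rating_kw: float) -> float:
    eff = ups_efficiency(it_load_kw / ups_rating_kw)
    return it_load_kw / eff - it_load_kw      # input power minus delivered power

for it_kw in (100, 250, 500, 750):
    print(f"{it_kw} kW IT on a 1,000-kW module -> {ups_loss_kw(it_kw, 1000):.1f} kW of losses")
```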

Electrical system impact on PUE

During preliminary analysis and product selection, it is not uncommon to look at electrical-system concepts in isolation from the other data center systems and equipment. At this stage, however, integration is key—especially integrating with the overall IT plan. Early in the design process, a timeline of the anticipated IT load growth must be developed to properly design the power systems from a modular growth perspective. If modeled properly, the partial-load efficiencies for the electrical system will determine the projected amount of energy used, as well as the amount dissipated as heat. The UPS, transformers, and wiring are just part of the PUE equation. The PUE is burdened with other electrical overhead items that are required for a fully functioning data center, such as lighting and power for administrative space and infrastructure areas, and miscellaneous power loads.

Electrical system impact on cooling systems

Mechanical engineers must include electrical losses dissipated as heat when sizing the cooling equipment and evaluating annual energy consumption, because losses from the electrical systems result in additional heat gains that require cooling (except for equipment located outdoors or in nonconditioned spaces). The efficiency of the cooling equipment will determine the amount of energy required to cool the electrical losses. It is essential to include cooling-system energy use from electrical losses in lifecycle studies for UPS and other electrical system components. This is where longer-term cost-of-ownership studies are valuable. Often, equipment with lower efficiency ratings will have a higher lifecycle cost due to the higher electrical losses and associated cooling energy required (see Figure 2). Bottom line: Inefficiencies in the electrical system have a double impact on energy use—the energy used for the losses, and the corresponding cooling energy required to cool the losses dissipated as heat.
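
The double impact is easy to quantify once a cooling-plant coefficient of performance (COP) is assumed. In the short sketch below, both the COP and the loss figure are hypothetical.

```python
# "Double impact" of electrical losses: the loss itself, plus the cooling
# energy needed to remove the loss as heat. The COP value is an assumption.

def double_impact_kw(electrical_loss_kw: float, cooling_cop: float = 4.0) -> float:
    """Total additional demand = loss + loss / COP (loss rejected as heat inside
    conditioned space; equipment in unconditioned space would be excluded)."""
    return electrical_loss_kw * (1.0 + 1.0 / cooling_cop)

loss_kw = 60.0   # hypothetical UPS + PDU + wiring losses
print(f"{loss_kw} kW of losses -> {double_impact_kw(loss_kw):.1f} kW total burden")
# 60 kW of losses with a COP of 4 costs 75 kW once cooling is included.
```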

Building envelope and energy use

Buildings leak air. Moisture will pass in and out of the envelope, depending on the integrity of the vapor barrier. This leakage and moisture migration will have a significant impact on indoor temperature and humidity, and must be accounted for in the design process. To address what role the building plays in data center environmental conditions, the following questions must be answered:

  • Does the amount of leakage across the building envelope correlate to indoor humidity levels and energy use?
  • How does the climate where the data center is located affect the indoor temperature and humidity levels? Are certain climates more favorable for using outside air economizer without using humidification to add moisture to the air during the times of the year when outdoor air is dry?
  • Will widening the humidity tolerances required by the computers actually produce worthwhile energy savings?

Building envelope effects

The building envelope is made up of the roof, exterior walls, floors and below-grade walls in contact with the earth, windows, and doors. Many data center facilities have few windows and doors, so the remaining elements of roof, walls, and floor are the primary elements for consideration. These elements have different parameters to be considered in the analysis: thermal resistance (insulation), thermal mass (heavy construction, such as concrete, versus lightweight steel), air tightness, and moisture permeability.

When a large data center is running at full capacity, the effects of the building envelope on energy use (as a percent of the total) are relatively minimal. However, because many data center facilities routinely operate at partial-load conditions, defining the requirements of the building envelope must be integral to the design process as the percentage of energy use attributable to the building envelope increases.

ASHRAE 90.1 includes specific information on different building envelope alternatives that can be used to meet the minimum energy-performance requirements. In addition, the ASHRAE publication Advanced Energy Design Guide for Small Office Buildings goes into great detail on the most effective strategies for building-envelope design by climatic zone. Another good source of engineering data is the Chartered Institution of Building Services Engineers (CIBSE) Guide A: Environmental Design 2015.

Building envelope leakage

Building leakage, in the forms of outside air infiltration and moisture migration, will affect the internal temperature and relative humidity. Based on a number of studies from the National Institute of Standards and Technology (NIST), CIBSE, and ASHRAE, building envelope leakage is often significantly underestimated. For example:

  • CIBSE TM-23: Testing Buildings for Air Leakage and the Air Tightness Testing and Measurement Association (ATTMA) TS1: Measuring Air Permeability of Building Envelopes recommend building air-leakage rates from 0.11 to 0.33 cfm/sq ft.
  • Data from ASHRAE Handbook—Fundamentals, Chapter 27, “Ventilation and Air Infiltration” show rates of 0.1, 0.3, and 0.6 cfm/sq ft for tight, average, and leaky building envelopes, respectively.
  • A NIST report of more than 300 existing U.S., Canadian, and UK buildings showed leakage rates ranging from 0.47 to 2.7 cfm/sq ft of above-grade building envelope area.
  • ASHRAE’s Humidity Control Design Guide for Commercial and Institutional Buildings indicates typical commercial buildings have leakage rates of 0.33 to 2 air changes per hour, and buildings constructed in the 1980s and 1990s are not significantly tighter than those constructed in the 1950s, 1960s, and 1970s.

To what extent should the design engineer be concerned about building leakage? It is possible to develop profiles of indoor relative humidity and air change rates by using hourly simulation of a data center facility and varying the parameter of envelope leakage.
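
For a first-order sense of what those published leakage rates mean for a specific building, the sketch below converts a rate in cfm per square foot of envelope into air changes per hour; the envelope area and room volume are hypothetical.

```python
# Converting an envelope leakage rate (cfm per sq ft of above-grade envelope)
# into air changes per hour for a data hall. Geometry below is hypothetical.

def leakage_to_ach(leakage_cfm_per_ft2: float, envelope_area_ft2: float,
                   volume_ft3: float) -> float:
    """ACH = infiltration airflow (cfm) * 60 min/hr / room volume (cu ft)."""
    infiltration_cfm = leakage_cfm_per_ft2 * envelope_area_ft2
    return infiltration_cfm * 60.0 / volume_ft3

envelope_area = 40_000.0   # sq ft of above-grade walls and roof (assumed)
volume = 400_000.0         # cu ft of enclosed space (assumed)

for rate in (0.1, 0.3, 0.6, 1.0):        # tight to leaky, per the ranges above
    print(f"{rate:.1f} cfm/sq ft -> {leakage_to_ach(rate, envelope_area, volume):.2f} ACH")
```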

Using building-performance simulation for estimating energy use

Typical analysis techniques look at peak demands or steady-state conditions that are just representative snapshots of data center performance. These techniques, while very important for certain aspects of data center design such as equipment sizing, tell the engineer nothing about the dynamics of indoor temperature and humidity—some of the most crucial elements of successful data center operation. An hourly (and sub-hourly) building energy-use simulation tool, however, provides the engineer with rich detail that can inform solutions to optimize energy use. For example, using building-performance simulation techniques for data center facilities yields marked differences in indoor relative humidity and air-change rates when comparing different building-envelope leakage rates. Based on project analysis and further research, the following conclusions can be drawn (a brief correlation check is sketched after the list):

  • There is a high correlation between leakage rates and fluctuations in indoor relative humidity. The greater the leakage rates, the greater the fluctuations.
  • There is a high correlation between leakage rates and indoor relative humidity in the winter months. The greater the leakage rates, the lower the indoor relative humidity.
  • There is low correlation between leakage rates and indoor relative humidity in the summer months. The indoor relative humidity levels remain relatively unchanged even at greater leakage rates.
  • There is a high correlation between building leakage rates and air-change rates. The greater the leakage rates, the greater the number of air changes due to infiltration.
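
The correlations above can be checked directly from hourly simulation output. The sketch below computes a Pearson correlation coefficient with NumPy; the leakage-rate cases and winter relative-humidity results are synthetic placeholders standing in for real simulation results.

```python
# Checking one of the correlations above from simulation output: leakage rate
# vs. winter indoor relative humidity. The arrays here are synthetic
# placeholders; real values would come from the hourly simulation tool.
import numpy as np

leakage_rates = np.array([0.1, 0.3, 0.6, 1.0, 1.5])          # cfm/sq ft cases
winter_mean_rh = np.array([42.0, 37.0, 31.0, 26.0, 20.0])    # % RH, simulated

r = np.corrcoef(leakage_rates, winter_mean_rh)[0, 1]
print(f"Pearson r = {r:.2f}")   # strongly negative: leakier envelope, drier winter air
```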

Climate, weather, and psychrometric analyses

Climate and weather data are the foundation of all the analyses used to determine data center facility energy use, PUE, economizer strategy, and other energy/climate-related investigations. The data used consist of 8,760 hours (the number of hours in a year) of dry-bulb, dew-point, and wet-bulb temperatures and relative humidity.

When performing statistical analysis as a part of the energy-use study, it is important to understand how many hours per year fall into the different temperature bins. Data visualization techniques are used along with the ASHRAE temperature boundaries: analyzing the hourly outdoor temperature data, totaling the hours, and assigning each hour to a temperature zone on the graph shows where the predominant number of hours falls (a simple binning sketch follows the list below). Along with these analysis techniques, it is important to understand the following qualifications on how to use the weather data:

  • The intended use of the hourly weather data is for building energy simulations. Other usages may be acceptable, but deriving designs for extreme design conditions requires caution.
  • Because the typical months are selected based on their similarity to average long-term conditions, there is a significant possibility that months containing extreme conditions would have been excluded.
  • Comparisons of design temperatures from “typical year” weather files to those shown in ASHRAE Handbook—Fundamentals have shown good agreement at the lower design criteria, i.e., 1%, 2% for cooling, and 99% for heating, but not so at the 0.4% or 99.6% design criteria.
  • ASHRAE Handbook—Fundamentals should be used for determining the appropriate design condition, especially for sizing cooling equipment.
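
The binning exercise described above can be automated in a few lines. In the sketch below, the hourly dry-bulb series is a synthetic stand-in for an IWEC2 file, and the bin edges are illustrative rather than the actual ASHRAE class boundaries.

```python
# Binning 8,760 hourly dry-bulb temperatures to see where the hours fall.
# The temperatures below are a synthetic stand-in for an IWEC2 weather file,
# and the bin edges are illustrative, not the ASHRAE class boundaries.
import math
from collections import Counter

dry_bulb = [12 + 14 * math.sin(2 * math.pi * h / 8760)
            + 6 * math.sin(2 * math.pi * (h % 24) / 24)
            for h in range(8760)]                 # rough annual + daily swing, deg C

bins = [(-50, 0), (0, 10), (10, 20), (20, 27), (27, 35), (35, 60)]   # deg C (assumed)

counts = Counter()
for t in dry_bulb:
    for lo, hi in bins:
        if lo <= t < hi:
            counts[(lo, hi)] += 1
            break

for lo, hi in bins:
    print(f"{lo:>4} to {hi:>3} C : {counts[(lo, hi)]:>5} hr")
```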

Climate data

The raw data used in climate analysis is contained in an archive of ASHRAE International Weather Files for Energy Calculations 2.0 (IWEC2) weather-data files, reported by stations in participating nations and archived by the U.S. National Climatic Data Center (part of the National Oceanic and Atmospheric Administration) under a World Meteorological Organization agreement. For the selected location, the database contains weather observations averaging four per day of wind speed and direction, sky cover, visibility, ceiling height, dry-bulb temperature, dew-point temperature, atmospheric pressure, liquid precipitation, and present weather, for a record of at least 12 years and up to 25 years.

Psychrometrics

Psychrometrics uses thermodynamic properties to analyze conditions and processes involving moist air. From measured data, other parameters used in thermodynamic analysis are calculated, such as the wet-bulb temperature. The following is an overview of the key thermophysical properties necessary to perform an energy-use study (a short calculation sketch follows the list):

  • Dry-bulb temperature is that of an air sample as determined by an ordinary thermometer, the thermometer’s bulb being dry.
  • Wet-bulb temperature, in practice, is the reading of a thermometer whose sensing bulb is covered with a wet cloth, with its moisture evaporating into a rapid stream of the sample air.
  • Dew-point temperature is the temperature at which a moist air sample at the same pressure would reach water vapor saturation.
  • Relative humidity is the ratio of the mole fraction of water vapor to the mole fraction of saturated moist air at the same temperature and pressure.
  • Humidity ratio (also known as moisture content, mixing ratio, or specific humidity) is the proportion of mass of water vapor per unit mass of dry air at the given conditions (dry-bulb temperature, wet-bulb temperature, dew-point temperature, relative humidity, etc.).
  • Specific enthalpy, also called heat content per unit mass, is the sum of the internal (heat) energy of the moist air in question, including the heat of the air and water vapor within.
  • Specific volume, also called inverse density, is the volume per unit mass of the air sample.
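
Several of these properties can be computed directly from dry-bulb temperature and relative humidity. The sketch below uses the Magnus approximation for saturation vapor pressure (a commonly published approximation, adequate for illustration) to estimate humidity ratio and dew point.

```python
# Humidity ratio and dew point from dry-bulb temperature and relative humidity,
# using the Magnus approximation for saturation vapor pressure. The coefficients
# are commonly published approximations; accuracy is adequate for a sketch.
import math

def saturation_vp_kpa(t_c: float) -> float:
    """Saturation water-vapor pressure (kPa), Magnus form."""
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def humidity_ratio(t_c: float, rh_percent: float, pressure_kpa: float = 101.325) -> float:
    """kg of water vapor per kg of dry air."""
    pw = (rh_percent / 100.0) * saturation_vp_kpa(t_c)
    return 0.621945 * pw / (pressure_kpa - pw)

def dew_point_c(t_c: float, rh_percent: float) -> float:
    """Invert the Magnus formula to find the dew-point temperature."""
    gamma = math.log(rh_percent / 100.0) + 17.62 * t_c / (243.12 + t_c)
    return 243.12 * gamma / (17.62 - gamma)

t, rh = 24.0, 45.0   # e.g., a data center supply air condition
print(f"Humidity ratio: {humidity_ratio(t, rh) * 1000:.1f} g/kg dry air")
print(f"Dew point:      {dew_point_c(t, rh):.1f} C")
```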

Reliability considerations

Most reliability strategies revolve around the use of multiple power and cooling modules. For example, the systems can be arranged in an N+1, N+2, 2N, or 2(N+1) configuration. The basic module size (N) and the additional modules (+1, +2, etc.) are configured to pick up part of the load (or even the entire load) in case of module failure, or during a scheduled maintenance event. When all of the UPS modules, air-handling units, and other cooling and electrical equipment are pieced together to create cohesive power-and-cooling infrastructure designed to meet certain reliability and availability requirements, then efficiency values at the various loading percentages should be developed for the entire integrated system (see Table 1). When all of these components are analyzed in different system topologies, loss curves can be generated so the efficiency levels can be compared to the reliability of the system, assisting in the decision-making process.

When we use the language of reliability, the terminology is important. For example, “reliability” is the probability that a system or piece of equipment will operate properly for a specified period of time under design operating conditions without failure. Also, “availability” is the long-term average fraction of time that a component or system is in service and is satisfactorily performing its intended function. These are just two of the many metrics that are calculated by running reliability analyses.

One of the general conclusions often drawn from reliability studies is that data center facilities with large IT loads have a higher chance of component failure than data centers with small IT loads. Somewhat intuitively, the more parts and pieces in the power-and-cooling infrastructure, the higher the likelihood of a component failure. System topology will also drive reliability, as found when comparing electrical systems with N, N+2, and 2(N+1) configurations: these systems have probabilities of failure (over a 5-year period) that range from a high of 83% (N) to a low of 4% [2(N+1)].

When analyzing the energy performance of data centers that use this module design, it becomes evident that at partial-load conditions, the higher reliability designs will exhibit lower overall efficiencies. This is certainly true for UPS and PDU equipment and others that have low efficiency at low-percent loading.
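
A rough way to compare redundancy configurations is a k-of-n model with an assumed, identical per-module reliability, as sketched below. Real reliability studies also account for repair times, common-cause failures, and maintenance, so the numbers here are illustrative only.

```python
# Rough redundancy comparison: probability that at least N of the installed
# identical, independent modules survive the study period, given an assumed
# per-module reliability. Real studies also model repair, common-cause
# failures, and maintenance outages.
from math import comb

def system_reliability(n_required: int, n_installed: int, module_reliability: float) -> float:
    """P(at least n_required of n_installed modules are operating)."""
    return sum(comb(n_installed, k)
               * module_reliability**k
               * (1 - module_reliability)**(n_installed - k)
               for k in range(n_required, n_installed + 1))

p = 0.97   # assumed per-module reliability over the study period
n = 4      # modules needed to carry the full load
for label, installed in [("N", 4), ("N+1", 5), ("N+2", 6),
                         ("2N", 8)]:   # 2N approximated here as 4-of-8
    r = system_reliability(n, installed, p)
    print(f"{label:>4}: {installed} installed, reliability {r:.4f}, "
          f"failure probability {1 - r:.2%}")
```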

Because PUE comprises all energy use in the data center facility, the non-data center areas can be large contributors to the total energy consumption of the facility. While it is not advisable to underestimate the energy consumption of non-data center areas, it is also not advisable to overestimate it. Like most areas in commercial buildings, occupancy and lighting change over the course of days, weeks, and months, and these changes must be accounted for when estimating energy use. When performing energy estimations, develop schedules that turn lights and miscellaneous electrical loads on and off, or assign a percentage of total load to the variable. It is best to ascertain these schedules directly from the owner. If unavailable, industry guidelines and standards can be used.

Certainly, no two data centers are exactly the same, but developing nomenclature and an approach to assigning operating schedules to different rooms within the data center facility will be of great assistance when energy-use calculations are started (a simple schedule sketch follows the list):

  • Data center: primary room(s) housing computer, networking, and storage gear; raised floor area or data hall
  • Data center lighting: lighting for data center(s) as defined above
  • Secondary lighting: lighting for all non-data center rooms, such as UPS, switchgear, battery, etc.; also includes appropriate administrative areas and corridors
  • Miscellaneous power: non-data center power for plug loads and systems such as emergency management services, building management systems, fire alarm, security, fire-suppression system, etc.
  • Secondary HVAC: cooling and ventilation for non-data center spaces including UPS rooms. It is assumed that the data center spaces have a different HVAC system than the rest of the building.
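
Once the room types are defined, schedules can be applied as hourly fraction-of-load profiles. In the sketch below, the profiles and connected loads are assumptions; actual schedules should come from the owner or, failing that, industry guidelines.

```python
# Applying operating schedules to non-data-center loads. The hourly fractions
# and connected loads below are assumptions; actual schedules should come from
# the owner or, failing that, from industry guidelines and standards.

def annual_kwh(connected_kw: float, weekday_profile: list, weekend_profile: list) -> float:
    """Sum hourly (fraction x connected load) over 52 weeks."""
    week_kwh = connected_kw * (5 * sum(weekday_profile) + 2 * sum(weekend_profile))
    return week_kwh * 52

# 24-value fraction-of-load profiles (midnight to 11 p.m.)
lights_out_dc = [0.05] * 24                                       # data center lighting, lights-out facility
admin_weekday = [0.1] * 7 + [0.9] * 10 + [0.3] * 3 + [0.1] * 4    # secondary lighting and plug loads
admin_weekend = [0.1] * 24

print(f"Data center lighting: {annual_kwh(20.0, lights_out_dc, lights_out_dc):,.0f} kWh/yr")
print(f"Secondary lighting/misc: {annual_kwh(35.0, admin_weekday, admin_weekend):,.0f} kWh/yr")
```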

The relationship between the IT systems, equipment, and the cooling system is an important one. The computers rely heavily on the cooling system to provide adequate air quantity and temperature. Without the proper temperature, the servers and other IT equipment might experience slower processing speed or even a server-initiated shutdown to prevent damage to internal components. There are a number of ways to optimize air flow and temperature.

Air-management and -containment strategies

Proper airflow management creates cascading efficiency through many elements in the data center. If done correctly, it will significantly reduce problems related to re-entrainment of hot air into the cold aisle, which is often the culprit of hot spots and thermal overload. Air containment will also create a microenvironment with uniform temperature gradients, enabling predictable conditions at the air inlets to the servers. These conditions ultimately allow for the use of increased server-cooling air temperatures, which reduces the energy needed to cool the air. It also allows for an expanded window of operation for economizer use.

Traditionally, effective airflow management is accomplished using a number of approaches: hot-aisle/cold-aisle organization of the server cabinets, aligning the exhaust ports of other types of computers (such as mainframes) to avoid mixing hot exhaust air and cold supply air, and maintaining proper pressure in the raised-floor supply air plenum, among others. But arguably the most successful air-management technique is the use of physical barriers to contain the air and efficiently direct it to where it will be most effective. There are several approaches that give the end user a choice of options that meet the project requirements:

Hot-aisle containment: The tried-and-true hot-aisle/cold-aisle arrangement used in laying out the IT cabinets was primarily developed to compartmentalize the hot and cold air. Certainly, it provided benefits compared to layouts where IT equipment discharged hot air right into the air inlet of adjacent equipment. Unfortunately, this circumstance still exists in many data centers with legacy equipment. Hot-aisle containment takes the hot-aisle/cold-aisle strategy and builds on it substantially. The air in the hot aisle is contained using a physical barrier, e.g., a curtain system mounted at the ceiling level and terminating at the top of the IT cabinets. Other, more expensive techniques use solid walls and doors that create a hot chamber that completely contains the hot air; this approach is generally more applicable for new installations. The hot air is discharged into the ceiling plenum from the contained hot aisle. Because the hot air is now concentrated into a small space, worker safety must be considered—the temperatures can get quite high.

Cold-aisle containment: While cold-aisle containment may appear to be simply the reverse of hot-aisle containment, it tends to be much more complicated in its operation. The cold-aisle containment system can also be constructed from a curtain system or solid walls and doors. The difference between this and hot-aisle containment comes from the ability to manage airflow to the computers in a more granular way. When constructed out of solid components, the room can act as a pressurization chamber that maintains the proper amount of air required by the servers by monitoring and adjusting the differential pressure. The air-handling units serving the data center are instructed to increase or decrease air volume to keep the pressure in the cold aisle at a preset level. As the server fans speed up, more air is delivered; when they slow down, less is delivered. This type of containment has several benefits beyond the traditional airflow management mentioned above.
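
A minimal sketch of the pressure-based control described above: a proportional loop that trims the air-handling unit airflow to hold a small positive cold-aisle differential pressure. The setpoint, gain, and airflow limits are assumptions.

```python
# Simple proportional control of cold-aisle differential pressure: the AHU
# airflow is trimmed up or down to hold a small positive pressure so the
# servers always get the air they ask for. Gains and setpoints are assumptions.

def next_airflow(current_cfm: float, dp_measured_pa: float,
                 dp_setpoint_pa: float = 5.0, gain_cfm_per_pa: float = 2000.0,
                 min_cfm: float = 20000.0, max_cfm: float = 120000.0) -> float:
    """If the aisle pressure sags (server fans pulling more air), raise the
    supply airflow; if it rises, back the fans off."""
    error = dp_setpoint_pa - dp_measured_pa
    return min(max(current_cfm + gain_cfm_per_pa * error, min_cfm), max_cfm)

airflow = 60000.0
for dp in (5.0, 3.5, 2.0, 4.0, 6.5):       # simulated pressure readings (Pa)
    airflow = next_airflow(airflow, dp)
    print(f"dP = {dp:>3.1f} Pa -> supply airflow {airflow:,.0f} cfm")
```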

Self-contained, in-row cooling: To tackle air-management problems on an individual level, self-contained, in-row cooling units are a good solution. These come in many varieties, such as chilled-water-cooled, air-cooled DX, low-pressure pumped refrigerant, and even carbon-dioxide-cooled. These are best applied when there is a small grouping of high-density, high-heat-generating servers that are creating difficulties for the balance of the data center. This same approach can be applied to rear-door heat exchangers that essentially cool down the exhaust air from the servers to room temperature.

Water-cooled computers: Not exactly a containment strategy, water-cooled computers contain the heat in a water loop that removes heat internally from the computers. Once the staple cooling approach for the large mainframes of data centers of yore, water cooling continues to be used in sectors such as academia and research that rely on high-performance computing. The water cooling keeps the airflow through the computer to a minimum (the components that are not water-cooled still need airflow for heat dissipation). Typically, a water-cooled server will reject 10% to 30% of the total cabinet capacity to the air—not a trivial number when the IT cabinet houses 50 to 80 kW of computer equipment. Some water-cooled computers reject 100% of the heat to the water. Water cooling also allows for uniform cabinet spacing without creating hot spots. Certainly, it is not a mainstream tactic for enhancing airflow management, but it is important to be aware of its capabilities for future applicability.

What’s next?

Looking at the types of computer technology being developed for release within the next decade, one thing is certain: The dividing line between a facility’s power and cooling systems and the computer hardware is blurring. Computer hardware will have a much tighter integration with operating protocols and administration. Computing capability will take into account the readiness and efficiency of the power and cooling systems, with the autonomy to make workload decisions based on geography, historical demand, data-transmission speeds, and climate. These are the types of strategies that, if executed properly, can significantly reduce overall data center energy use and drive PUE far below today’s standards.


About the author

Bill Kosik is a distinguished technologist at HP Data Center Facilities Consulting. He is the leader of “Moving toward Sustainability,” which focuses on the research, development, and implementation of energy-efficient and environmentally responsible design strategies for data centers. He is a member of the Consulting-Specifying Engineer editorial advisory board.