Data Center Power Usage Effectiveness (PUE): Understanding the Contributing Factors

By William J. Kosik, PE, LEED AP, HP Critical Facilities Services, Chicago December 1, 2008

If you are interested in data center design or performance, you’ve undoubtedly been seeing sales sheets, white papers, and news reports of products and techniques for increasing data center energy efficiency.

The metrics most often quoted for data center energy efficiency are Power Usage Effectiveness (PUE) and its reciprocal, Data Center Infrastructure Efficiency (DCIE). PUE and DCIE compare the data center facility’s overall power consumption to the power consumed by the information and communication technology (ICT) equipment. These metrics, which are primarily facilities-based, will probably serve as the framework for a future standard; however, they are still being developed. Whether or not the metrics are being used prematurely depends on how they are being used (i.e., for self-promotion or for encouraging discussion) and on whether the testing procedures or simulation algorithms are disclosed.
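As a simple illustration of how the two metrics relate, the following sketch computes PUE and DCIE from annual energy totals. The kWh figures and function names are hypothetical and are not part of any Green Grid specification; they only show the arithmetic.

    def pue(total_facility_kwh: float, ict_kwh: float) -> float:
        """Power Usage Effectiveness: total facility energy divided by ICT energy."""
        return total_facility_kwh / ict_kwh

    def dcie(total_facility_kwh: float, ict_kwh: float) -> float:
        """Data Center Infrastructure Efficiency: the reciprocal of PUE, as a percentage."""
        return ict_kwh / total_facility_kwh * 100

    # Hypothetical annual totals (kWh) for a small data center
    ict_energy = 4_380_000       # ICT (servers, storage, networking) energy
    facility_energy = 7_446_000  # ICT plus HVAC, UPS/PDU losses, lighting, etc.

    print(f"PUE  = {pue(facility_energy, ict_energy):.2f}")   # 1.70
    print(f"DCIE = {dcie(facility_energy, ict_energy):.0f}%") # 59%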

Until a formal, recognized standard with a rigorous, engineering-based process to judge data center energy efficiency is developed, presented for public review, and formally adopted by a widespread consensus, published efficiency ratings will continue to be based on the interpretation of those who report the figures.

The good news is that there already exist some established energy standards, such as ANSI/ASHRAE/IESNA Standard 90.1-2007, Energy Standard for Buildings Except Low-Rise Residential Buildings, for other areas of the building that impact the data center’s efficiency (see Figure 1). This article examines these interdependencies, enabling designers, owners, and facility decision makers to ask tough questions when faced with new construction and renovation projects for data centers.

BUILDING ENVELOPE

Relative to the energy required to cool and power ICT equipment, the energy impact from the building envelope is small. However, the importance of code compliance and the effects of moisture migration cannot be overlooked.

For data centers, the integrity of the building’s vapor barrier is extremely important, as it safeguards against air leakage caused by the forces of wind and differential air pressure. It also minimizes migration of moisture driven by differential vapor pressure. Most data center cooling equipment is designed for sensible cooling only (no moisture removal from the air). Higher than expected moisture levels in the data center will result in greater energy consumption and possible operational problems caused by excessive condensate forming on the cooling coils in the air handling equipment.

HVAC, LIGHTING, AND POWER SYSTEMS

Standard 90.1 addresses, in great detail, the energy performance of HVAC and lighting systems, including control strategies, economizer options, and climate-specific topics. However, it provides little guidance on power distribution systems as they are applied to data centers. The lack of developed standards covering UPS efficiency and the efficiency of the overall power delivery chain (from incoming utility power down to the individual piece of ICT equipment) is a major gap that needs to be filled.

HVAC is the biggest non-ICT energy consumer in a data center facility. ASHRAE Standard 90.1 presents minimum energy performance requirements for individual components such as chillers, direct-expansion (DX) systems, pumps, fans, motors, and heat-rejection equipment. To comply with the standard, equipment must meet the specified energy-use metrics.

To determine whole-building energy performance, 90.1 sets forth a procedure that prescriptively defines how a given building’s energy performance (the “proposed” building) compares to the calculated theoretical energy performance (the “budget” building). This same method—with some augmentation to address data center-specific design and operations issues—can and should be used to determine the budget PUE and DCIE to benchmark data center energy use. (Incidentally, this method is used for the Energy and Atmosphere credit category in the U.S. Green Building Council LEED rating system, so as more data centers look to become LEED certified, this process needs to be used anyway.)
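The sketch below shows, under stated assumptions, how the budget-versus-proposed framing could be extended to a budget PUE. The component names and kWh values are hypothetical, and 90.1 itself does not define a budget PUE; the code only illustrates the comparison structure.

    # Hypothetical extension of the 90.1 budget/proposed comparison to PUE.
    def facility_pue(ict_kwh: float, overhead_kwh: dict) -> float:
        """PUE = (ICT energy + all non-ICT overhead energy) / ICT energy."""
        return (ict_kwh + sum(overhead_kwh.values())) / ict_kwh

    ict_kwh = 4_380_000  # same ICT load assumed in both models

    budget_overhead = {"hvac": 2_600_000, "ups_pdu": 520_000, "lighting": 90_000}
    proposed_overhead = {"hvac": 1_900_000, "ups_pdu": 430_000, "lighting": 60_000}

    budget_pue = facility_pue(ict_kwh, budget_overhead)
    proposed_pue = facility_pue(ict_kwh, proposed_overhead)
    improvement = (budget_pue - proposed_pue) / budget_pue * 100

    print(f"Budget PUE:   {budget_pue:.2f}")    # 1.73
    print(f"Proposed PUE: {proposed_pue:.2f}")  # 1.55
    print(f"Improvement over budget: {improvement:.1f}%")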

BUILDING USE AND CONTROL STRATEGIES

This is an area where existing energy standards for buildings need to be adapted to accommodate the parameters that are unique to data centers. Commercial office buildings and energy-intensive facilities, such as laboratories and hospitals, do not have the same characteristics as data centers, so there are no available templates to use as a starting point:

  • The magnitude of the process load (the ICT equipment) compared to the load attributable to occupants, lighting, and building envelope heat transfer is far greater in data centers than in any other commercial building type.
  • Because data centers and other technology-intensive facilities are often populated with technology equipment gradually, energy usage must be examined at both full- and part-load conditions. The estimated energy use of both the 90.1 baseline and the proposed data center design should be modeled according to the schedule of technology equipment implementation. The energy modeling must also account for the level of reliability and equipment redundancy at both part-load and full build-out conditions. (If dual paths of critical power and multiple pieces of cooling equipment are used to achieve concurrent maintainability and fault tolerance, the same strategy must be carried through as the critical load scales up.)

The concept behind demonstrating energy use at part-load conditions is to raise awareness during system design and while specifying major power distribution and cooling equipment and their control sequences. This part-load performance must be documented using the energy simulation procedures described in 90.1 (a simplified sketch of this part-load evaluation appears after this list).

  • Energy codes for commercial buildings assume that the primary function of the building is for human occupancy. Therefore, all of the energy-efficiency ratings in existing building energy performance standards are based on the ability to maintain an indoor environment that is comfortable and safe for humans. The indoor environmental conditions required for ICT equipment are far different than those required for human comfort and safety.

The baseline building should be designed to maintain the maximum recommended ASHRAE Class 1 environmental requirements of 78 F at the inlet to the technology equipment, as described in ASHRAE’s Thermal Guidelines for Data Processing Environments. Also, per the Guidelines, the supply air dew point shall stay between the minimum and maximum dew point temperatures corresponding to 78 F and 40% RH (51.7 F) and 78 F and 55% RH (60.5 F), respectively (a quick psychrometric check of these values appears after this list). The baseline system shall use the same temperature rise across the technology equipment as the proposed design to maintain the same air quantities, fan motor power, and sensible heat gain.

  • Most energy-efficiency strategies for commercial buildings rely on an economizer cycle to reduce the power required for cooling by using ambient conditions either directly (as with an outdoor air economizer) or indirectly (as with a water-side economizer or heat reclaim). Using outside air for cooling in data centers is a viable concept, but researchers are still in the early phases of studying the long-term effects of outdoor air pollutants and temperature/moisture swings on ICT equipment.
  • Reliability and operational continuity are still the cornerstones for judging the success of a data center. These requirements can result in multiple paths of active power and cooling and central plant equipment running at low loads. Consider a data center that is brought online and populated with servers, storage devices, and networking gear over two to three years before the facility is running at full load—the facility would have multiple power and cooling distribution paths and central plant equipment (chillers, pumps, UPS) running at 10% to 20% of capacity for several months. In addition to inefficient power use, this type of scenario can cause operational problems if not addressed during the design phase.
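As referenced above, the following sketch shows one way the part-load behavior described in this list could be modeled across a build-out schedule. The UPS efficiency curve, overhead factors, and load schedule are hypothetical assumptions of mine, not values from 90.1 or from manufacturer data; the point is only that PUE degrades at low load when redundant paths are in place.

    # Illustrative part-load sketch: how PUE might be tracked as the ICT load scales up
    # during build-out. All numbers below are assumed for illustration only.

    def ups_efficiency(load_fraction: float) -> float:
        """Hypothetical double-conversion UPS efficiency curve: poor at low load."""
        if load_fraction <= 0:
            return 0.0
        return 0.94 - 0.20 * (1 - load_fraction) ** 3   # ~0.74 near 0% load, 0.94 at 100%

    DESIGN_ICT_KW = 1000.0   # critical load at full build-out (assumed)
    HVAC_OVERHEAD = 0.45     # assumed kW of HVAC per kW of heat rejected (climate/system dependent)
    FIXED_KW = 60.0          # lighting, controls, generator heaters, etc. (assumed)

    for load_fraction in (0.1, 0.25, 0.5, 0.75, 1.0):
        ict_kw = DESIGN_ICT_KW * load_fraction
        # With dual (2N) UPS paths, each module sees roughly half of the critical load.
        ups_loss_kw = ict_kw / ups_efficiency(load_fraction / 2) - ict_kw
        hvac_kw = (ict_kw + ups_loss_kw) * HVAC_OVERHEAD
        total_kw = ict_kw + ups_loss_kw + hvac_kw + FIXED_KW
        print(f"{load_fraction:>4.0%} build-out: PUE = {total_kw / ict_kw:.2f}")

Under these assumptions the computed PUE falls from roughly 2.5 at 10% build-out to roughly 1.6 at full load, which is exactly the kind of part-load penalty the baseline and proposed models both need to capture.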
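And as a quick check of the dew-point figures cited for the Class 1 conditions, the sketch below uses the Magnus approximation, a generic psychrometric shortcut with commonly used coefficients rather than a formula from the ASHRAE guideline, to reproduce the 51.7 F and 60.5 F values.

    import math

    def dew_point_f(dry_bulb_f: float, rh_percent: float) -> float:
        """Dew point via the Magnus approximation (a = 17.62, b = 243.12 C).
        A generic psychrometric shortcut, not an ASHRAE-published formula."""
        t_c = (dry_bulb_f - 32.0) * 5.0 / 9.0
        a, b = 17.62, 243.12
        gamma = math.log(rh_percent / 100.0) + a * t_c / (b + t_c)
        dp_c = b * gamma / (a - gamma)
        return dp_c * 9.0 / 5.0 + 32.0

    # Class 1 limits referenced in the text: 78 F at 40% and 55% RH
    print(f"{dew_point_f(78, 40):.1f} F")  # ~51.7 F
    print(f"{dew_point_f(78, 55):.1f} F")  # ~60.5 F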

CLIMATE

Climate has the greatest influence on a data center’s energy efficiency, followed closely by HVAC system type, electrical distribution system reliability, and part-loading conditions. Because the primary effect of climate is on HVAC system power use, a climate-based energy analysis is required to accurately determine PUE and DCIE.

The energy use of the HVAC system (DX, chilled-water, air-cooled, water-cooled) will vary based on the outdoor air dry-bulb and wet-bulb temperatures, the chilled-water supply temperature (for chilled-water systems), and the wet-bulb temperature entering the cooling coil (for DX systems). The annual energy consumption of a particular HVAC system is estimated using biquadratic formulas developed to model the electrical usage of vapor-compression cooling equipment. Depending on the type of HVAC system being considered, the variables in these equations represent outdoor wet-bulb, outdoor dry-bulb, chilled-water supply, and condenser-water supply temperatures (see Table 4).
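To make the curve concrete, the sketch below evaluates the biquadratic form using the air-cooled DX coefficients from Table 4. The sample wet-bulb/dry-bulb pairs are arbitrary illustration points, not design conditions.

    # Biquadratic performance curve of the form referenced in Table 4 (DOE-2.2-style):
    #   fraction of rated kW/ton = c1 + c2*T1 + c3*T1^2 + c4*T2 + c5*T2^2 + c6*T1*T2
    # Coefficients below are the air-cooled DX set from Table 4; temperatures are in deg F.

    def kw_per_ton_fraction(t1: float, t2: float, c: tuple) -> float:
        c1, c2, c3, c4, c5, c6 = c
        return c1 + c2 * t1 + c3 * t1**2 + c4 * t2 + c5 * t2**2 + c6 * t1 * t2

    AIR_COOLED_DX = (-1.06393, 0.03066, -0.00013, 0.01542, 0.00005, -0.00021)
    RATED_KW_PER_TON = 1.22   # 20.0 to 63.3 ton air-cooled DX, per Table 4

    # T1 = coil entering wet-bulb (F), T2 = outdoor dry-bulb (F); sample conditions only
    for wb, db in ((62, 75), (65, 85), (67, 95)):
        frac = kw_per_ton_fraction(wb, db, AIR_COOLED_DX)
        print(f"WB {wb} F / DB {db} F: {frac:.2f} x rated = {frac * RATED_KW_PER_TON:.2f} kW/ton")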

If we were to stitch all of this together into a process to determine PUE and DCIE, the energy use components would break down as shown in Table 5.

Using this process, we can develop a profile of how PUE and DCIE vary by HVAC system type and climate. Running an analysis in accordance with the requirements of 90.1, with specific parameters such as building size, construction materials, computer equipment load, and reliability configuration, will result in a minimum energy performance profile for a particular data center. We can then use the profile to create minimum PUE and DCIE requirements for different HVAC systems in different climate zones.

Judging a facility’s energy performance without putting it in context of the climate and HVAC system type will result in skewed data that is not truly representative.
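Putting the pieces together, the following sketch rolls a Table 5-style component breakdown up into PUE and DCIE. Every kWh value is a hypothetical placeholder used only to show the bookkeeping.

    # Illustrative roll-up of Table 5-style components into PUE and DCIE.
    # All kWh values are assumed placeholders, not measured or standard figures.

    annual_kwh = {
        "HVAC: chiller/unit power":        1_600_000,
        "HVAC: fans":                        700_000,
        "HVAC: pumps":                       150_000,
        "HVAC: heat rejection":              120_000,
        "HVAC: humidification":               60_000,
        "Electrical: UPS loss":              300_000,
        "Electrical: PDU loss":              110_000,
        "Electrical: generator standby":      20_000,
        "Electrical: distribution loss":      80_000,
        "Lighting":                           70_000,
        "ICT equipment":                   4_380_000,
    }

    total = sum(annual_kwh.values())
    ict = annual_kwh["ICT equipment"]

    for name, kwh in annual_kwh.items():
        print(f"{name:32s} {kwh:>10,} kWh  {kwh / total:6.1%} of total")

    print(f"\nPUE = {total / ict:.2f}   DCIE = {ict / total:.0%}")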

Table 1: Allowable chilled and condenser water pump power, based on paragraphs G3.1.3.10 and G3.1.3.11 of 90.1. Source for all tables: William Kosik.
Pumping equipment type | ASHRAE allowable gpm/ton
Chilled water          | 2.4
Condenser water        | 3.0

Table 2: Allowable fan power, based on Table and paragraph G3.1.2.9 of ASHRAE 90.1.
Supply air volume | Constant volume (systems 1-4)      | Variable volume (systems 5-8)
>20,000 cfm       | 17.25 + (cfm – 20,000) x 0.0008625 | 24 + (cfm – 20,000) x 0.0012
≤20,000 cfm       | 17.25 + (cfm – 20,000) x 0.000825  | 24 + (cfm – 20,000) x 0.001125

Table 3: Allowable power for heat rejection equipment, based on Table 6.8.1G of ASHRAE 90.1.
Heat rejection equipment              | Minimum gpm/hp
Propeller or axial fan cooling towers | 38.2
Centrifugal fan cooling towers        | 20.0

Table 4: Minimum energy performance of the four primary data center HVAC system types and the variables for the biquadratic equation used to estimate annual energy consumption, based on data from ASHRAE 90.1 and DOE-2.2 energy analysis simulation algorithms.
Curve equation: % of kW/ton = f(T1, T2) = c1 + c2*T1 + c3*T1² + c4*T2 + c5*T2² + c6*T1*T2

Air-cooled DX
T1 = DX cooling coil entering wet-bulb temperature; T2 = outdoor dry-bulb temperature
Coefficients: c1 = -1.06393, c2 = 0.03066, c3 = -0.00013, c4 = 0.01542, c5 = 0.00005, c6 = -0.00021
Capacity (tons) | Minimum kW/ton
5.4 to 11.3     | 1.09
11.3 to 20.0    | 1.11
20.0 to 63.3    | 1.22
>63.3           | 1.26

Water-cooled DX
T1 = DX cooling coil entering wet-bulb temperature; T2 = outdoor dry-bulb temperature
Coefficients: c1 = -1.06393, c2 = 0.03066, c3 = -0.00013, c4 = 0.01542, c5 = 0.00005, c6 = -0.00021
Capacity (tons) | Minimum kW/ton
5.4 to 11.3     | 0.99
11.3 to 20.0    | 1.06
20.0 to 63.3    | 1.11

Air-cooled chilled water
T1 = chilled-water supply temperature; T2 = outdoor dry-bulb temperature
Coefficients: c1 = 0.93631, c2 = -0.01016, c3 = 0.00022, c4 = -0.00245, c5 = 0.00014, c6 = -0.00022
Capacity (tons) | Minimum kW/ton
5.4 to 11.3     | 1.35
11.3 to 20.0    | 1.35
20.0 to 63.3    | 1.35
>63.3           | 1.35

Water-cooled chilled water
T1 = chilled-water supply temperature; T2 = entering condenser-water temperature
Coefficients: c1 = 1.15362, c2 = -0.03068, c3 = 0.00031, c4 = 0.00671, c5 = 0.00005, c6 = -0.00009
Capacity (tons) | Minimum kW/ton
5.4 to 11.3     | 0.79
11.3 to 20.0    | 0.79
20.0 to 63.3    | 0.79

Table 5: A breakdown of the components required to determine PUE and DCIE.
Category          | Component                                                                 | Measured as
HVAC loads        | Chiller/CDPR unit power                                                   | kWh, % of total
HVAC loads        | Fan power                                                                 | kWh, % of total
HVAC loads        | Pump power                                                                | kWh, % of total
HVAC loads        | Heat rejection                                                            | kWh, % of total
HVAC loads        | Humidification                                                            | kWh, % of total
HVAC loads        | Other HVAC                                                                | kWh, % of total
Electrical losses | UPS loss                                                                  | kWh, % of total
Electrical losses | PDU loss                                                                  | kWh, % of total
Electrical losses | Backup generator standby loss (heaters, chargers, fuel system, controls)  | kWh, % of total
Electrical losses | Power distribution loss                                                   | kWh, % of total
Electrical losses | Lighting                                                                  | kWh, % of total
Electrical losses | Other electrical                                                          | kWh, % of total
ICT               | ICT equipment                                                             | kWh, % of total

LOOKING FORWARD

The Green Grid, an organization whose mission is to promote and develop technical standards for data center power efficiency, has made the most progress toward a detailed, repeatable, metrics-based approach for reporting efficiency ratings. Until formal standards are in place, the Green Grid’s yardsticks will help level the playing field for both new data center design and existing-facility energy-efficiency reporting.

Author Information
Kosik is energy and sustainability director of HP Critical Facilities Services, Chicago. He is a member of the Editorial Advisory Board of Consulting-Specifying Engineer.