Estimating data center PUE

When estimating data center energy use, consider applying these five analysis techniques.

By William J. Kosik, PE, CEM, HP Technology Services, Chicago; March 6, 2013

Socrates said, “The more I learn, the more I learn how little I know.” The sentiment applies well to estimating and analyzing data center energy use: despite many years of industry experience, a definitive, detailed approach for estimating power usage effectiveness (PUE) in a project’s early design phases remains elusive.

Reports and articles typically focus on measuring energy use after a facility is operational. The Green Grid Association and ASHRAE offer a wealth of information on how to perform energy compliance analyses and how to use the data to determine PUE (see “For further reading”). However, applying this knowledge to an energy use simulation process requires considerable reverse engineering and educated assumptions about how the data center will actually function.

Even after making the appropriate assumptions and creating the energy model, the fact remains that most data centers are an amalgamation of standard commercial office space (corridors, offices, conference rooms, restrooms, and loading docks) and technology spaces (data halls, communication rooms, UPS/electrical rooms, generator rooms, and air handling rooms). To further complicate matters, there often is no clear differentiation between the spaces included in the PUE calculation and ones that are not. Also, it is typical to see power and cooling systems that serve multiple zones of a facility, obscuring a solution even more. 

According to The Green Grid, "In mixed-use data centers, shared ancillary services, such as common lobbies, common bathrooms, and elevators may be excluded from the energy use boundary. However, ancillary services that are dedicated to the data center must be included (e.g., lobby, bathrooms, office spaces that are dedicated to the data center operation)." While this offers some direction, it becomes clear that the energy engineer responsible for determining the estimated energy use and data center PUE must make significant assumptions. PUE is based on annual energy use, summed over the 8,760 hours of the year, but peak PUE is also an essential metric when examining the hour of greatest power demand. To keep the nomenclature consistent, note that there technically is no peak PUE, because PUE is defined in terms of energy use, not demand. However, knowing snapshot PUE values over the course of a year is helpful when analyzing which systems use the highest amount of power, and when.
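
The underlying arithmetic is simple to make explicit: annual PUE is total facility energy divided by IT energy over the year, and a snapshot PUE can be computed for any single hour from the corresponding power draws. The minimal Python sketch below illustrates this on hypothetical hourly profiles; the load values are placeholders, not data from any project.

    # Minimal sketch of annual vs. "snapshot" PUE on hypothetical hourly data.
    # The hourly profiles below are illustrative placeholders only.

    HOURS = 8760

    # Hypothetical hourly power draws, kW (flat IT load, cooling with a crude day/night swing).
    it_kw = [1000.0] * HOURS
    cooling_kw = [250.0 + 150.0 * (h % 24 > 8) for h in range(HOURS)]
    other_kw = [75.0] * HOURS  # lighting, UPS losses, misc. support loads

    it_kwh = sum(it_kw)  # annual IT energy
    facility_kwh = sum(i + c + o for i, c, o in zip(it_kw, cooling_kw, other_kw))

    annual_pue = facility_kwh / it_kwh  # PUE is defined on annual energy
    snapshot_pue = [(i + c + o) / i for i, c, o in zip(it_kw, cooling_kw, other_kw)]

    print(f"Annual PUE: {annual_pue:.2f}")
    print(f"Highest hourly (snapshot) PUE: {max(snapshot_pue):.2f}")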

There are three common arrangements for projects that require pre-design phase energy use approximations:

  • Stand-alone data center: includes the entire building
  • Stand-alone data center: includes data halls, UPS/battery rooms, and communication rooms; excludes offices, corridors, and infrastructure rooms
  • Data center inside a commercial building: includes all technology spaces and only office spaces that directly support the data center.  

Clearly, there are many combinations of and variations on these three arrangements, and each results in a different energy use and PUE estimate. Until there is a regulated approach and a strict definition of which areas and associated power and cooling systems should be included in the PUE calculation, there will be, at best, inconsistent approaches and, at worst, gaming to get the most attractive PUE. Standardization becomes even more important when data center users ask that the PUE be guaranteed before the data center construction documents are developed (see Figure 1). The guarantee is typically tied to financial penalties and shared savings depending on the data center’s energy use performance, so documenting assumptions and referring to relevant industry standards in the contractual documents is vital.

The purpose of this article is to highlight conditions that will influence estimated energy use; it is not intended to provide solutions, simply because there are too many different parameters and circumstances to cover. However, the following items illustrate analysis techniques that can be applied to particular situations. Undoubtedly, this list will continue to grow as we learn.

  1. Continuous cooling UPS losses
  2. Separate, smaller UPS for other IT loads
  3. Miscellaneous power and lighting usage schedules
  4. Elevated supply air temperatures
  5. Partial IT load.  

Continuous cooling UPS losses

The combination of high-reliability, high-density, and high-temperature data center operation has led to the use of continuous cooling (CC) systems. The HVAC and electrical system design, program requirements, and project budget will drive the amount of cooling equipment required to have continuous operation in the event of a power interruption until on-site generation equipment is fully energized. For water-based cooling systems, the pumps will often be on the CC system to circulate water that is still cold due to its high thermal capacity. In this situation, the computer room air handler or air handling unit fans must also be on the CC system. 

Less common is to have the vapor compression equipment on the CC system, due to the large power demand of the compressors. Because many of the support systems and equipment for the IT gear are located in areas outside the data center proper, it is necessary to assess the operational continuity requirements of electrical gear and other equipment that can be adversely affected by prolonged exposure to elevated temperatures. The outcome of this assessment will tell the engineers whether cooling equipment outside the data center must also be on the CC system.

As with the other UPS systems, the CC system will have losses that vary with the amount of cooling load and associated power demand. This is where the use of economization not only reduces the energy used by the cooling system, but also reduces the losses generated by the CC system (see Figure 2). When modeling this aspect of the data center, good practice dictates that these electrical loss variations be modeled as accurately as possible, so as not to overestimate energy use by assuming that the power required by the cooling system remains constant throughout the year.
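
One way to capture this in a model is to drive the CC UPS loss off the hourly cooling power demand rather than a fixed value. The sketch below assumes a hypothetical part-load efficiency curve for the CC UPS and an hourly cooling profile in which economizer hours reduce the mechanical load; both are placeholders for project-specific data.

    # Sketch: continuous cooling (CC) UPS losses driven by hourly cooling demand.
    # Efficiency curve and cooling profile are hypothetical placeholders.

    def ups_efficiency(load_fraction, curve=((0.1, 0.82), (0.25, 0.90), (0.5, 0.94), (1.0, 0.95))):
        """Linear interpolation on an assumed part-load efficiency curve."""
        pts = sorted(curve)
        if load_fraction <= pts[0][0]:
            return pts[0][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if load_fraction <= x1:
                return y0 + (y1 - y0) * (load_fraction - x0) / (x1 - x0)
        return pts[-1][1]

    CC_UPS_CAPACITY_KW = 400.0

    # Hypothetical hourly cooling power on the CC bus; economizer hours roughly halve the load.
    cooling_kw = [120.0 if economizer else 260.0
                  for economizer in ([True] * 4000 + [False] * 4760)]

    losses_kwh = 0.0
    for kw in cooling_kw:
        eff = ups_efficiency(kw / CC_UPS_CAPACITY_KW)
        losses_kwh += kw * (1.0 / eff - 1.0)  # input power minus output power, for one hour

    print(f"Annual CC UPS losses: {losses_kwh:,.0f} kWh")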

Separate, smaller UPS for other IT loads

The UPS system in the data center is arguably one of the most critical components of the electrical system, protecting the IT equipment from power anomalies and providing ride-through time in the event of a loss of power. Each type of UPS equipment has a distinct efficiency curve that depicts how efficiency changes with IT load. Based on this curve, the energy engineer can predict how much additional energy the air-conditioning system requires due to the UPS inefficiencies, because those losses end up as heat.

Less apparent is the use of smaller UPS systems for IT loads that might be located outside the data center proper. This equipment might serve communication devices, security systems, building management systems, and, in healthcare settings, medical IT equipment used for records and imaging. First, it must be determined whether this equipment is considered part of the primary IT load and therefore belongs in the denominator of the PUE equation. Second, the inefficiency of this (undoubtedly smaller) UPS system must be included in the PUE calculation as part of the annual energy use. The primary UPS system will have different operating characteristics (efficiency at part load) and should not be used as a proxy for the smaller UPS system.
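
A practical way to keep this straight in a model is to carry each UPS with its own part-load efficiency curve and to flag explicitly whether its downstream load counts as IT load (the PUE denominator) or as support load. The sketch below is hypothetical throughout; the curves, loads, and the decision to count the secondary load as IT are assumptions the engineer would document.

    # Sketch: two UPS systems with distinct part-load efficiency curves,
    # plus the accounting decision for the secondary IT load. All values are hypothetical.

    def interp_eff(load_fraction, curve):
        """Piecewise-linear interpolation of efficiency vs. load fraction."""
        pts = sorted(curve)
        lf = min(max(load_fraction, pts[0][0]), pts[-1][0])
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if lf <= x1:
                return y0 + (y1 - y0) * (lf - x0) / (x1 - x0)
        return pts[-1][1]

    primary_curve = ((0.1, 0.85), (0.25, 0.92), (0.5, 0.95), (1.0, 0.96))
    secondary_curve = ((0.1, 0.78), (0.25, 0.86), (0.5, 0.90), (1.0, 0.92))  # smaller UPS, lower part-load efficiency

    primary_it_kw, primary_cap_kw = 800.0, 2000.0
    secondary_it_kw, secondary_cap_kw = 40.0, 100.0
    count_secondary_as_it = True  # boundary decision to document

    primary_loss = primary_it_kw * (1 / interp_eff(primary_it_kw / primary_cap_kw, primary_curve) - 1)
    secondary_loss = secondary_it_kw * (1 / interp_eff(secondary_it_kw / secondary_cap_kw, secondary_curve) - 1)

    it_kw = primary_it_kw + (secondary_it_kw if count_secondary_as_it else 0.0)
    facility_kw = primary_it_kw + secondary_it_kw + primary_loss + secondary_loss  # + cooling, lighting, etc.

    print(f"UPS losses: primary {primary_loss:.1f} kW, secondary {secondary_loss:.1f} kW")
    print(f"Snapshot PUE (electrical losses only): {facility_kw / it_kw:.3f}")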

Scheduling miscellaneous electrical and lighting loads

The ANSI/ASHRAE/IES Standard 90.1-2010 (the standard) clearly defines lighting power densities for different types of spaces in a building. Because many areas in the data center operate 24/7/365, lighting energy use can vary widely depending on the type and occupancy rate of the facility. The standard requires that when modeling the energy use of a building, “the schedules shall be typical of the proposed building type as determined by the designer and approved by the rating authority.” This statement is meant to be applied in the context of modeling a proposed building design and comparing it to the baseline design (as defined by the standard) to determine whether the proposed design meets the minimum criteria set forth in the standard. The point here is that outside of the standard, there is no definitive approach to modeling the number of hours per year that lighting systems in a data center will be energized. The same is true when estimating the energy use of miscellaneous electrical loads throughout the infrastructure areas. These loads might be occupancy-based, seasonal, or driven by the IT load. It is incumbent on the energy engineer to document the assumptions about the size of each load, when it is energized and to what level, and about lighting power density and schedules.
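
Absent a definitive rule, the practical approach is to make the lighting power densities, miscellaneous loads, and their schedules explicit model inputs so the assumptions are visible and easy to revise. The brief sketch below uses hypothetical areas, densities, and schedules purely for illustration.

    # Sketch: annual lighting and miscellaneous energy from documented assumptions.
    # Areas, power densities, and schedules below are hypothetical placeholders.

    spaces = [
        # (name, area ft2, lighting W/ft2, annual lit hours)
        ("Data hall",        10000, 0.6, 8760),
        ("UPS/electrical",    3000, 0.7, 8760),
        ("Offices/corridors", 4000, 0.9, 3120),  # ~12 h/day, 5 d/wk
    ]

    misc_loads = [
        # (name, kW, annual operating hours)
        ("Security/BMS panels", 5.0, 8760),
        ("Loading dock equip.", 8.0, 1040),
    ]

    lighting_kwh = sum(area * wpsf / 1000.0 * hours for _, area, wpsf, hours in spaces)
    misc_kwh = sum(kw * hours for _, kw, hours in misc_loads)

    print(f"Lighting: {lighting_kwh:,.0f} kWh/yr, Misc.: {misc_kwh:,.0f} kWh/yr")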

Elevated supply air temperatures

ASHRAE, the governing body for setting data center environmental operating guidelines, recently published expanded criteria for IT equipment operation that introduces two new operating classes that allow for operation at expanded conditions. Specifically, in addition to the existing class A2, which allows for operation up to an IT equipment inlet temperature of 95 F, classes A3 and A4 were introduced that allow for operation up to 104 F and 113 F, respectively. Unfortunately, limited analysis is available on the overall impact of these new classes on total cost of ownership, particularly the increase in IT power consumption at elevated temperature, and the capital cost of developing IT gear to withstand these environments. 

Before ASHRAE published the expanded criteria, it was generally known that increasing the supply air temperature in a data center lowers the energy used in the vapor-compression cycle. Raising the supply air temperature also widens the window of opportunity for economization by increasing the number of hours per year during which the ambient temperature is acceptable for an economizer cycle. ASHRAE’s white paper also notes that at higher temperatures, the fans in the computer gear will speed up to maintain the required internal temperatures and avoid thermal overload, equipment shutdown, and possible physical damage to the computer components. Another point to consider is that servers designed to A3 and A4 specifications might have a higher first cost, working against the theory that elevated temperatures will reduce the total operating costs of the data center.
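
Both effects can be screened with simple bin-type arithmetic: count the hours the outdoor air (or an indirect approach temperature) satisfies a given supply air setpoint, and apply an assumed server fan power penalty above a threshold inlet temperature. The sketch below uses an invented temperature distribution, approach temperature, and fan penalty curve to show the mechanics only.

    # Sketch: economizer hours vs. supply air setpoint, with an assumed server fan penalty.
    # The outdoor temperature bins, approach, and fan penalty are illustrative assumptions.

    # (bin midpoint F, hours per year) - placeholder climate data
    temp_bins = [(35, 900), (45, 1200), (55, 1500), (65, 1800), (75, 1700), (85, 1100), (95, 560)]

    APPROACH_F = 5.0  # assumed indirect economizer approach temperature

    def economizer_hours(supply_air_f):
        """Hours/yr the economizer alone can meet the supply air setpoint."""
        return sum(hours for t, hours in temp_bins if t + APPROACH_F <= supply_air_f)

    def server_fan_penalty(inlet_f):
        """Assumed fractional increase in IT power from internal fan speed-up above 77 F."""
        return 0.0 if inlet_f <= 77 else 0.02 * (inlet_f - 77) / 5.0  # ~2% per 5 F, assumption

    for sat in (65, 75, 85, 95):
        print(f"SAT {sat} F: {economizer_hours(sat):>5} economizer h/yr, "
              f"IT fan penalty ~{server_fan_penalty(sat) * 100:.0f}%")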

These factors will affect energy use and PUE (significantly, in some cases), especially in warmer climates. When a data center is located in a cold or mild climate, increasing supply air temperatures above a certain point will have little or no effect on energy consumption (see Figure 3). It is important to analyze this to set expectations and to avoid overdesigning the economizer system.

In warmer climates, it is a different story. Because there are many more hours with elevated outside temperatures, being able to use the air, directly or indirectly, for economization avoids running compressorized air-conditioning equipment. This is an example of a situation where it is possible to use very hot supply air and minimize cooling energy. At the same time, the increased energy use of the computer equipment must be taken into consideration. The purpose of PUE starts to get a little fuzzy here: as temperatures increase, the computer equipment draws more power and uses more energy, which enlarges the denominator of the PUE equation and can make PUE look better even as total energy use rises. Table 1 shows the results of an analysis of cooling system energy use in different locations operating at different supply air temperatures. The analysis also includes theoretical first costs attributable to using servers that can operate in an A3 environment. The results show the greatest annual energy savings and shortest payback in the warmer climates. As with the other items, it is important to document the modeling approach and describe the principles used to simulate the IT energy.
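
When A3-capable servers carry a first-cost premium, the comparison reduces to a simple payback (or a fuller total-cost-of-ownership) calculation: cooling energy saved at the higher supply air temperature, net of the added IT energy, against the added hardware cost. The numbers in the sketch below are invented to show the structure of the calculation; they are not results from Table 1.

    # Sketch: simple payback for operating at A3 supply air temperatures.
    # All inputs are invented placeholders; substitute modeled values and local rates.

    cooling_kwh_saved = 450_000   # annual cooling energy avoided at the higher setpoint
    extra_it_kwh = 90_000         # added server fan/leakage energy at elevated inlet temps
    electricity_rate = 0.09       # $/kWh, assumed
    a3_server_premium = 60_000    # $ first-cost premium for A3-capable IT gear, assumed

    net_annual_savings = (cooling_kwh_saved - extra_it_kwh) * electricity_rate
    payback_years = a3_server_premium / net_annual_savings if net_annual_savings > 0 else float("inf")

    print(f"Net annual savings: ${net_annual_savings:,.0f}")
    print(f"Simple payback: {payback_years:.1f} years")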

PUE and partial IT load

Running IT equipment at partial load is one of the biggest causes of inefficient energy use across all systems and equipment in the data center. Exacerbating this situation is that, for myriad reasons, servers, storage, and networking gear traditionally have been deployed to run at very low computing loads (utilization), often less than 20% of capability. Recent data from the U.S. Environmental Protection Agency showed that, across a sampling of more than 60 enterprise servers, the power draw at an idle computing state ranged from 25% to 80% of the full-load power draw. All else being equal, this translates to a draw on the electrical power system of at least 25%, even when computing loads are nil or minimal. Fortunately, there are other loads in the data center beyond servers that mitigate this very high minimum power state to a lower value.
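
The consequence for the energy model is that IT power cannot be scaled linearly with utilization; an idle-power floor has to be assumed, consistent with the EPA-type data cited above. The sketch below applies a hypothetical 40% idle fraction (a placeholder within the 25% to 80% range) to show how little the power draw falls at low utilization.

    # Sketch: IT power draw vs. compute utilization with an assumed idle-power floor.
    # The 40% idle fraction is a placeholder within the cited 25%-80% range.

    NAMEPLATE_IT_KW = 1000.0  # design IT load at full computing load
    IDLE_FRACTION = 0.40      # assumed power draw at idle, as a fraction of full load

    def it_power_kw(utilization):
        """Linear power model between the idle floor and full load."""
        return NAMEPLATE_IT_KW * (IDLE_FRACTION + (1.0 - IDLE_FRACTION) * utilization)

    for u in (0.0, 0.2, 0.5, 1.0):
        print(f"Utilization {u:>4.0%}: IT draw {it_power_kw(u):>6.0f} kW")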

Understanding these power states and how the data center will be designed to accommodate them is vital to properly modeling the energy use. Remember that studies have shown that at 25% load, electrical losses are more than two times greater in a monolithic design than in a modular system, and the chiller system will draw more than 1.5 times the power of a modular solution. This influences the energy use of the facility and the PUE mainly at low loads. Also, the data center facility will likely have a base electrical load for lighting and other miscellaneous power that is constant across all power states. This base load can drive the PUE very high at very low IT loads, which is an important reason to use lighting and miscellaneous power loads in the energy model that are as accurate as possible.
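
The effect of a constant base load on PUE at low IT loads can be illustrated with a few lines of arithmetic. The sketch below holds lighting and miscellaneous power fixed, lets cooling scale with IT load, and, as an assumption, roughly doubles the electrical loss fraction at low load for a monolithic design relative to a modular one; all of the fractions are placeholders.

    # Sketch: snapshot PUE vs. partial IT load with a constant base load.
    # Loss fractions, cooling ratio, and base load are illustrative assumptions.

    DESIGN_IT_KW = 1000.0
    BASE_KW = 60.0  # lighting + misc., roughly constant at all power states

    def loss_fraction(it_fraction, modular):
        """Assumed electrical loss fractions; monolithic losses roughly double the modular value at low load."""
        return 0.05 + (0.02 if modular else 0.09) * (1.0 - it_fraction)

    def snapshot_pue(it_fraction, modular=True):
        it_kw = DESIGN_IT_KW * it_fraction
        cooling_kw = 0.30 * it_kw  # assumed proportional cooling power
        elec_loss_kw = loss_fraction(it_fraction, modular) * it_kw
        return (it_kw + cooling_kw + elec_loss_kw + BASE_KW) / it_kw

    for frac in (1.0, 0.5, 0.25, 0.10):
        print(f"IT load {frac:>4.0%}: PUE modular {snapshot_pue(frac):.2f}, "
              f"monolithic {snapshot_pue(frac, modular=False):.2f}")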

At partial loading, the effects of very high-reliability power and cooling systems are most noticeable. Because reliability often comes in the form of adding redundant modules to the power and cooling systems, these added modules lower the power draw of the other modules, causing them to run at even lower loading points. Using elements of a recent high-reliability data center project as an example, the power and cooling systems are designed in a modular fashion, allowing capacity to be added when required. Throughout the life of the data center, a number of power and cooling units will always be redundant with other units, sharing the overall power or cooling load. This sharing drives down the loading of the units and typically drives down their efficiency as well. Figure 4 shows the plan for expansion of the data center, compared to the number of power and cooling units installed. The average loading on a cooling unit is 73%, while the UPS modules average 27%.
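
The arithmetic behind those average loading figures is worth making explicit in the model: each module sees the total load divided by the product of the installed module count (including redundant units) and the module capacity. The sketch below uses hypothetical module counts, capacities, and loads, not the project data behind Figure 4.

    # Sketch: per-module loading when redundant modules share the load.
    # Module counts, capacities, and loads are hypothetical.

    def module_load_fraction(total_load_kw, modules_installed, module_capacity_kw):
        """Load fraction seen by each module when all installed modules share the load equally."""
        return total_load_kw / (modules_installed * module_capacity_kw)

    # Example: 1,500 kW of IT load on a UPS plant deployed as N+2 with 500 kW modules.
    needed = 3     # N modules required for the load
    redundant = 2  # +R redundant modules
    lf = module_load_fraction(1500.0, needed + redundant, 500.0)
    print(f"UPS module loading with N+{redundant}: {lf:.0%}")  # 60% vs. 100% without redundancy

    # Phased deployment (partial IT load) pushes the loading lower still; hypothetical case:
    lf_partial = module_load_fraction(0.45 * 1500.0, needed + redundant, 500.0)
    print(f"Same plant at 45% IT load: {lf_partial:.0%}")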

Figure 5 shows that at 27% UPS module loading, the efficiency will be approximately 92%. Remember, this is when the module is at its maximum loading. Looking at an additional scenario of a 15% IT load, the module efficiency drops to approximately 88%. To get the entire electrical system efficiency, we need to add losses for other components, such as transformers, power distribution units, and wiring, which will add another 2% to 3%. The electrical distribution system, including the UPS equipment, is one of the largest contributors to overall energy consumption in the data center, and it is necessary to understand its effects and model them properly.
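
Chaining those figures together gives an approximate end-to-end electrical distribution efficiency and the corresponding heat that the cooling plant must also remove. The short sketch below works through that arithmetic using the approximate efficiencies quoted above and a hypothetical IT load.

    # Sketch: chained electrical-system losses at low UPS module loading.
    # Efficiencies are the approximate values quoted above; the IT load is hypothetical.

    it_kw = 300.0      # hypothetical IT load at the low-load condition
    ups_eff = 0.88     # ~88% at ~15% module loading (about 92% at 27%)
    other_loss = 0.025 # transformers, PDUs, wiring: roughly another 2% to 3%

    system_eff = ups_eff * (1.0 - other_loss)  # approximate end-to-end efficiency
    total_input_kw = it_kw / system_eff
    losses_kw = total_input_kw - it_kw

    print(f"End-to-end electrical efficiency: {system_eff:.1%}")
    print(f"Input {total_input_kw:.0f} kW for {it_kw:.0f} kW of IT load; "
          f"{losses_kw:.0f} kW of losses also become added cooling load")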

At some point, we will start knowing more as we learn more, but until then, the industry needs to work with the standards and guidelines that are currently available and apply good engineering judgment when attempting to determine the energy use and PUE of a yet-unbuilt data center.


William Kosik is principal data center energy technologist with HP Technology Services, Chicago. Kosik is one of the main technical contributors shaping HP Technology Services’ energy and sustainability expertise and consults on client assignments worldwide. A member of the Consulting-Specifying Engineer editorial advisory board, he has written more than 20 articles and spoken at more than 35 industry conferences.


For further reading

Refer to the following references for more information about estimating data center energy use: