Estimating data center PUE

03/06/2013


Scheduling miscellaneous electrical and lighting loads

The ANSI/ASHRAE/IES Standard 90.1-2010 (the standard) clearly defines lighting power densities for different types of spaces in a building. Because many areas in the data center will have 24/7/365 operation, modeling of the lighting energy use can be highly variable depending on the type and occupancy rate of the facility. The standard requires that when modeling the energy use of a building, “the schedules shall be typical of the proposed building type as determined by the designer and approved by the rating authority.” This statement is meant to be applied in the context of modeling a proposed building design and comparing it to the baseline design (as defined by the standard) to determine if the proposed design meets the minimum criteria set forth in the standard. The point here is that outside of the standard, there is no definitive approach to modeling the number of hours per year that lighting systems in a data center will be energized. The same is true when estimating energy use of miscellaneous electrical loads throughout the infrastructure areas. These loads might be occupancy-based, seasonal-based, or IT load-based. It is incumbent on the energy engineer to document assumptions about the size of each load, when it is energized, and to what level, as well as the assumed lighting power densities and schedules.
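As a rough illustration of how these assumptions might be documented and rolled up, the sketch below totals annual lighting energy from an assumed lighting power density, floor area, and schedule fraction for each space. All of the values shown are hypothetical placeholders, not requirements from the standard.

```python
# Minimal sketch: annual lighting energy from an assumed lighting power
# density (LPD) and a simple schedule fraction per space. All values here
# (areas, LPDs, schedule fractions) are illustrative assumptions.

SPACES = {
    # name: (area_ft2, lpd_w_per_ft2, fraction_of_hours_energized)
    "data_hall":  (20000, 0.7, 1.00),  # assume lights energized 24/7/365
    "electrical": (4000,  0.8, 0.25),  # assume occupancy-sensor control
    "office":     (2500,  0.9, 0.30),  # assume business-hours schedule
}

HOURS_PER_YEAR = 8760

def annual_lighting_kwh(spaces: dict) -> float:
    """Sum annual lighting energy (kWh) over all documented spaces."""
    total_wh = 0.0
    for area_ft2, lpd_w_ft2, on_fraction in spaces.values():
        total_wh += area_ft2 * lpd_w_ft2 * on_fraction * HOURS_PER_YEAR
    return total_wh / 1000.0

if __name__ == "__main__":
    print(f"Estimated lighting energy: {annual_lighting_kwh(SPACES):,.0f} kWh/yr")
```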

Elevated supply air temperatures

Figure 3: This graph shows annual energy use for air-cooled indirect evaporative cooling for ASHRAE climate zones. When a data center is located in a cold or mild climate, increasing supply air temperatures above a certain point will have little or no effect on energy consumption.

ASHRAE, the governing body for setting data center environmental operating guidelines, recently published expanded criteria for IT equipment operation that introduce two new operating classes allowing operation at expanded conditions. Specifically, in addition to the existing class A2, which allows operation up to an IT equipment inlet temperature of 95 F, classes A3 and A4 were introduced that allow operation up to 104 F and 113 F, respectively. Unfortunately, limited analysis is available on the overall impact of these new classes on total cost of ownership, particularly the increase in IT power consumption at elevated temperatures and the capital cost of developing IT gear to withstand these environments.

Before ASHRAE published the expanded criteria, it was generally known that increasing the supply air temperature in a data center will lower the energy used in the vapor-compression cycle. The window of opportunity to use economization also opens further, because higher supply air temperatures increase the number of hours per year that the ambient temperature is deemed acceptable for use in an economizer cycle. ASHRAE's white paper also discussed that at higher temperatures, the fans in the computer gear will speed up to maintain the required internal temperatures and avoid thermal overload, equipment shutdown, and possible physical damage to the computer components. Another point to consider is that servers designed to A3 and A4 specifications might have a higher first cost, working against the theory that elevated temperatures will reduce the total operating costs of the data center.
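The effect of the expanded classes on economizer availability can be illustrated with a simple hour-counting exercise. The sketch below assumes a list of 8,760 hourly dry-bulb temperatures for the site and an assumed economizer approach temperature; the inlet limits are the A2, A3, and A4 values noted above.

```python
# Minimal sketch: how raising the allowable IT inlet temperature widens the
# economizer window. `hourly_drybulb_f` is assumed to be 8,760 hourly dry-bulb
# temperatures (deg F) for the site; the approach temperature is an assumption.

from typing import Sequence

CLASS_LIMITS_F = {"A2": 95.0, "A3": 104.0, "A4": 113.0}
APPROACH_F = 5.0  # assumed temperature rise between outdoor air and IT inlet

def economizer_hours(hourly_drybulb_f: Sequence[float], inlet_limit_f: float) -> int:
    """Count hours when outdoor air (plus approach) can meet the inlet limit."""
    return sum(1 for t in hourly_drybulb_f if t + APPROACH_F <= inlet_limit_f)

def compare_classes(hourly_drybulb_f: Sequence[float]) -> None:
    """Print economizer-eligible hours for each ASHRAE operating class."""
    for cls, limit in CLASS_LIMITS_F.items():
        hrs = economizer_hours(hourly_drybulb_f, limit)
        print(f"{cls}: {hrs} of {len(hourly_drybulb_f)} hours available for economization")
```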

Courtesy: HP Technology Services; source: table data are based on in-house research HP Technology Services conducted in collaboration with HP Labs and HP Enterprise Server, Storage, and Network, both internal HP groups.

These notions will affect energy use and PUE (significantly, in some cases), especially in warmer climates. When a data center is located in a cold or mild climate, increasing supply air temperatures above a certain point will have little or no effect on energy consumption (see Figure 3). It is important to perform this analysis to set expectations and to avoid overdesigning the economizer system.

In warmer climates, it is a different story. Because there is a much greater number of hours with elevated outside temperatures, being able to use the air—directly or indirectly—for economization will avoid using compressorized air-conditioning equipment. This is an example of a situation where it is possible to use very hot supply air and minimize cooling energy. At the same time, the increased energy of the computer equipment must be taken into consideration. The purpose of PUE starts to get a little fuzzy here because as the temperatures increase, the computer equipment will start to draw more power and use more energy. Table 1 shows the results of an analysis on energy use of cooling systems in different locations operating at different supply air temperatures. The analysis also includes theoretical first costs attributable to using servers that can operate in an A3 environment. The results show the greatest annual energy savings and shortest payback in the warmer climates. As with the other items, it is important to document the modeling approach and describe the principles used as to how the IT energy was simulated.
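The sketch below illustrates this fuzziness: because the IT load sits in the denominator of PUE and grows with temperature, the PUE can appear to improve even as IT energy rises. The fan-power uplift and cooling-energy reduction shown are illustrative assumptions, not results from Table 1.

```python
# Minimal sketch of why PUE "gets fuzzy" at elevated supply air temperatures:
# the IT load (the PUE denominator) grows as server fans speed up, so PUE can
# improve even though the IT equipment itself uses more energy. The fan-power
# uplift and cooling-energy figures are illustrative assumptions.

def pue(it_kw: float, cooling_kw: float, other_facility_kw: float) -> float:
    """Total facility power divided by IT power."""
    return (it_kw + cooling_kw + other_facility_kw) / it_kw

# Baseline supply air temperature (assumed values)
baseline = pue(it_kw=1000.0, cooling_kw=300.0, other_facility_kw=100.0)

# Elevated supply air temperature: assume cooling energy drops 40% while
# server fans draw 5% more IT power.
elevated = pue(it_kw=1050.0, cooling_kw=180.0, other_facility_kw=100.0)

print(f"Baseline PUE: {baseline:.2f}")   # ~1.40
print(f"Elevated PUE: {elevated:.2f}")   # ~1.27, even though IT energy rose
```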

Figure 4: These graphs compare cooling unit and UPS module redundancies at partial IT loads for the data center expansion plan example. The average loading on a cooling unit is 73% (a), while the UPS modules average 27% (b). Courtesy: HP Technology Services

PUE and partial IT load

Running IT equipment at partial load is one of the biggest causes of inefficient energy use across all systems and equipment in the data center. Exacerbating this situation is that for myriad reasons, servers, storage, and networking gear traditionally have been deployed to run at very low computing loads (utilization), often less than 20% of capability. Recent data from the U.S. Environmental Protection Agency showed that from a sampling of more than 60 enterprise servers, the power draw at an idle computing state ranged from 25% to 80% of a full-load computing load. All else being equal, this translates to a draw on the electrical power system of a minimum of 25%, even when computing loads are nil or at a very minimal state. Fortunately, there are other loads in the data center beyond servers that mitigate this very high minimum power state to a lower value. 

Understanding these power states and how the data center will be designed to accommodate them is vital to properly modeling the energy use. Remember that studies have shown that at 25% load, electrical losses are more than two times greater in a monolithic design compared to a modular system, while a monolithic chiller system will draw more than 1.5 times the power of a modular solution. This will influence the energy use of the facility and the PUE mainly at low loads. Also, the data center facility will likely have a base electrical load for lighting and other miscellaneous power that is constant across all power states. This can drive the PUE very high when analyzing at very low loads. This is an important reason to use lighting and miscellaneous power loads in the energy model that are as accurate as possible.
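A simple model makes the point. In the sketch below, a constant base load for lighting and miscellaneous power is added to cooling and electrical losses that scale with IT load; all values are assumed for a hypothetical facility, and the linear loss factors are far simpler than the monolithic-versus-modular behavior described above.

```python
# Minimal sketch: a constant base load (lighting, miscellaneous power) drives
# PUE up sharply at low IT loads. Loads and loss fractions are illustrative
# assumptions for a hypothetical facility.

BASE_KW = 150.0          # assumed constant lighting + miscellaneous load
DESIGN_IT_KW = 2000.0    # assumed design IT capacity

def partial_load_pue(it_fraction: float,
                     cooling_factor: float = 0.35,
                     elec_loss_factor: float = 0.08) -> float:
    """PUE at a given fraction of the design IT load.

    cooling_factor and elec_loss_factor scale linearly with IT load in this
    simple model; real systems are less linear at partial load.
    """
    it_kw = DESIGN_IT_KW * it_fraction
    cooling_kw = it_kw * cooling_factor
    elec_loss_kw = it_kw * elec_loss_factor
    return (it_kw + cooling_kw + elec_loss_kw + BASE_KW) / it_kw

for frac in (1.0, 0.5, 0.25, 0.10):
    print(f"{frac:>4.0%} IT load -> PUE {partial_load_pue(frac):.2f}")
# The fixed base load pushes PUE from ~1.5 at full load to ~2.2 at 10% load.
```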

At partial loading, the effects of very high-reliability power and cooling systems are most noticeable. Because reliability often comes in the form of adding redundant modules to the power and cooling systems, these added modules lower the overall power draw of the other modules, causing them to run at even lower loading points. Consider elements of a recent high-reliability data center project as an example: the power and cooling systems are designed in a modular fashion, allowing capacity to be added when required. During the life of the data center, there will always be a number of power and cooling units that are redundant with other units, sharing the overall power or cooling load. This sharing drives down the loading of the units and typically drives down the efficiency as well. Figure 4 shows the plan for expansion of the data center, comparing it to the number of power and cooling units installed. For the cooling units, the average loading on a unit is 73%, while the UPS modules average 27%.
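The arithmetic behind this effect is straightforward. The sketch below shows how installed redundant modules pull down the average per-module loading; the module counts and sizes are hypothetical and are not the actual values behind the 73% and 27% figures.

```python
# Minimal sketch: redundant modules sharing the load drive down per-module
# loading. Module counts and capacities here are hypothetical assumptions.

def per_module_loading(load_kw: float, module_kw: float,
                       required_modules: int, redundant_modules: int) -> float:
    """Average loading when the load is shared across all installed modules."""
    installed = required_modules + redundant_modules
    return load_kw / (installed * module_kw)

# Example: a 1,000 kW design IT load served by 500 kW UPS modules in an N+2
# arrangement (2 required + 2 redundant). Early in the expansion plan only
# 400 kW of load is present, so each module carries very little.
print(f"{per_module_loading(400.0, 500.0, required_modules=2, redundant_modules=2):.0%}")
# -> 20% average module loading
```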

Figure 5: This graph shows an example of UPS module efficiency at partial IT loads. The efficiency is approximately 92% at 27% UPS module loading. However, the efficiency drops to approximately 88% at 15% UPS module loading. Courtesy: HP Technology Services

Figure 5 shows that at 27% UPS module loading, the efficiency will be approximately 92%. Remember, this is when the module is at its maximum loading. Looking at an additional scenario of a 15% IT load, the module efficiency drops to approximately 88%. To get the entire electrical system efficiency, we need to add losses for other components such as transformers, power distribution units, and wiring, which will add another 2% to 3%. The electrical distribution system, including the UPS equipment, is one of the largest contributors to overall energy consumption in the data center, and it is necessary to understand its effects and model them properly.
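The roll-up of electrical losses can be sketched as follows, using the approximate efficiency points from Figure 5 and an assumed 2.5% downstream distribution loss within the 2% to 3% range noted above.

```python
# Minimal sketch: combining UPS efficiency at partial load with an assumed
# downstream distribution loss to estimate the electrical contribution to PUE.
# Efficiency points approximate Figure 5; the distribution loss is an assumption.

UPS_EFFICIENCY_BY_LOAD = {0.27: 0.92, 0.15: 0.88}  # approximate, from Figure 5
DISTRIBUTION_LOSS = 0.025                           # transformers, PDUs, wiring (assumed)

def electrical_input_per_it_kw(ups_load_fraction: float) -> float:
    """kW drawn at the UPS input for every kW delivered to the IT equipment."""
    ups_eff = UPS_EFFICIENCY_BY_LOAD[ups_load_fraction]
    overall_eff = ups_eff * (1.0 - DISTRIBUTION_LOSS)
    return 1.0 / overall_eff

for frac in (0.27, 0.15):
    print(f"UPS at {frac:.0%} load -> {electrical_input_per_it_kw(frac):.3f} kW "
          "of input power per kW of IT load")
# Roughly 1.12 kW and 1.17 kW of input per kW of IT load, respectively.
```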

Over time the industry will learn more, but until then, it needs to work with the standards and guidelines that are currently available and apply good engineering judgment when attempting to determine the energy use and PUE of a yet-unbuilt data center.


William Kosik is principal data center energy technologist with HP Technology Services, Chicago. Kosik is one of the main technical contributors shaping HP Technology Services’ energy and sustainability expertise and consults on client assignments worldwide. A member of the Consulting-Specifying Engineer editorial advisory board, he has written more than 20 articles and spoken at more than 35 industry conferences.


