Tackling data center energy use

ASHRAE Standard 90.4: Energy Standard for Data Centers guides engineers in designing mechanical and electrical systems in data centers.

07/27/2018


Learning objectives

  • Become familiar with ASHRAE Standard 90.4-2016: Energy Standard for Data Centers.
  • Learn how to define data center energy use.
  • Calculate energy-reduction options within data centers.

In 1999, Forbes columnist Peter Huber published a prescient article titled "Dig More Coal—the PCs Are Coming," which focused on energy use attributable to the internet. It was one of the first times data center energy use had been raised in a major mainstream publication.

Nearly 20 years later, the industry is still on that quest, diligently working on tactics and strategies to curb energy use while maintaining data center performance and reliability. During this time, dozens of cooling and power system designs have been developed to shrink electricity bills through energy efficiency improvements. Building on these advances, manufacturers now produce equipment purpose-built for the data center. Previously, HVAC engineers were mostly limited to designing around standard commercial cooling equipment, which generally could not meet the demands of a critical facility.

As the industry matured (domestically and globally), efforts ramped up to reduce energy use in data centers and other technology-intensive facilities. These programs, while different in scope and detail, all shared a similar goal: develop noncompulsory, consensus-driven best practices and guidelines for optimizing data center energy efficiency and resource use. It was truly a watershed moment when these programs yielded actual design and reference documentation, providing vital access to data on design and operations; before then, finding consistent, verifiable instruction on how to improve data center energy efficiency was not easy. Today, these documents are numerous and come from diverse sources worldwide.

When organizations such as ASHRAE develop official standards, the current state of the industry is taken into consideration to avoid releasing language that is overly stringent (or too lax) and could produce unintended outcomes. Undoubtedly, some of the grassroots best practices and design approaches already in place ultimately influenced modifications to the commonly used energy efficiency standard, ASHRAE 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings, to create a data center-specific energy standard.

As building and systems technology has changed over the years, ASHRAE has responded by continuously refining its approach to defining energy efficiency. For more than 40 years, ASHRAE has maintained a widely used process for determining compliance in commercial buildings. Determining compliance for data centers, however, proves more challenging due to the exceptional nature of these facilities' electrical consumption and their rigorous reliability requirements; this drove the creation of ASHRAE Standard 90.4-2016: Energy Standard for Data Centers.

Increasing the urgency to develop a data center energy standard was the rapid proliferation of massive data centers, which made it logistically difficult to identify, develop, and release a new standard as complex as ASHRAE 90.1. Every release of Standard 90.1 since 2010 has built on the previous one, providing more instruction and metrics for developing energy-consumption compliance specific to data centers.

Tackling energy use: Looking ahead

As hyperscale data centers and cloud computing flourish, energy efficiency and operating costs continue to be fundamental concerns for data center owners and operators. For example, the Cisco Global Cloud Index, which analyzes cloud and data center platforms, predicts that by 2020, traffic within hyperscale data centers will quintuple, representing 53% of all data center traffic.

Hyperscale data centers are designed to be highly scalable, homogeneous, and highly virtualized. These data centers also tend to have elevated computer equipment densities (measured in watts per square foot or kilowatts per server cabinet) and above-average overall power demands (measured in megawatts). This is not limited to new data centers; when an existing data center retires end-of-life servers, storage, and networking equipment, the result can often be a net-positive electrical load, with the replacement systems requiring more power than the computer systems they replaced.

Part of this has to do with being able to fit more hardware in the space vacated by the displaced equipment. Even if an individual server, for example, has a smaller power demand than its predecessor, the total load will increase because of the greater number of servers, as the sketch below illustrates.
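As a minimal sketch of that arithmetic (all figures below are assumed for illustration, not taken from any particular facility), consider a cabinet refresh in which each new server draws less power than the one it replaces, yet the cabinet load still rises:

```python
# A minimal sketch of the consolidation arithmetic described above.
# All figures are assumed for illustration only.

old_servers = 10        # servers per cabinet before the refresh (assumed)
old_watts_each = 500    # draw per legacy server, in watts (assumed)

new_servers = 20        # denser packing after the refresh (assumed)
new_watts_each = 350    # each new server draws less than its predecessor

old_load_kw = old_servers * old_watts_each / 1000   # 5.0 kW per cabinet
new_load_kw = new_servers * new_watts_each / 1000   # 7.0 kW per cabinet

print(f"Before refresh: {old_load_kw:.1f} kW per cabinet")
print(f"After refresh:  {new_load_kw:.1f} kW per cabinet")
print(f"Net increase:   {new_load_kw - old_load_kw:.1f} kW")
```

Even though each new server is 30% leaner in this sketch, doubling the server count raises the cabinet load by 40%.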

The phrase "next-generation computing" may evoke a feeling of massively powerful, autonomous computers. That isn't too far from reality, especially when talking about supercomputers and other high-performance computing systems. These computing platforms are set apart from other systems by the ability to solve extremely complex and data-intensive problems.

For larger systems, data-throughput speed and sheer computational muscle require a peak power demand unlike any commercial computing platform; it is not unusual to see server cabinets rated from 80 kW to well over 100 kW (compared with a corporate data center, where the power demand per server cabinet generally ranges from 5 kW to 20 kW). These leading-edge machines are purpose-built and use proprietary power distribution and cooling systems, and they will run at peak power for weeks or months when processing particularly intensive jobs. Due to the massive power demand and associated energy use, the supercomputing community has placed a high priority on efficiency as it applies to computer performance. For example, the Top500, a database of end user-supplied performance data, tracks the 500 most powerful commercially available computer systems. A subset of the Top500, the Green500, ranks the most energy-efficient supercomputers in the world.

Because supercomputers are built for specific uses, especially at the upper end of computational power, there are few similarities among the different machines. To address this, the Green500 list is compiled using a performance-per-watt efficiency metric: floating-point operations per second (FLOPS) divided by power measured in watts, resulting in FLOPS per watt. This normalizes the different platforms and system sizes, allowing a valid comparison of supercomputing efficiency. While there are certainly other performance tests for supercomputers, the FLOPS-per-watt metric is particularly useful for building services and energy engineers.
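A minimal sketch of that normalization is shown below; the machine names and performance figures are hypothetical, invented purely to show how dividing performance by power puts very different systems on a common footing:

```python
# A minimal sketch of the Green500-style performance-per-watt calculation.
# The system names and figures below are hypothetical, for illustration only.

def flops_per_watt(gflops: float, power_watts: float) -> float:
    """Efficiency in GFLOPS per watt: sustained performance divided by power draw."""
    return gflops / power_watts

# (sustained performance in GFLOPS, measured power in watts) -- assumed values
systems = {
    "machine_a": (2_000_000.0, 1_800_000.0),   # large, power-hungry platform
    "machine_b": (400_000.0, 250_000.0),       # smaller but more efficient
}

for name, (gflops, watts) in systems.items():
    print(f"{name}: {flops_per_watt(gflops, watts):.2f} GFLOPS/W")
```

In this sketch, the smaller machine_b (1.60 GFLOPS/W) outranks the far more powerful machine_a (1.11 GFLOPS/W), which is exactly the kind of comparison raw performance alone cannot make.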

Early metric development

Over the years, a great deal of effort and specialized knowledge has gone into advancing energy efficiency in data centers. Some of the most influential organizations were formed nearly two decades ago and have since built up considerable membership bases. Engineers, scientists, researchers, the U.S. federal government, manufacturers, professional organizations, and many others were the driving force behind establishing these organizations.

An interesting aspect of these associations is that the membership will typically have varied reasons for wanting to develop energy efficiency goals. The diverse mix of participants encouraged debate and discussion, which is a big reason why much of the material published was well-received and is still relevant many years later. Also, the organizations did not operate under the same rules: some were top-down (like the federal government entities) and some were bottom-up (like engineering and professional societies).

One of these organizations, The Green Grid (TGG), is at the forefront of promoting data center efficiency. TGG also has a diverse membership base, so the information it generates spans varied topics applicable to different disciplines, all centered on data center efficiency.

TGG released its seminal white paper in 2007, "Green Grid Metrics: Describing Datacenter Power Efficiency." This paper formally introduced power usage effectiveness (PUE), described as a short-term metric for determining data center power efficiency, derived from facility power measurements:

PUE = (total facility power)/(IT equipment power)

At that time, TGG's stated purpose in developing the paper was to "enable data center operators to quickly estimate the energy efficiency of their data centers, compare the results against other data centers, and determine if any energy efficiency improvements need to be made." (In subsequent white papers, TGG acknowledged that they no longer recommend comparing two data centers based on PUE results because there are many attributes of data center location, reliability level, engineering, implementation, and operations that affect PUE.)
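A minimal worked example of the definition above follows; the kilowatt figures are assumed for illustration:

```python
# A minimal sketch of the PUE calculation as defined above.
# The kilowatt figures are assumed for illustration only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Example: 1,000 kW of IT load plus 500 kW of cooling, power-distribution
# losses, and lighting gives a total facility load of 1,500 kW.
print(f"PUE = {pue(total_facility_kw=1500.0, it_equipment_kw=1000.0):.2f}")  # PUE = 1.50
```

A PUE of 1.0 would mean every watt entering the facility reaches the IT equipment; the gap above 1.0 represents cooling, distribution, and other overhead.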

Despite TGG's caution against misapplying PUE by comparing different data centers, TGG also acknowledged that many organizations in the data center industry had already begun to compare PUEs among data centers, which raises further questions about how to interpret the results. Organizations are realizing that using PUE as an energy-efficiency yardstick has drawbacks, exacerbated by the fact that there are various ways to calculate PUE; stakeholders in the industry have expressed concerns about the consistency and repeatability of reported PUE measurements.

Ironically, at the same time PUE was gaining global acceptance, the data center industry was becoming more sophisticated and began requiring not just a metric, but a more detailed, robust process that evaluates conformity to data center energy-use consensus standards.

Today, PUE is one of the most widely used metrics for determining data center efficiency. While extremely valuable for calculating and benchmarking efficiency, PUE was never meant to evaluate proposed data center designs against a standard, especially when different cooling systems, economizers, and climates are part of the evaluation. Even though it would be several years before the release of ASHRAE Standard 90.4, PUE, other related metrics, and informal design practices would continue to serve as unofficial proxies for the formal process that the ASHRAE standard would later establish.

