Energy performance in mission critical facilities

Mission critical facilities, such as data centers, are judged carefully on their energy use. Engineers should focus on the codes and standards that dictate energy performance and how building energy performance can be enhanced.

By Bill Kosik, PE, CEM, BEMP, LEED AP BD+C, HP Data Center Facilities Consulting March 23, 2015

Learning objectives

  • Understand the various ways to measure energy use in mission critical facilities.
  • Learn about the codes and standards that dictate energy performance.
  • Learn about the organizations that govern and guide energy performance.

Mission critical facilities support a wide variety of vital operations where facility failure will result in complications that range from serious disruptions to business operations to circumstances that can jeopardize the life safety of the general public. To minimize or eliminate the chance of facility system failure, mission critical facilities have three hallmarks that set them apart from other types of commercial buildings:

  1. The facility must support operations that run continuously without shutdowns due to equipment failure or maintenance. Seasonal or population changes within the facility have a small impact on the energy use profile; generally, the facility is internally loaded with heavy electrical consumption.
  2. Redundant power and cooling systems are required to support the 24/7/365 operation. Depending on the level of redundancy, there will be additional efficiency losses in the power and cooling systems brought on by running the equipment at a small percentage of its capacity.
  3. The technical equipment used in the facility, such as computers; medical and laboratory equipment; and monitoring, communications, and surveillance systems, will have high power requirements that translate into heat gain and energy use.

Putting these hallmarks together, mission critical facilities need to run continuously, providing less efficient power and cooling to technical equipment that has very high electrical requirements, all without failure or impacts from standard maintenance procedures. This is why energy use (and ways to reduce it) in mission critical facilities has been, and will continue to be, of great concern. This is true whether the mission critical facility is a laboratory, hospital, data center, police/fire station, or another type of essential operation.

And due to constant advances in the design of technical equipment, the strategies and tactics used for reducing facility energy consumption need to anticipate how future changes will impact building design, codes, standards, and other guidelines. Fortunately, technical equipment generally becomes more energy-efficient over time with improvements in design. This can reduce facility energy use in two ways: the equipment itself will use less energy, and the energy used by the power and cooling systems will also decrease.

Data centers are one segment of the mission critical facility industry that arguably sees the highest rate of change in how the facilities are designed, driven primarily by the requirements of the technical equipment: servers, storage devices, and networking gear. Data centers have the highest concentration of technical equipment, whether measured per square foot or as a percentage of total power demand, compared with other mission critical facilities. A change in the specifications or operating conditions of the computers in a data center will have a ripple effect that runs through all aspects of the power and cooling systems (see Figure 1). Moreover, IT equipment manufacturers are developing next-generation technology that can significantly reduce the overall energy use and environmental impact of data centers. This is a good thing, but it brings new design challenges that need to be addressed in codes, standards, and guidelines.

For data centers and the broader range of commercial buildings, there are myriad programs, guidelines, and codes intended to keep energy use as low as possible. Publications from ASHRAE, Lawrence Berkeley National Laboratory, the U.S. Green Building Council, and the U.S. Environmental Protection Agency are good examples of technical but practical resources aiding in data center strategy.

But how did all of these come about? To understand the path forward, it is equally important to know how we got here. Similar to the rapid evolution of power and cooling systems in data centers, many of the documents released by these groups were developed in response to changes and new thinking in the data center design and construction industry.

Energy-efficiency programs for buildings

In the United States, one of the first federal programs to spawn broader energy-efficiency initiatives was the 1977 U.S. National Energy Plan. It was developed as a blueprint identifying energy efficiency as a priority because "conservation is the quickest, cheapest, most practical source of energy." This plan became the basis for many other building energy use reduction programs, which would typically start at the federal level and eventually trickle down to state and local governments.

During this time, one of the most widely used building efficiency standards was published for the first time: ASHRAE Standard 90-1975: Energy Conservation in New Building Design. Because no comprehensive national standard existed at the time, this was the first opportunity for many architects and engineers to objectively calculate the energy costs of their designs and to increase energy efficiency. Since its initial release, the standard has been renamed ASHRAE Standard 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings and has been put on a 3-year maintenance cycle. For example, the 2013 edition of Standard 90.1 improves minimum energy efficiency for regulated loads by approximately 37% over the 2004 edition. Each new release of the standard typically contains significant new energy-efficiency requirements.

With the proliferation of communications and computing technology at the end of the 20th century, building codes and standards, especially Standard 90.1, needed to reflect how technology was impacting building design, especially power, cooling, control, and communication systems. Changes in power density for high-technology commercial buildings began to create situations that made it difficult for certain building designs to meet the Standard 90.1 minimum energy use requirements. Also, when following the prescriptive measures in Standard 90.1, the results show that the energy saved by better wall and roof insulation, glazing technology, and lighting is a small fraction of the energy consumption of computers and other technical equipment.

Without adapting the standards to reflect how data center facilities and IT equipment are evolving, it would become increasingly difficult to judge the efficiency of data center facilities against the standard. But without addressing the operation and energy consumption of the computers themselves, an opportunity to develop a holistic, optimal energy use strategy for the data center would be lost. The engineering community and IT manufacturers, backed by publicly reviewed, industry-accepted standards and guidelines, needed to take a prominent role in attacking this challenge.

ASHRAE 90.1 language

It is interesting to study how the ASHRAE 90.1 standard issued in 2001 dealt with high electrical density equipment, such as what is typically seen in a data center. Keep in mind that only a decade or so earlier, a high-end corporate server consisted of a single 33-MHz 386 CPU, 4 MB of RAM, and two 120-MB hard drives; such machines were scattered about in offices where they were needed, a far cry from today's state of the art. If needed, mainframe computers would reside in a separate data processing room.

Overall, the electrical intensity of the computer equipment was far less than what is commonly seen today in large corporate enterprises. The language in Standard 90.1 at that time referred to "computer server rooms" and was written specifically to exclude the computer equipment from the energy-efficiency requirements, rather than stipulating requirements to make things more efficient. The exclusions dealt primarily with humidification and with how to define the baseline HVAC systems used in comparing energy use to the proposed design. At that time, the generally held belief was that computer systems were very susceptible to failure if exposed to improper environmental conditions and therefore should not have to meet certain parts of the standard that could result in a deleterious situation.

Knowing this, data center industry groups were already developing energy efficiency and environmental operating guidelines. And as the use of computers continued to increase and centralized data centers began to show up in increasing numbers of building designs, it became necessary for ASHRAE to play a more important role in this process.

New language for data centers

With the release of ASHRAE Standard 90.1-2007, based on input from the data center community, including ASHRAE's TC 9.9 for Mission Critical Facilities, data centers could no longer be treated as an exception in the energy standard. There were several proposed amendments to Standard 90.1-2007 that included specific language, but it wouldn't be until the release of Standard 90.1-2010 that data center-specific language was used in the standard. The sections of the standard relating to data centers took another big leap forward with the release of the 2013 edition, which contains specific energy performance requirements for data centers, including the ability to use power usage effectiveness (PUE) as a measure of conformity with the standard.

Standard 90.1 certainly has come a long way, but, as expected in the technology realm, computers continue to evolve and change the way they impact the built environment. This affects many aspects of a building design, including overall facility size, construction type, electrical distribution systems, and cooling techniques. It places an unprecedented demand on developing timely, relevant building energy codes, standards, and guidelines because, as history has shown, a lot of change can occur in a short amount of time. And because the work to develop a standard must be concluded well before the formal release of the document, the unfortunate reality is that portions of the document will already be out of date when released.

Synergy in energy use efficiency

In the past decade, many of the manufacturers of power and cooling equipment have created product lines designed specifically for use in data centers. Some of this equipment has evolved from existing lines, and some has been developed from the ground up. Either way, the major manufacturers understand that the characteristics of a data center require specialized equipment and product solutions. Within this niche there are a number of novel approaches that show potential based on actual installed performance and market acceptance. The thermal requirements of the computers have been the catalyst for developing many of these novel approaches; state-of-the-art data centers have IT equipment (mainly servers) with inlet temperature requirements of 75 to 80 F and higher. (The ASHRAE Thermal Guideline classes allow inlet temperatures as high as 113 F.) This has enabled designs for compressorless cooling, relying solely on cooling from outside air or on water-cooled systems using heat-rejection devices (cooling towers, dry coolers, closed-circuit coolers, etc.). Even in climates with temperature extremes that go beyond the temperature requirements, owners are taking a calculated risk and not installing compressorized cooling equipment, based on the large first-cost reduction (see Figure 2).
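
To make the free-cooling idea above concrete, here is a minimal sketch, assuming a hypothetical hourly outdoor dry-bulb temperature profile and a hypothetical heat-exchanger approach temperature, of how a designer might estimate the hours per year a data center with an elevated inlet-temperature setpoint could run without compressors. None of the function names or numbers come from a specific project or standard.

```python
import random

# Sketch: estimate how many hours per year outdoor conditions can satisfy an
# elevated server inlet temperature without compressor-based cooling.
# The approach value and temperature data are hypothetical placeholders.

def compressorless_hours(hourly_outdoor_db_f, inlet_setpoint_f=78.0, approach_f=8.0):
    """Count hours where outdoor dry-bulb plus a coil/heat-exchanger approach
    still meets the allowable IT inlet temperature."""
    usable = [t for t in hourly_outdoor_db_f if t + approach_f <= inlet_setpoint_f]
    return len(usable), len(usable) / len(hourly_outdoor_db_f)

# Example with made-up data: 8,760 hourly values for a mild, hypothetical climate.
random.seed(1)
outdoor_temps = [random.gauss(58, 14) for _ in range(8760)]
hours, fraction = compressorless_hours(outdoor_temps, inlet_setpoint_f=78.0)
print(f"Economizer-only hours: {hours} ({fraction:.0%} of the year)")
```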

How are these high inlet temperatures being used to reduce overall energy use and improve operations? A small sampling:

  • Depending on the type of computing equipment, during stretches of above-normal temperatures, the computer processor can be slowed down intentionally, effectively reducing the heat output of the computers and lessening the overall cooling load of the data center. This allows the facility to be designed around high inlet temperatures and also provides an added level of protection if outside temperatures go beyond what is predicted. This strategy demonstrates how interconnected facility and IT systems can provide feedback and feed forward to each other to achieve an operational goal (see the sketch after this list).
  • Cooling technologies such as immersion cooling are fundamentally different from most data center cooling systems. In this application, the servers are completely immersed in a large tank of a mineral oil-like solution, keeping the entire computer, inside and outside, at a consistent temperature. This approach has a distinct advantage: It reduces the facility cooling system energy by using liquid cooling and heat-rejection devices only (no compressors), and it reduces the energy of the servers as well. Since the servers are totally immersed, the internal cooling fans are not needed and the energy used in powering these fans is eliminated.
  • Manufacturers also have developed methods to apply refrigerant phase-change technology to data center cooling that, with certain evaporating/condensing temperatures, does not require any pumps or compressors, offering a large reduction in energy use as compared to the ASHRAE 90.1 minimum energy requirements. Other refrigerant-based systems can be used with economizer cycles using the refrigerant as the free-cooling medium (see Figure 3).
  • Cooling high-density server cabinets (>30 kW) poses a challenge due to the large, concentrated electrical load. One solution is to provide a close-coupled system using fans and a cooling coil on a one-to-one basis with the cabinet. In addition to using water and refrigerants R-134a, R-407C, and R-410A in close-coupled installations, refrigerant R-744, also known as carbon dioxide (CO2), is also being employed. CO2 cooling is used extensively in industrial and commercial refrigeration due to its low toxicity and efficient heat absorption. Also, the CO2 can be pumped or operated in a thermosiphon arrangement.
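
The processor-throttling idea in the first bullet can be summarized as a simple derating curve. The sketch below, with hypothetical temperature thresholds and a hypothetical linear power model, shows how IT power (and therefore cooling load) might be capped as inlet temperature rises past the design point; it illustrates the concept only and is not a vendor algorithm.

```python
# Sketch of the facility/IT feedback idea: when inlet air temperature exceeds
# the design point, cap processor power so the heat load (and required
# cooling) drops. All thresholds and fractions are hypothetical.

def throttled_it_load_kw(inlet_temp_f, nominal_it_kw,
                         design_inlet_f=80.0, max_inlet_f=95.0, min_power_frac=0.6):
    """Linearly derate IT power between the design and maximum inlet temperatures."""
    if inlet_temp_f <= design_inlet_f:
        return nominal_it_kw
    if inlet_temp_f >= max_inlet_f:
        return nominal_it_kw * min_power_frac
    span = (inlet_temp_f - design_inlet_f) / (max_inlet_f - design_inlet_f)
    return nominal_it_kw * (1.0 - span * (1.0 - min_power_frac))

# Example: a hypothetical 1,000-kW IT load at several inlet temperatures.
for t in (75, 82, 88, 95):
    print(f"{t} F inlet -> {throttled_it_load_kw(t, nominal_it_kw=1000):.0f} kW IT load")
```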

Trends in energy use, performance

When we talk about reducing energy use in data centers, we need to have a two-part discussion focusing on energy use from the computer itself (processor, memory, storage, internal cooling fans) and from the cooling and power equipment required to keep the computer running. One way to calculate the energy use of the entire data center operation is to imagine a boundary that surrounds both the IT equipment and the power/cooling systems, both inside and outside the data center proper. Inside this boundary are systems that support the data center, as well as others that support the areas of the facility that keep the data center running, such as control rooms, infrastructure spaces, mechanical rooms, and other technical rooms. After these systems are identified, it is easier to categorize and develop strategies to reduce the energy use of the individual power and cooling systems within the boundary.

Take the total of this annual energy use (in kWh), add it to the annual energy use of the IT equipment, and then divide this total by the annual energy use of the IT systems (see Figure 4). This is the definition of PUE, which was developed by The Green Grid a number of years ago. But there is one big caveat: PUE does not address scenarios where the IT equipment energy use is reduced below a predetermined minimum energy performance. PUE is a metric that focuses on facility energy use and treats the IT equipment energy use as a static value, unchangeable by the facilities team. This is a heavily debated topic because using PUE could create a disincentive to reduce IT energy. In any event, the goal of an overall energy-reduction strategy must include both the facility and the IT equipment.
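
A minimal sketch of the PUE arithmetic described above, using hypothetical annual kWh figures; the second calculation illustrates the caveat that reducing IT energy alone, with facility support energy unchanged, actually raises PUE even though total energy falls.

```python
# PUE as described above: (facility support energy + IT energy) / IT energy,
# computed over a year. All kWh figures below are hypothetical.

annual_it_kwh = 8_000_000           # servers, storage, networking
annual_support_kwh = {              # everything else inside the boundary
    "cooling_plant": 2_400_000,
    "air_movement": 900_000,
    "ups_and_distribution_losses": 500_000,
    "lighting_and_misc": 200_000,
}

support_total = sum(annual_support_kwh.values())
pue = (support_total + annual_it_kwh) / annual_it_kwh
print(f"PUE = {pue:.2f}")           # 1.50 for these made-up numbers

# The caveat: cut IT energy 25% with support energy unchanged and PUE rises,
# even though total energy use went down.
reduced_it_kwh = annual_it_kwh * 0.75
pue_after_it_savings = (support_total + reduced_it_kwh) / reduced_it_kwh
print(f"PUE after IT-only savings = {pue_after_it_savings:.2f}")   # 1.67
```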

To demonstrate exemplary performance and to reap the energy-savings benefits that come from the synergistic relationship between the IT and facility systems, the efficiency of the servers, storage devices, and networking gear can be judged against established industry benchmarks. Unfortunately, this is not a straightforward (or standardized) exercise, given the highly varying business models that drive how the IT equipment will operate and the application of strategies such as virtualized servers and workload shifting.

To illustrate how energy use can be reduced beyond what a standard enterprise server consumes, some next-generation enterprise servers have multiple chassis, each housing very small yet powerful high-density cartridge computers, with each chassis capable of containing close to 200 servers. Arrangements like this can have power use profiles similar to the previous generation, but by using more effective components (processor, memory, graphics card, etc.) and sophisticated power-management algorithms, they deliver faster processing speeds and higher-performing memory and graphics while using less energy than the previous generation when computing work output is compared with electrical power input. This is not an anomaly or a one-off situation. Studying the trends of supercomputers over the past two decades, it is evident that these computers are on the same path, with each new generation more efficient than the last. As an example, in the last 5 years alone, the metric of megaflops per kW, the "miles per gallon" of the high-performance computing world, has increased 4.6 times while the power has increased only 2.3 times (see Figure 5).
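
A quick back-of-the-envelope calculation using the two ratios cited above: because delivered computing output equals efficiency (output per unit of power) multiplied by power, those two trends together imply roughly a tenfold increase in output over the period.

```python
# Arithmetic on the supercomputer trend cited above. The two ratios are taken
# from the text; the product is the implied growth in computing output.

efficiency_gain = 4.6   # output per unit of power, relative to 5 years earlier
power_growth = 2.3      # total power draw, relative to 5 years earlier

output_growth = efficiency_gain * power_growth
print(f"Computing output grew roughly {output_growth:.1f}x")   # ~10.6x
```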

The progression of computers

It is important to understand that many of the high-performance computing systems at the top of their class are direct water-cooled. Using water at higher temperatures will reduce (or eliminate) the compressor energy in the central cooling plant. Direct water cooling also allows more efficient processor, graphics card, and memory performance by keeping internal temperatures more stable and consistent than air cooling, where temperatures within the server enclosure may be uneven due to changes in airflow through the server. As more high-end corporate servers move toward water cooling, the areas of the energy codes that address air-handling fan motor power will have to be reevaluated, because a much smaller portion of the data center will be cooled by air, creating a significant reduction in fan motor power. Fan power limitations and strategies for reducing energy use will certainly still apply, but they will make a much smaller contribution to overall consumption.
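
A simple way to see the fan-power effect described above is to assume a fixed fan-power allowance per kW of air-cooled IT load and vary the share of the load that is direct water-cooled. The allowance and load figures below are hypothetical, not values taken from ASHRAE 90.1.

```python
# Sketch: effect of shifting IT load to direct water cooling on air-handling
# fan power, assuming a fixed fan-power allowance per kW of air-cooled IT load.
# The 0.10 kW/kW allowance and the 2,000-kW IT load are hypothetical.

def fan_power_kw(total_it_kw, water_cooled_fraction, fan_kw_per_air_cooled_kw=0.10):
    """Fan power scales with whatever portion of the IT load is still air-cooled."""
    air_cooled_kw = total_it_kw * (1.0 - water_cooled_fraction)
    return air_cooled_kw * fan_kw_per_air_cooled_kw

for frac in (0.0, 0.5, 0.8):
    print(f"{frac:.0%} water-cooled -> fan power {fan_power_kw(2000, frac):.0f} kW")
```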

Historically, one of the weak points in enterprise server energy use was the turndown ratio, which compares electrical power draw to IT workload. It used to be that an idle server, with no workload, would draw close to 50% of its maximum power just sitting in an idle state. Knowing that in most instances servers would be idle or running at very low workloads, a huge amount of energy was being used without producing any computing output. As server virtualization became more prevalent (increasing minimum workloads by running several virtualized servers on one physical server), the situation improved. But it was clear that there was still a lot of room for improvement, and the turndown ratio had to get better. The result is that today's server technology allows a much closer matching of actual computing workload to electrical power input (see Figure 6).
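
The turndown issue can be illustrated with a simple linear power model, comparing a legacy server that idles near 50% of peak draw with a more power-proportional design; the peak wattage, idle fractions, and utilization points below are hypothetical.

```python
# Sketch of the turndown ratio: power draw at a given utilization for an older
# server (~50% of peak at idle) vs. a newer, more power-proportional design.
# The linear model and all numbers are hypothetical.

def server_power_w(utilization, peak_w=500.0, idle_fraction=0.5):
    """Linear power model between idle draw and peak draw."""
    return peak_w * (idle_fraction + (1.0 - idle_fraction) * utilization)

for util in (0.05, 0.25, 0.60):
    legacy = server_power_w(util, idle_fraction=0.50)
    modern = server_power_w(util, idle_fraction=0.20)
    print(f"{util:.0%} load: legacy {legacy:.0f} W, modern {modern:.0f} W")
```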

There is movement in the IT industry to create the next wave of computers, ones designed with a completely new approach and using components that are currently, for the most part, in laboratories in various stages of development. The most innovative computing platforms in use today, even ones with advanced designs that enable extremely high performance while significantly reducing energy use, rely on the same fundamental building blocks that have been used for decades. From a data center facilities standpoint, whether air or water is used as the cooling medium, as long as the computer maintains the same fundamental design, the same cooling and power strategies will remain as they are today, allowing for only incremental efficiency improvements. And even as server densities increase (raising power draw per unit of data center area), approximately the same overall data center size is required, albeit with a smaller computer room than a lower-density application would need.

But what if an entirely new approach to designing computers comes about? And what if this new approach dramatically changes how we design data centers? Processing the torrent of data and using it to create meaningful business results will continue to push the electrical capacity in the data center needed to power IT equipment. And, as we’ve seen over the past decade, the pressure of the IT industry’s energy use may force energy-efficiency trade-offs that result in a sub-optimal outcome vis-a-vis balancing IT capacity, energy source, and total cost of ownership. While no one can predict when this tipping point will come or when big data will reach the limit of available capacity, the industry must find ways to improve efficiency, or it will face curtailed growth. These improvements have to be made using a holistic process, including all of the constituents that have a vested interest in a continued energy and cost-aware growth of the IT industry.

The bottom line: In the next few years, the data center design and construction industry will have to remain an active participant in the evolution of IT equipment and will need to come up with creative design solutions for revising codes and standards, such as ASHRAE 90.1, making sure there is a clear understanding of the ramifications of the IT equipment for the data center facility. As developments in computing technology research begin to manifest as commercially available products, it is likely that the most advanced computing platforms won't immediately replace standard servers; specific types of workloads, such as very big data or real-time analytics, will require a new type of computing architecture. Even though this technology is still in development, it gives a good indication that a breakthrough in server technology is coming in the near future, one that could rewrite today's standards for data center energy efficiency.


Bill Kosik is a distinguished technologist at HP Data Center Facilities Consulting. He is the leader of "Moving toward Sustainability," which focuses on the research, development, and implementation of energy-efficient and environmentally responsible design strategies for data centers. Kosik collaborates with clients, developing innovative design strategies for cooling high-density environments, and creating scalable cooling and power models.