Designing data center air handling and conditioning systems

Strategies for implementing new air handling systems, including specific performance criteria, energy use targets, climate and heat recovery are among the many topics to be addressed during the conceptual engineering and design process.

By Bill Kosik, PE, CEM, Oak Park, Illinois March 18, 2019

Learning Objectives

  • Understand the history behind early air handling systems.
  • Learn about current standards and guidelines for air handling systems.
  • See how data center thermal guidelines impact energy consumption and computer performance.
  • Learn about how energy recovery is used in air handling systems.

Data center air handling and conditioning systems are essential to ensure optimal performance and reliability of information technology equipment (ITE) systems. Requisites for maintaining a proper environment include monitoring and control of maximum and minimum dry-bulb temperature, dew-point temperature, and levels of particulate and gaseous contaminants. These requirements, defined by the data center design and engineering team, also set expectations for overall reliability, maintainability, simplicity, temperature and moisture control, and energy efficiency.

During the design process, it is not uncommon to develop design requirements that clash with other goals. For example, to attain high levels of energy efficiency, a solution might use outside air to reduce mechanical cooling energy; however, outdoor air that falls outside ASHRAE’s temperature and moisture content criteria could create an indoor air quality problem that negatively impacts the computer hardware. While this is a simplified example, it illustrates the need for close collaboration among team members. Similarly, power and cooling systems designed for higher reliability will increase maintenance and operation costs. These are the types of trade-offs that need to be studied prior to completing the final design.

Specifying and designing air handling units (AHUs) also requires an integrated approach, since each of the internal components of the AHU (fan, motor, filter, coil, damper, humidifier) has its own individual role. While this may seem obvious, it is important that the same level of care is put into the design and selection of each of these components. Once the design is finalized, facility engineering personnel must be educated on the air handling system’s maintenance and operations. This leads to a solid grasp of the technical underpinnings of the air handling and conditioning equipment, which is critical when the systems become operational and when preparing for subsequent start-up, commissioning, testing, and maintenance. Performance criteria developed during the conceptual engineering phase and then finalized during design include:

  • Dry-bulb temperature.
  • Moisture level.
  • Fan performance.
  • Filtration effectiveness.
  • Overall control and monitoring of the internal components.

The criteria must be included in the knowledge transfer from the design team to the facilities engineering team, who are ultimately responsible for successful operations.
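One way to formalize that knowledge transfer is to carry the criteria in a structured record that follows each unit from design through commissioning. The following is a minimal illustrative sketch; the field names and example values (75°F supply air, MERV 13 filtration, and so on) are assumptions for illustration, not requirements from the article.

```python
from dataclasses import dataclass

@dataclass
class AHUPerformanceCriteria:
    """Design criteria handed from the design team to facilities engineering."""
    supply_dry_bulb_f: float      # supply air dry-bulb setpoint, deg F
    supply_dew_point_f: float     # supply air dew-point target, deg F
    design_airflow_cfm: float     # fan design airflow, cfm
    external_static_in_wg: float  # external static pressure at design airflow, in. w.g.
    filter_merv: int              # filtration effectiveness (MERV rating)

# A record for one hypothetical unit (values are illustrative only):
ahu_1 = AHUPerformanceCriteria(
    supply_dry_bulb_f=75.0,
    supply_dew_point_f=55.0,
    design_airflow_cfm=50_000,
    external_static_in_wg=1.5,
    filter_merv=13,
)
print(ahu_1)
```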

Similarities and differences: Data centers vs. comfort cooling applications

Data center air handling equipment is designed and operated very differently than equipment that is used for comfort cooling applications, such as for commercial and institutional buildings. On the surface, data center and comfort cooling have some similarities. The equipment physically looks the same, and some manufacturers make equipment used in both data centers and comfort cooling. However, the two have design specifications that are somewhat divergent, resulting in very different functionality: outdoor air quantities, filtration requirements, cooling coil sensible and latent capacities, and water temperatures are some of the design parameters that differ significantly between the two. For example, the method of controlling the quantity of air delivered to the conditioned space in a comfort cooling application is typically achieved by a combination of duct-mounted static pressure sensors and temperature monitors. Control systems in a data center vary supply airflow and temperature based on readings from several sensors located under the raised floor, in the data center (in many locations), and mounted in the ductwork.
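To illustrate the control difference, here is a deliberately simplified sketch of a data center-style loop that drives supply fan speed from the warmest of several space sensors rather than from a single duct static-pressure reading. The setpoint, proportional gain, and speed limits are invented for illustration and are not from the article; a real system would also modulate supply temperature and use more robust logic.

```python
def supply_fan_speed_pct(sensor_temps_f, setpoint_f=75.0, gain=8.0,
                         min_speed=30.0, max_speed=100.0):
    """Simplified data center airflow control: modulate fan speed from the
    warmest reading among many sensors (underfloor, room, duct) instead of
    a single duct static-pressure sensor."""
    worst_case = max(sensor_temps_f)           # protect the hottest inlet
    error = max(worst_case - setpoint_f, 0.0)  # deg F above setpoint
    speed = min_speed + gain * error           # proportional-only response
    return min(max(speed, min_speed), max_speed)

# Readings from underfloor, room, and duct-mounted sensors (deg F):
print(supply_fan_speed_pct([72.4, 74.1, 76.8, 73.0]))  # roughly 44.4% speed
```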

Another primary difference between data centers and comfort cooling applications became evident in the early days of data center design. At the time, only a few manufacturers made data center-specific equipment, and those systems for data centers were adapted from standard commercial equipment. In some cases, this led to a compromise to the original design intent and produced suboptimal results, such as the inability to deliver the required amount of air due to inadequate development of static pressure. As the data center market expanded, manufacturers introduced equipment designed specifically with data center design and operating criteria in mind. This greatly improved the required functionality and gave the engineer and end user more flexibility and future expandability of the facility, both short- and long-term. (See Figure 1)

Historical perspective

Air handling and conditioning systems date to the early 1900s and have a long history of innovation, resulting in improved worker safety, public health, and overall quality of life. While some of the basic design principles have not significantly changed in the last century, today’s AHUs have evolved into systems that create highly controlled indoor environments while consuming far less energy than previous generations. (See Figure 2)

While comfort cooling is entering its second century of operation, data center air conditioning is still relatively new (approximately 50 years old). Non-data center applications, such as comfort cooling, ventilation, and industrial applications, still dominate the market. However, the increasing need for precision environmental control for ITE systems has forced an evolution of equipment, one needed to meet new design and operational requirements specific to data centers. This evolution also reflects the long-standing tradition of innovation in the HVAC industry, dating back to the beginning of the 20th century.

AHUs are a key component in air conditioning systems, regardless of the application. Since the early years of the 20th century, engineers have designed these systems, initially for manufacturing processes and then for human-comfort applications. Regardless of the application, modern AHUs share the same types of components with their mechanical ancestors: cooling, humidification, dehumidification, filtration, heating, ventilation using outdoor air, and a mechanical means of moving the air (a fan). The widespread use of electricity played an important role in the practicality and effectiveness of air handling systems. Prior to this, conditioned air was created by melting ice in the airstream and blowing the cool air through moistened fabric sheets. It was also quite common for heating systems to rely on convective forces to move air through a building, without the use of electric fans. (While these methods are very primitive compared with today’s systems, the thermodynamic principles behind them have not changed and are the same ones used today.)

Applying AHU design to data centers

Methods of air conditioning data centers, like other applications, have evolved significantly over the years. The changes are typically a direct result of changes in ITE systems. For example, early computers, many created to assist in defense operations, relied on open windows for cooling or traditional office building air-conditioning systems. Later, some used internal liquid or refrigerant cooling. With the advent of the corporate data center in the early 1970s, manufacturers began to develop “precision cooling” AHUs, co-located with the computers and placed around the perimeter of the data center. These units had a straightforward mission: supply filtered, cooled, and humidified/dehumidified air to the data center, draw the warm return air back to the AHU, and repeat the cycle. This has been one of the more common data center cooling approaches for almost a half-century.

However, pressure to lower energy costs, relaxed temperature- and moisture-level requirements of ITE systems, and revised limits on indoor gaseous and particulate contaminants have changed the landscape for AHUs (and air conditioning systems as a whole), inviting opportunities for manufacturers to develop more innovative and efficient data center cooling techniques.

AHU building blocks

An AHU is a cohesive collection of components whose sizes are determined by the design requirements developed by the data center engineers and end users. Examples of these requirements are cooling/heating load, filtration, volumetric flow rate, temperature, and moisture levels. The unit can be as simple as a ventilation unit that includes dampers, filters, and a fan. In contrast, the AHU can be used for maintaining strict tolerances in temperature and moisture content, controlling particulates, and maintaining precise air pressure differentials.

The following lists the most common AHU components. (Each of the components will have a very different design and manufacturing specification depending on the application, but the intent is to introduce and discuss the different elements.)

  • Outside air damper—Damper for controlling the quantity of outside air which is driven by the control strategy for ventilation, pressurization, and economization
  • Return air damper—Damper for controlling airflow back to the air handling unit.
  • Exhaust damper—Damper to control the amount of exhaust air.
  • Mixed air plenum—Used to mix outside air and return air. Generally, it needs some type of mixing device mounted in the plenum; without good mixing, the airstreams will stratify. (A simple mixed-air temperature calculation appears after this list.)
  • Prefilter—Its purpose is to filter out coarse particulates to lengthen the final filter’s useful life.
  • Final filter—Its specification is dependent on particulate and gaseous contaminant requirements; an additional filter might be used downstream of the fan.
  • Cooling coil—Primary device used to reduce the dry-bulb temperature (sensible-only cooling) while the moisture level stays the same. In comfort cooling applications, the air is cooled below its dew point, condensing moisture out of the airstream (dehumidification).
  • Heating coil—Generally not used in data center applications; this coil is used to warm up the space, especially when the ITE load is small.
  • Supply fan—Ensures proper airflow rates to the terminal devices in the data center.
  • Reheat coil—Used only in applications requiring tight humidity control.
  • Humidifier—A device to increase the moisture content of the airstream.
  • Sound attenuators—Devices, comparable to a car muffler, used upstream or downstream of a noise source, such as a fan.
  • Return or return/exhaust fan—Works in conjunction with the supply fan to return/exhaust the required air from the data center.
  • Exhaust fan—A dedicated fan to extract air from the building to maintain ventilation and pressurization requirements.

(See Figure 3)
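As a simple illustration of how the outside air and return air dampers and the mixed-air plenum interact, the mixed-air dry-bulb temperature can be estimated as an airflow-weighted average of the two streams. This sketch assumes complete mixing and equal air density in both streams; the 30% outside-air fraction and the temperatures are arbitrary example values.

```python
def mixed_air_temp_f(oa_fraction, t_outside_f, t_return_f):
    """Steady-state mixed-air dry-bulb temperature for an outside/return
    air blend, assuming complete mixing and equal air density."""
    return oa_fraction * t_outside_f + (1.0 - oa_fraction) * t_return_f

# 30% outside air at 50 deg F blended with 105 deg F data center return air:
print(mixed_air_temp_f(0.30, 50.0, 105.0))  # 88.5 deg F entering the filters/coil
```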

Factors affecting energy consumption

Heating and cooling processes may be remote from the AHU (for example, generating chilled water for cooling, hot water/steam for heating, and refrigeration compressors). This article focuses mainly on AHUs, but since the performance of the chilled water and direct expansion (DX) coils (components of the air handling systems) depends on these remote processes, there will be some discussion on mechanical cooling vis-à-vis the AHU performance.

Similarly, the control strategy will influence the energy use of the AHU, determining the amount of outside air, damper position, fan speed, and coil capacity.

In a data center, the largest energy consumer is the ITE system, which also has the greatest effect on the energy use of the HVAC system. The ITE system’s design load (kW), running load (kW), and fluctuation of load throughout the day/week/month is the starting point in determining the energy use of the air handling systems. The efficiency and effectiveness of the fans and cooling coils, along with the control strategy of the air handling equipment, will ultimately determine the actual energy use.
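The link between ITE load and airflow can be quantified with the standard-air sensible heat relation, Q [Btu/h] ≈ 1.08 × cfm × ΔT [°F]. A brief sketch, assuming standard air density and a purely sensible load (the 1,000-kW load and temperature rises are arbitrary example values):

```python
def required_airflow_cfm(ite_load_kw, delta_t_f):
    """Airflow needed to remove a sensible ITE load, using the standard-air
    relation Q[Btu/h] ~= 1.08 * cfm * dT[deg F]."""
    q_btuh = ite_load_kw * 3412.0  # 1 kW = 3,412 Btu/h
    return q_btuh / (1.08 * delta_t_f)

# 1,000 kW of ITE load with a 20 deg F rise across the equipment:
print(round(required_airflow_cfm(1000.0, 20.0)))  # ~157,963 cfm
# The same load with a 30 deg F rise needs a third less air:
print(round(required_airflow_cfm(1000.0, 30.0)))  # ~105,309 cfm
```

This relation is also why the wider supply/return temperature differences discussed later in the article reduce fan energy: a larger temperature rise moves the same heat with less air.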

A major hurdle to overcome when designing air handling systems for data centers is the variability of the ITE power load (which has a direct correlation to the cooling load and fan airflow). The following are three examples to demonstrate how the ITE power requirements vary, based on different operational profiles:

  • Constant power usage irrespective of time of day or month. The only fluctuations in HVAC system power use will come from the use of economization techniques, which are dependent on outside conditions.
  • Constant variability in power use during the day, week, and month. The HVAC system must have the ability to vary the amount of cooling and airflow to keep the computers cool and maintain overall energy efficiency.
  • Scheduled on/off operation of the ITE systems. The HVAC systems are required to run at 100% cooling for extended periods of time and are shut down when the ITE systems conclude their run.

The controllability and efficiency of the air handling systems differ amongst these scenarios; this becomes more pronounced in HVAC systems designed for high levels of reliability where multiple AHUs will run at reduced load. This design strategy, where AHUs are running at a small percentage of their full capability, requires an effective control strategy to ensure the AHUs are maintaining the temperature and moisture levels required to keep the computational power and energy use of the ITE at an optimal point, while maintaining the reliability and maintainability criteria. (See Figure 4)

Data center environmental conditions 

As the level of data center analysis and engineering sophistication increases, computer manufacturers have been developing more robust and resilient hardware capable of operating in a wider range of environmental conditions. This hasn’t always been the case—early data centers were kept as cold as possible to avoid failure of electronic components. Early hardware was also prone to failure if temperatures in the data center fluctuated too quickly; the solution was to keep the data centers at a constant cold temperature.

By contrast, modern ITE hardware including compute servers, storage servers, and network equipment can tolerate a wider range of indoor conditions. As an example, early data center designs used supply air temperatures as low as 55°F with a return air temperature of 75°F. A supply temperature for a current design might be 80°F, with a return temperature of 110°F. Not only does this approach save energy by reducing the compressor power needed to make chilled water in water-based systems, but it also extends the hours of economization (both air and water), lowering energy use even further. (See Figure 5)
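The effect of a warmer supply setpoint on economization can be approximated by counting the hours in a weather record during which outside air alone can produce the supply temperature. This is only a sketch: the 5°F approach and the 12-value weather list are fabricated for illustration, and a real analysis would use location-specific hourly (TMY) weather data.

```python
def economizer_hours(hourly_oa_db_f, supply_setpoint_f, approach_f=5.0):
    """Count hours in which outside air alone can make the supply temperature.
    `approach_f` is an assumed margin between outdoor air and achievable supply."""
    limit = supply_setpoint_f - approach_f
    return sum(1 for t in hourly_oa_db_f if t <= limit)

weather = [40, 52, 61, 68, 74, 79, 83, 77, 70, 62, 50, 44]  # fabricated, not TMY data
print(economizer_hours(weather, 55.0))  # 3 of 12 hours qualify at a 55 deg F supply
print(economizer_hours(weather, 80.0))  # 9 of 12 hours qualify at an 80 deg F supply
```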

Fan energy use

Fans are the principal element in an AHU. Fans not only move air in and out of the data center, but they also must draw the air across the coils, filters, and other components located in the AHU to ensure optimal heat transfer and filtration. The fan is also the principal energy consumer in an AHU; it consumes more energy than any other system in the data center except for mechanical cooling and the ITE systems. Fan design, selection, and control are therefore highly important for data center energy optimization.

Interestingly, the reliability and maintainability requirements for data centers can provide an opportunity to reduce fan energy. For example, if a certain number of AHUs are required to cool a data center, additional standby AHUs will be installed to keep the cooling system operational during planned and unplanned outages. During normal operation, one strategy is to keep all the AHUs running, including the standby units. If five 50,000-cfm AHUs (250,000 cfm total) are required to satisfy the cooling load and the system is designed as N+2, a total of seven 50,000-cfm AHUs (350,000 cfm total) are installed. At 100% cooling load, running five AHUs at 50,000 cfm each will require a maximum of 231.5 kW, while running seven AHUs at 35,700 cfm each will require a maximum of 118.3 kW: a 49% reduction in power demand. When the data center is running at partial load, the energy reduction from running the standby AHUs becomes even more pronounced. For example, a data center running at 50% load will have a maximum fan motor power of 29 kW when running five AHUs; also running the two standby units, the power drops to 14 kW.

The concept of fan-motor power reduction based on reduced airflow is grounded in the fan affinity laws, which state that fan power varies with the cube of the ratio of airflows. For an airflow that is 50% of the initial airflow, the fan power is (0.5)³, or 12.5%, of the initial fan power. (See Figures 6a and 6b.)
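The arithmetic behind Figures 6a and 6b can be reproduced directly from the affinity laws. In the sketch below, the 46.3-kW per-fan design power is inferred from the article’s 231.5-kW total for five fans; everything else follows from the cube relationship (the small difference from the article’s 118.3 kW comes from rounding the per-unit airflow to 35,700 cfm).

```python
def fan_power_kw(design_power_kw, design_cfm, actual_cfm):
    """Fan affinity laws: power scales with the cube of the airflow ratio."""
    return design_power_kw * (actual_cfm / design_cfm) ** 3

PER_FAN_KW, PER_FAN_CFM = 46.3, 50_000  # 231.5 kW / 5 fans at design airflow
total_cfm = 250_000                     # airflow required at 100% cooling load

for n_units in (5, 7):                  # N units only vs. N+2 with all units running
    cfm_each = total_cfm / n_units
    total_kw = n_units * fan_power_kw(PER_FAN_KW, PER_FAN_CFM, cfm_each)
    print(f"{n_units} AHUs at {cfm_each:,.0f} cfm each: {total_kw:.1f} kW")
# 5 AHUs at 50,000 cfm each: 231.5 kW
# 7 AHUs at 35,714 cfm each: 118.1 kW  (a ~49% reduction)
```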

Energy recovery in AHUs

Heat- or energy-recovery systems can be very effective in reducing energy consumption in data centers. (Generally, heat recovery refers to the transfer of sensible heat, while energy recovery refers to the transfer of sensible heat and moisture; the examples discussed below are heat-recovery systems.) In comfort cooling applications, heat-recovery systems are used in colder climates for recovery of heat that is exhausted to the outside. Laboratories, hospitals, and certain industrial facilities are good applications for heat-recovery systems, as is anywhere high percentages of outside air are used. There are different ways to recover heat from exhausted air. Generally, the warm exhaust air will pass over an air-to-air heat exchanger (water coil, wheel, thermosyphon), and the heat is then transferred to another airstream that is physically separated from the exhaust air. In cold climates, the heat is typically transferred to a cold-air intake, reducing the amount of energy required to heat the cold, incoming air.

Data centers benefit from heat-recovery systems, but in a different way. A data center running at capacity will require cooling the entire year. Many data centers use outside air economization to reduce mechanical cooling energy when the outside conditions are favorable for introducing air into the data center. There are many design considerations, not covered in this article, when direct outside air economization is used for a data center, but from an energy-reduction standpoint, direct outdoor air economization can be one of the best solutions in the right climate. Since a direct economizer draws air directly from the outdoors, depending on the location of the data center, there is a chance of introducing particulate and gaseous contaminants, potentially deleterious to the ITE systems.

In situations like this, indirect economization can be used, where the outdoor and indoor airstreams are physically isolated from each other. AHUs with indirect heat recovery enable the use of outside air for heat exchange with the return air from the data center. How does this work? This is where data center design presents an opportunity that doesn’t exist in other types of facilities: air returning from the data center to the AHU at full load will be 20°F to 40°F warmer than in a typical comfort cooling application (for simplicity, I will use a return-air temperature of 105°F). Even at peak outdoor conditions, the heat-recovery system will be able to reduce the 105°F return air to approximately 100°F, so even in very hot climates, there will be a reduction in mechanical cooling energy for most of the year. In more temperate climates, it is possible to eliminate mechanical cooling entirely. (In climates with few hours that exceed the peak design temperature, a smaller, “trim” cooling system can be used, which uses much less energy and has a much lower cost than a fully mechanically cooled data center.) In these scenarios, “heat recovery” is a misnomer in that the heat is being transferred from indoors to outdoors.

Using hourly energy simulation is necessary to accurately predict the reduction in mechanical cooling achieved by the heat-recovery system. This analysis is very sensitive to the data center’s climate, so it is essential that the weather data used in the energy simulation is for the data center’s specific location.
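The core of the indirect heat-recovery calculation is the sensible effectiveness of the air-to-air heat exchanger: the return air is cooled toward the outdoor temperature in proportion to the effectiveness. The sketch below uses the article’s 105°F return air; the 0.70 effectiveness is an assumed value, as actual effectiveness varies with the device (coil, wheel, thermosyphon) and airflows.

```python
def hx_leaving_temp_f(t_return_f, t_outdoor_f, effectiveness):
    """Air-to-air heat exchanger on the return stream: the leaving (cooled)
    temperature moves toward outdoor temperature by the sensible effectiveness."""
    return t_return_f - effectiveness * (t_return_f - t_outdoor_f)

T_RETURN_F = 105.0  # return air per the article's example
EFF = 0.70          # assumed sensible effectiveness

for t_oa in (95.0, 75.0, 50.0):
    print(f"outdoors {t_oa:5.1f} deg F -> return cooled to "
          f"{hx_leaving_temp_f(T_RETURN_F, t_oa, EFF):.1f} deg F")
# outdoors  95.0 deg F -> return cooled to 98.0 deg F (useful even in hot weather)
# outdoors  75.0 deg F -> return cooled to 84.0 deg F
# outdoors  50.0 deg F -> return cooled to 66.5 deg F (mechanical cooling may be off)
```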

Figure 6a: Maximum fan motor power at 100% load in two different operating scenarios.
Figure 6b: Maximum fan motor power at 50% load in two different operating scenarios.

Data center standards and guidelines

There are many documents that cover the design and testing of fan systems used in data centers. There are also documents that focus on how fans must be tested to demonstrate conformance to a standard. For example, Air-Conditioning, Heating and Refrigeration Institute (AHRI) Standard 430: Performance Rating of Central Station Air-handling Unit Supply Fans defines how to rate central station AHU fans. It includes data on how to calibrate the testing and normalizing factors, such as pressure drop. The standard defines how the equipment must be rated so all manufacturers are using the same process. This “leveling of the playing field” is immensely important to engineers and operators to ensure a fair comparison between equipment brands.

While there are a number of standards and guidelines on the testing of fans, there are also documents dealing with analysis and design. One of these is ASHRAE 90.1-2016: Energy Standard for Buildings Except Low-Rise Residential Buildings, which provides instruction on how to calculate the allowable fan power in different situations.

  1. Tables 6.5.3.1-1 and -2 provide calculations for allowable fan power and pressure drop adjustment, respectively. The Fan Efficiency Grade (FEG), as defined by Air Movement and Control Association (AMCA) testing standards, must be 67 or higher.
  2. Table 10.8-1 defines the minimum nominal full-load efficiency for National Electrical Manufacturers Association (NEMA) motors ranging from 1 to 500 hp. This must be included since the motor efficiency impacts the overall fan power.
  3. Section G3.1.2.9 of Appendix G provides calculations for determining the total electrical power for the system fans. Appendix G, Performance Rating Method, offers an alternative path for minimum standard compliance.
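The fan power limits in Table 6.5.3.1-1 take the general form “allowable bhp = supply cfm × table coefficient + pressure-drop adjustments.” The sketch below demonstrates only the arithmetic of that form; the coefficient and the credit value shown are placeholders, and the actual numbers must be taken from Tables 6.5.3.1-1 and -2 of the standard.

```python
def allowable_fan_bhp(supply_cfm, bhp_per_cfm, pressure_credits=()):
    """Form of the ASHRAE 90.1 fan power limitation: a per-cfm allowance plus
    adjustments for devices such as extra filtration or heat-recovery coils.
    All coefficients here are placeholders, not values from the standard."""
    return supply_cfm * bhp_per_cfm + sum(pressure_credits)

# Placeholder coefficient and one placeholder credit for an added filter bank:
print(allowable_fan_bhp(50_000, 0.0013, pressure_credits=(2.0,)))  # 67.0 bhp
```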

Air handling systems have been in existence for more than a century and continue to evolve based on new applications, technology advancements, and developments in heat transfer, motor, filtration, and fan products. Understanding how the different components in an AHU operate (and the engineering concepts behind them) will improve the cohesiveness of the design and allow for a more holistic process from concept through construction, testing, and operation. Designing and operating a data center with this wider view improves the controllability and energy efficiency of the air handling systems. ITE hardware power input (and corresponding heat output) varies based on several factors, such as the type of data center, time of day/week/month, climate, hardware type, computing power, memory, and graphics capability. Having an air handling system that is flexible, adaptable, and modular will help ensure ITE systems operate at optimal compute power and energy use levels.

Fans are major components in AHUs and play an important role in ensuring an optimal environment. Accurately matching the airflow to the heat output of ITE systems not only reduces fan energy but also contributes to an ITE system’s computing goals by maintaining the required environmental conditions. Since data centers have very different supply and return temperatures and moisture content as compared with a comfort cooling application, there is a greater opportunity to recover energy by transferring heat from the data center to outdoors via air-to-air heat exchangers. Using this heat-recovery technology will reduce mechanical cooling energy considerably; in some climates, mechanical cooling can effectively be eliminated entirely. Fortunately, there are many technical resources to assist in the planning and design of air handling systems, including ASHRAE, AHRI, AMCA, and others. Since some of these organizations develop standards and guidelines that are driven by their membership, providing guidance and technical expertise is critical in the planning and manufacturing of the next generation of air handling systems.

For continuing education on this topic, see Data Center Air Handling Units on CFE Edu. Attendees may earn up to 1 AIA CES approved learning unit (LU) upon completion of the course. 


Author Bio: Bill Kosik is a senior energy engineer and an industry-recognized leader in energy efficiency for the built environment with an expertise in data centers. He is a member of the Consulting-Specifying Engineer editorial advisory board.