Air distribution’s importance in data processing environments
Data center design requires considerable thought and planning when it comes to air distribution. Following ASHRAE guidelines and standards can provide a roadmap for users.
Learning Objectives
- Understand ASHRAE TC9.9’s recommended and allowable envelopes for air-cooled IT equipment.
- Learn about common air distribution strategies used to supply air into a data center and cool server infrastructure.
- Review containment solutions that can aid in the establishment and maintenance of a data center environment and how they mitigate leakage.
Data center insights:
- Creating an optimal data center setup requires careful thought about how air is distributed throughout the space while keeping the overall system energy efficient.
- ASHRAE TC9.9 – Thermal Guidelines for Data Processing Environments is a good first resource for users looking to understand what is and isn’t recommended and required for server equipment.
Data center design is more than just cooling servers installed in a large spatial volume. Much thought, study and research has gone into establishing well-defined operating envelopes to optimize server life and maximize system energy efficiency. Without proper air distribution and containment strategies, facilities will fall outside of design tolerance, operate inefficiently and, in some cases, damage server equipment. The following article reviews the ASHRAE environmental classes, discusses common air distribution methods for delivering cool air at design conditions to server infrastructure and highlights how containment solutions can aid in maintaining those environments.
First published in 2004 by ASHRAE Technical Committee (TC) 9.9, ASHRAE TC9.9 – Thermal Guidelines for Data Processing Environments is one of the most referenced documents within the data center industry and establishes guidelines for data center environments to ensure reliable operation of information technology equipment (ITE), as well as standardizing design requirements for ITE manufacturers. These standards are commonly encountered and used in data center projects nationwide and everyone involved in data center design and operations should be aware of the operating envelopes established within.
The guidelines outlined in Chapter 2 play an important function in the data center space by unifying both ITE and data center design through providing standardized conditions for both. The standard defines several environmental classes for ITE, establishing both recommended and allowable entering air temperature and humidity ranges for each class (see Table 1).
This means ITE manufacturers can maximize their equipment’s reliability within a given environment and data center design professionals know the conditions around which to optimize their cooling systems. Data center designers often reference these environmental ranges to ensure maximum ITE reliability, while ITE manufacturers typically design and test their equipment against the allowable ranges for a given class to ensure it can operate under extreme conditions. Care should be exercised during the design process to ensure the data center or IT space is designed with the intended ITE classification in mind and that any client-mandated envelope requirements are upheld.
The environmental classes for air-cooled equipment established in ASHRAE TC9.9 are intended to broadly represent typical data center/IT environments.
While the recommended operating ranges are similar for all the air-cooled classes (with the exception of H1), there is much greater variation in the allowable ranges. Class A1 is intended to represent a tightly controlled mission-critical data center environment. Class A2, with its slightly looser range of allowable conditions, is intended to represent an IT room or similar space with some ability to condition air, but not necessarily to the level of a purpose-built data center.
Classes A3 and A4 were added in 2012 to reflect the industry’s drive for more free cooling and more energy-efficient cooling technologies, with the wider allowable ranges reflecting those desires. Class H1 represents high-density computing equipment, typically located in a separate area within a larger data center. This class of equipment typically requires lower entering temperatures to meet component requirements.
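To make the class structure concrete, the short Python sketch below checks an ITE inlet dry-bulb temperature against representative recommended and allowable ranges for classes A1 through A4. The numeric limits are typical of the published air-cooled classes but are shown here only for illustration; the values in Table 1 of the current edition (including the humidity and dew-point limits omitted here) govern.

```python
# Illustrative sketch only: dry-bulb envelopes representative of the published
# A1-A4 classes; verify against Table 1 of the current edition before use.
RECOMMENDED_C = (18.0, 27.0)          # recommended range shared by A1-A4
ALLOWABLE_C = {
    "A1": (15.0, 32.0),
    "A2": (10.0, 35.0),
    "A3": (5.0, 40.0),
    "A4": (5.0, 45.0),
}

def classify_inlet(temp_c: float, ite_class: str = "A1") -> str:
    """Classify an ITE inlet dry-bulb temperature against a class envelope."""
    lo, hi = ALLOWABLE_C[ite_class]
    if RECOMMENDED_C[0] <= temp_c <= RECOMMENDED_C[1]:
        return "within recommended envelope"
    if lo <= temp_c <= hi:
        return "allowable, but outside recommended envelope"
    return "outside allowable envelope"

print(classify_inlet(24.0, "A1"))   # within recommended envelope
print(classify_inlet(30.0, "A1"))   # allowable, but outside recommended envelope
```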
The recommended and allowable operating ranges for ITE have expanded since the document was first published, both by widening the ranges of existing environmental classes and by creating new classes. The driver for these changes was often a combination of pressure from data center operators for increased efficiency through greater economization and reduced mechanical cooling, along with results from studies investigating the effects of various conditions on ITE reliability, including electrostatic discharge at low humidity and corrosion-inducing contaminants in the environment.
While they serve as a baseline within the industry, the guidelines for each environmental class are not mandatory. The document provides commentary on operating outside the recommended ranges, noting it is a balance between energy savings and ITE operational reliability. Each owner/operator needs to find a solution congruent with their business needs.
It is important to note the ranges presented in ASHRAE TC9.9 are for air entering the equipment. Since it was first published, the document has been built around the idea that ITE inlet air temperature is the only temperature that matters for equipment functionality and reliability. The heated exhaust air leaving the ITE does not impact the performance or reliability of the ITE unless it interacts with the entering air. The primary tasks of the data center designer are the following:
1. Ensure an appropriate quantity of air required to cool the ITE is conditioned to the standards required by the ITE.
2. Ensure the air in step 1 reaches the inlet of all ITE.
3. Minimize the effect of the heated exhaust air leaving the ITE on the air in step 2 as much as possible, to maintain the required ITE operating envelope.
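As a back-of-the-envelope illustration of step 1, the sketch below estimates the supply airflow needed to remove a given ITE heat load using the sensible heat equation. The air properties, the 10 kW rack load and the 11 K temperature rise are illustrative assumptions, not values taken from the guideline.

```python
# Minimal sketch of step 1: estimating the supply airflow needed to remove a
# given ITE heat load at a chosen server temperature rise (sensible heat only).
# Air properties are nominal sea-level values; adjust for site altitude.
RHO_AIR = 1.2       # kg/m^3
CP_AIR = 1005.0     # J/(kg*K)

def required_airflow_m3s(it_load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) to absorb it_load_w at a delta_t_k rise."""
    return it_load_w / (RHO_AIR * CP_AIR * delta_t_k)

# Example: a 10 kW rack with an 11 K (about 20 F) rise across the servers
flow = required_airflow_m3s(10_000, 11.0)
print(f"{flow:.2f} m^3/s  (~{flow * 2118.88:.0f} CFM)")   # ~0.75 m^3/s, ~1,600 CFM
```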
Execute, maintain operating envelopes
Now that an overview of operating envelopes has been established, ways to design systems that establish and maintain these environmental envelopes must be discussed. To accomplish this, Chapter 5 introduces common air distribution and containment strategies. The approach consists of two parts: Distribute supply air to all server rack locations at, or close to, but not beyond the design thermal envelope; and separate server supply air (cool) and exhaust air (warm) pathways to preserve that thermal envelope.
By implementing these measures in a design, users can mitigate the generation of hot spots within the data center, reduce the chance of recirculation and/or premature mixing and ensure proper entering air conditions at the ITE. Not doing so will often lead to increased airflow requirements and higher fan power consumption. Incorporating these concepts into a design is often difficult if care is not taken when planning out the air distribution methods, server supply and exhaust pathways and ITE configurations within the space. However, through best practice and a few helpful tips, users can deploy a system that operates efficiently within the confines of the defined thermal envelope.
Part 1: Distribution of supply air to server infrastructure
Two main air distribution strategies will be the focus: Overhead and underfloor. Each option has advantages and disadvantages and the right choice is determined by many factors that will not be discussed here. For each method, common equipment types will be discussed and some high-level design considerations will be explained. Understanding the types of air distribution methods and the equipment options at one’s disposal is the next step in ensuring design supply air is distributed to server racks and the operating environment is maintained.
There are several ways to cool a data center. Smaller installations can be cooled with basic equipment provisions, but the focus here is on solutions for purpose-built, mission-critical systems.
Overhead distribution overview
Overhead distribution is often accomplished by means of ducting air from a unit either in the data center itself or remotely in the facility. Common unit types for this application include:
- Upflow computer room air handlers (CRAHs) – Chilled water-based.
- Upflow computer room air conditioners (CRACs) – Packaged direct expansion (DX) equipment with either air-cooled or water-cooled condensers.
- Packaged rooftop equipment – Many system types available (chilled water, DX, direct evaporative, indirect evaporative or a combination thereof).
Air leaves the unit of choice and travels via ductwork elevated above the finished floor to server rack inlet locations. Supply outlets are either positioned to discharge air downward or extended to a lower elevation to precisely deliver air. Velocity considerations vary with each option. If outlets are at high elevation, one must be cognizant of the air velocities exiting the opening to ensure cool air will make it to the server inlet. Too high and the air will shoot past the inlet, resulting in an insufficient quantity of air entering the equipment. Too low and the air may prematurely mix with server exhaust air, resulting in excessive inlet temperatures if containment measures are not pursued (more on this later). If supply outlets are extended downward, decreased velocities should be considered to minimize throw and allow the servers to efficiently ingest the supply air.
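The velocity trade-off described above can be sanity-checked with simple arithmetic: the average discharge velocity is the airflow divided by the outlet’s effective free area. The airflow and free-area values in the sketch below are illustrative assumptions; actual outlet selection should rely on manufacturer throw data.

```python
# Rough sketch of the velocity check described above: face velocity at a
# supply outlet is simply airflow divided by effective free area.
def face_velocity_mps(airflow_m3s: float, free_area_m2: float) -> float:
    """Average discharge velocity (m/s) at a supply outlet."""
    return airflow_m3s / free_area_m2

q = 0.75                           # m^3/s delivered toward one rack (illustrative)
for area in (0.10, 0.25, 0.50):    # effective free area of the outlet, m^2 (illustrative)
    print(f"free area {area:.2f} m^2 -> {face_velocity_mps(q, area):.1f} m/s")
```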
Overall, overhead distribution is effective but complicates multidisciplinary coordination when it comes time for construction. Telecommunications cable tray, electrical busway, power distribution conduit, fire alarm and fire protection piping are examples of trade infrastructure typically routed at elevation within the data center.
Underfloor distribution overview
Underfloor distribution is typically accomplished by means of supplying air into an underfloor cavity below a raised access floor. Common unit types for this application are as follows:
- Downflow computer room air handlers (CRAHs) – Chilled water-based.
- Downflow computer room air conditioners (CRACs) – Packaged direct expansion (DX) equipment with either air-cooled or water-cooled condensers.
The underfloor cavity is created by installing server infrastructure atop floor tiles elevated and supported by a structural grid. Air leaves the unit of choice in a downward direction, passes through the underfloor cavity and rises up through strategically placed perforated floor tiles that deliver air to server rack locations. Floor tiles come in many variations (damper, no damper, fan-assist, etc.) and with varying perforated free area. The beauty of underfloor distribution is that, when ideally deployed, the pressure distribution can be uniform, allowing for good air distribution effectiveness.
One major downside, commonly encountered in practice when reviewing older facilities, is that this large underfloor void offers ample volume for the installation of other trade utilities. As with overhead distribution, coordination may prove difficult if other disciplines opt to route their infrastructure through this void. More obstructions underfloor yield a highly variable and unpredictable underfloor pressure distribution (see Figure 1), which makes it difficult to predict how much air exits the floor tiles across the data center and how fast.
To better anticipate this concern, it is advisable to validate the design with computational fluid dynamics modeling to ensure the designer has a good understanding of the underfloor pressure distribution and the resulting airflow characteristics and conditions delivered to server racks.
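As a rough first check alongside (not in place of) CFD, the airflow through a perforated tile can be estimated with the orifice equation from the local underfloor static pressure. The discharge coefficient, tile free area and pressure in this sketch are illustrative assumptions.

```python
import math

# Rough first check (not a substitute for CFD): flow through a perforated
# tile estimated with the orifice equation, Q = Cd * A_open * sqrt(2*dP/rho).
RHO_AIR = 1.2   # kg/m^3

def tile_airflow_m3s(dp_pa: float, open_area_m2: float, cd: float = 0.65) -> float:
    """Airflow (m^3/s) through a tile at underfloor static pressure dp_pa (Pa)."""
    return cd * open_area_m2 * math.sqrt(2.0 * dp_pa / RHO_AIR)

# 600 mm x 600 mm tile with 25% free area at 12.5 Pa (~0.05 in. w.g.), illustrative
q = tile_airflow_m3s(12.5, 0.6 * 0.6 * 0.25)
print(f"{q:.2f} m^3/s (~{q * 2118.88:.0f} CFM)")   # roughly 0.27 m^3/s, ~570 CFM
```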
The importance of server rack positioning
Chapter 5 also touches on the importance of server rack positioning and relative location within the data center. Server racks typically are arranged in the following orientations:
- Front to rear – Air enters through the front, exits through the back.
- Front to top – Air enters through the front and exits through the top.
- Front to top and rear – Air enters through the front and exits through the top and rear.
Server racks are often positioned side by side within the data center. To optimize server performance and maintain targeted operating envelopes, server racks are arranged in rows positioned back-to-back and front-to-front. This way, cool supply air can be sent to server inlets through one aisle (cold aisle) and warm exhaust air can be collected and relegated to another (hot aisle). If server racks are instead positioned front to back, the exhaust air from one server rack mixes with the inlet air of the next, artificially elevating that rack’s inlet air temperature and introducing a deviation from the thermal envelope design. This approach is not recommended.
While proper server rack positioning can improve air distribution and help maintain the thermal operating environment, two problems remain: recirculation and subsequent mixing. Without a means to fully separate the supply and exhaust airstreams, recirculation and mixing will most certainly occur. Exhaust air from the server rack will make its way back into the space, where it will mix with incoming supply air. This mixing artificially increases the temperature of the supply air directed into the cold aisle and, in turn, the temperature of the air entering the ITE. If this occurs, the reliability of the ITE may be adversely affected.
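A simple flow-weighted mixing calculation shows why even modest recirculation matters: the effective inlet temperature is a blend of supply air and re-ingested exhaust. The supply and exhaust temperatures and recirculation fractions below are illustrative assumptions.

```python
# Simple energy-balance sketch of recirculation: the effective ITE inlet
# temperature is a flow-weighted blend of supply air and re-ingested exhaust.
def mixed_inlet_c(supply_c: float, exhaust_c: float, recirc_fraction: float) -> float:
    """Inlet temperature when recirc_fraction of intake air is rack exhaust."""
    return (1.0 - recirc_fraction) * supply_c + recirc_fraction * exhaust_c

supply, exhaust = 24.0, 35.0   # deg C; exhaust = supply + 11 K server rise (illustrative)
for f in (0.0, 0.1, 0.2, 0.3):
    print(f"recirculation {f:.0%}: inlet ~{mixed_inlet_c(supply, exhaust, f):.1f} C")
```

With these example numbers, roughly 30% recirculation is enough to push the inlet temperature past the upper end of the recommended envelope, which is why airstream separation is the focus of the next part.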
Part 2: Containment as the means to separate server airstreams
To mitigate alterations to the operating envelope by means of recirculation and mixing, containment strategies are deployed to physically separate server supply air from server exhaust air. There are two main types of containment strategies:
- Cold aisle containment – Containment of cool, server supply air (see Figure 2).
- Hot aisle containment – Containment of warm, server exhaust air (see Figure 3).
Fundamentally, containment is a physical barrier between airstreams. This barrier could be as simple as a wall of polymer-based curtains or as robust as a built-out volume adjacent to the server racks. The goal of containment is to establish a “control volume” for each airstream to pass through. By managing the two airstreams and keeping them separated, recirculation and environmental deviations are minimized and server performance and efficiency are enhanced. By integrating containment solutions into a design, the design operating envelope is more optimally maintained because the opportunity for alteration of supply air by means of mixing or recirculation is reduced – but not eliminated.
Leakage mitigation
Even with containment strategies in place, the opportunity for air leakage and envelope deviation remains. The containment strategies and distribution methods discussed are not 100% airtight. Because of this, there still exists a possibility that air can migrate from one airstream into another, raising the ITE inlet air temperature. Leakage presents itself in many ways; the three focused on here are leakage at the server rack, leakage within the containment solution and leakage in an underfloor distribution network.
Mitigating leakage at the rack level
In practice, data centers will always have open server positions within a server rack at one point in time or another. Whether it is an early server deployment within a newly built data center or routine maintenance and/or replacement of older hardware, open server locations will surface. Server racks typically contain many slots into which servers of varying heights can be slid, so the end user can mix and match server types based on their specific hardware needs.
It is critical these open server locations within a rack be blocked with specially designed blank-off panels to minimize open area for server exhaust air to exit the rack enclosure. This way, airstreams are more effectively separated and design supply air temperature variation is mitigated.
Mitigating leakage at the containment level
Depending on the containment strategy and construction, leakage rates will vary. Ensure containment solutions are as airtight as possible. Common points of leakage are as follows: containment panel connection points, penetrations for other trade infrastructure and fastener points of connection. Leakage can be mitigated at all mentioned locations with precautions such as proper gasketing, installation of fastener grommets and sealing around penetrations.
Mitigating leakage in an underfloor distribution network
Leakage often presents itself in underfloor distribution networks. It is quite common to see leakage at improperly installed floor tiles or at penetrations within the floor; penetrations are often cut into floor tiles for telecommunications wiring and/or electrical conduit to pass through. These issues are remedied by installing proper gasketing at the floor tile/structural support grid interface and installing sealing grommets at all floor penetrations. As a rule of thumb, ASHRAE Applications 2019 (see Section 20.15) recommends that leakage not exceed 2% of the system design flow rate requirement.
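Applying that rule of thumb is straightforward arithmetic, as the short sketch below shows; the 50,000 CFM design flow is purely an illustrative assumption.

```python
# Quick check of the rule of thumb cited above: bypass/leakage airflow held to
# no more than 2% of the system design flow. Design flow value is illustrative.
MAX_LEAKAGE_FRACTION = 0.02

def leakage_ok(design_flow_cfm: float, measured_leakage_cfm: float) -> bool:
    """True if measured leakage is within the 2% rule-of-thumb budget."""
    return measured_leakage_cfm <= MAX_LEAKAGE_FRACTION * design_flow_cfm

design = 50_000   # CFM, illustrative system design flow
print(f"Leakage budget: {MAX_LEAKAGE_FRACTION * design:.0f} CFM")
print(leakage_ok(design, 800))     # True  (800 CFM <= 1,000 CFM)
print(leakage_ok(design, 1_500))   # False
```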
Maintain efficiency, optimization within data centers
ASHRAE TC9.9 is one of the governing documents used in the data center design industry. Understanding the recommended and allowable operating envelopes and how to execute and maintain them is critical to the design of these facilities and their systems. By applying these environmental thresholds, incorporating appropriate distribution methods and containment strategies and being mindful of the common contributors to leakage, a data center designer can ensure a proper environmental envelope is maintained while minimizing energy usage and negative impacts to ITE.
ESD Global is a CFE Media content partner.